IBM’s brain-inspired analogue chip dramatically cuts AI energy use

WHY THIS MATTERS IN BRIEF

AI is consuming ever-larger quantities of energy. That’s unsustainable, so researchers are trying to find solutions to the problem.

ChatGPT, DALL-E, Stable Diffusion, and other generative AIs have taken the world by storm. They create fabulous poetry and images. They’re seeping into every nook of our world, from marketing to writing legal briefs and drug discovery. They seem like the poster child for a man-machine mind meld success story.

But under the hood, things are looking less peachy. These systems are massive energy hogs, requiring data centers that spit out thousands of tons of carbon emissions – further stressing an already volatile climate – and suck up billions of dollars. As the neural networks become more sophisticated and more widely used, energy consumption is likely to skyrocket even more.

Plenty of ink has been spilled on generative AI’s carbon footprint. Its energy demand could be its downfall, hindering development as it further grows. Using current hardware, generative AI is “expected to stall soon if it continues to rely on standard computing hardware,” said Dr. Hechen Wang at Intel Labs.

So, it’s high time we build sustainable AI.

This week, on the back of a similar announcement from Microsoft, a study from IBM took a practical step in that direction. The researchers created a 14-nanometer analog computer chip packed with 35 million memory units. Unlike current chips, computation happens directly within those units, nixing the need to shuttle data back and forth – in turn saving energy.

Data shuttling can increase energy consumption anywhere from 3 to 10,000 times above what’s required for the actual computation, said Wang.

The chip was highly efficient when challenged with two speech recognition tasks. One, Google Speech Commands, is small but practical. Here, speed is key. The other, Librispeech, is a mammoth system that helps transcribe speech to text, taxing the chip’s ability to process massive amounts of data.

When pitted against conventional computers, the chip was just as accurate but finished the job faster and with far less energy, using less than a tenth of what’s normally required for some tasks.

“These are, to our knowledge, the first demonstrations of commercially relevant accuracy levels on a commercially relevant model… with efficiency and massive parallelism” for an analog chip, the team said.

This is hardly the first analog chip. However, it pushes the idea of neuromorphic computing into the realm of practicality – a chip that could one day power your phone, smart home, and other devices with an efficiency near that of the brain.

Um, what? Let’s back up.

Current computers are built on the Von Neumann architecture. Think of it as a house with multiple rooms. One, the Central Processing Unit (CPU), analyzes data. Another stores memory.

For each calculation, the computer shuttles data back and forth between those two rooms, which takes time, wastes energy, and drags down efficiency.

The brain, in contrast, combines both computation and memory into a studio apartment. Its mushroom-like junctions, called synapses, both form neural networks and store memories at the same location. Synapses are highly flexible, adjusting how strongly they connect with other neurons based on stored memory and new learnings – a property called “weights.” Our brains quickly adapt to an ever-changing environment by adjusting these synaptic weights.
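
In neural-network terms, a weight is just a number that scales an incoming signal before everything gets summed up. A minimal sketch of a single artificial “neuron” (purely illustrative, and nothing to do with IBM’s actual code) shows the role those stored values play:

```python
import numpy as np

# Purely illustrative: a single artificial "neuron" computes a weighted sum
# of its inputs and passes it through a non-linearity. Learning means nudging
# the weights, just as a synapse strengthens or weakens its connection.
def neuron_output(inputs, weights, bias=0.0):
    weighted_sum = np.dot(weights, inputs) + bias   # stored weights scale the inputs
    return np.tanh(weighted_sum)                    # simple activation function

inputs = np.array([0.2, 0.9, 0.5])     # incoming signals
weights = np.array([0.8, -0.3, 1.1])   # the network's "memory"
print(neuron_output(inputs, weights))  # one number: this neuron's response
```

On a conventional chip those weights live in separate memory and have to be fetched for every multiplication; in the brain, and in an analog chip, the weight sits exactly where the multiplication happens.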

IBM has been at the forefront of designing analog chips that mimic brain computation. A breakthrough came in 2016, when they introduced a chip based on a fascinating material usually found in rewritable CDs. The material changes its physical state and shape-shifts from a goopy soup to crystal-like structures when zapped with electricity – akin to a digital 0 and 1.

Here’s the key: the material can also exist in a hybrid state. In other words, similar to a biological synapse, the artificial synapse can encode a myriad of different weights – not just binary – allowing it to accumulate multiple calculations without having to move a single bit of data.
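
One way to picture that hybrid state is as a weight snapped to one of many conductance levels rather than to just two. A rough sketch, with made-up level counts rather than the chip’s real specification:

```python
import numpy as np

# Rough illustration: map a trained weight in [-1, 1] onto a limited number of
# analog conductance levels. Two levels would be a plain digital bit; more
# levels let one device hold a richer, synapse-like value. Level counts are
# invented for the example, not taken from IBM's chip.
def quantize_weight(w, levels=16, w_min=-1.0, w_max=1.0):
    w = np.clip(w, w_min, w_max)
    step = (w_max - w_min) / (levels - 1)
    return w_min + round((w - w_min) / step) * step

print(quantize_weight(0.37, levels=2))    # binary device: crude approximation (1.0)
print(quantize_weight(0.37, levels=16))   # multi-level device: much closer (~0.33)
```

With only two levels you get an ordinary digital bit; with more, a single device starts to behave like a tunable synapse.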

The new study built on previous work by also using phase-change materials. The basic components are “memory tiles.” Each is jam-packed with thousands of phase-change memory devices arranged in a grid. The tiles readily communicate with each other.

Each tile is controlled by a programmable local controller, allowing the team to tweak the component – akin to a neuron – with precision. The chip further stores hundreds of commands in sequence, creating a black box of sorts that allows them to dig back in and analyze its performance.

Overall, the chip contained 35 million phase-change memory structures. The connections amounted to 45 million synapses – a far cry from the human brain, but very impressive on a 14-nanometer chip.
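
Conceptually, each tile works like a crossbar: the stored conductances are the weights, an input arrives as a set of voltages, and the currents summing along each output line are the results of a matrix-vector multiplication, computed in place. A toy simulation of one tile, with idealised behaviour and an invented noise figure, gives the flavour:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of one in-memory "tile": a matrix of conductances (the stored
# weights) multiplies an input vector in a single analog step. Real devices
# drift and are noisy, so we add a crude Gaussian perturbation; the noise
# level here is made up for illustration.
def analog_tile_matvec(weights, inputs, noise_std=0.02):
    noisy_weights = weights + rng.normal(0.0, noise_std, size=weights.shape)
    return noisy_weights @ inputs   # currents summing along each output line

weights = rng.uniform(-1, 1, size=(4, 8))   # 4 outputs x 8 inputs, tiny example
inputs = rng.uniform(0, 1, size=8)

print("ideal :", weights @ inputs)
print("analog:", analog_tile_matvec(weights, inputs))
```

The point is that the whole multiply-and-sum happens where the weights already live, which is exactly the data shuttling the Von Neumann design can’t avoid.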

These mind-numbing numbers present a problem for initializing the AI chip: there are simply too many parameters to search through. The team tackled the problem with what amounts to an AI kindergarten, pre-programming synaptic weights before computations begin, a bit like seasoning a new cast-iron pan before cooking with it.

They “tailored their network-training techniques with the benefits and limitations of the hardware in mind,” and then set the weights for the most optimal results, explained Wang, who was not involved in the study.
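
Roughly speaking, that means exposing the network to the hardware’s quirks while it is still learning, so the weights it settles on keep working once they’re programmed into imperfect analog devices. A heavily simplified sketch of the idea, not the study’s actual training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified idea of hardware-aware training: during each forward pass, perturb
# the weights the way noisy analog devices would, so gradient descent learns
# weights that tolerate that noise. Toy linear model with invented numbers.
def noisy_forward(w, x, noise_std=0.05):
    return (w + rng.normal(0.0, noise_std, size=w.shape)) @ x

def train_step(w, x, target, lr=0.01):
    pred = noisy_forward(w, x)
    grad = 2 * (pred - target) * x      # gradient of squared error w.r.t. w
    return w - lr * grad

w = rng.normal(size=3)                  # random starting weights
for _ in range(1000):
    x = rng.uniform(-1, 1, size=3)
    w = train_step(w, x, target=x.sum())   # learn to approximate a simple sum
print(w)   # ends up close to [1, 1, 1] despite the injected noise
```

Because the noise is baked into training, the final weights are the ones that tolerate it best.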

It worked out. In one initial test, the chip readily churned through 12.4 trillion operations per second for each watt of power. The energy consumption is “tens or even hundreds of times lower than for the most powerful CPUs and GPUs,” said Wang.
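
To put that in perspective, a watt is one joule per second, so 12.4 trillion operations per watt-second pencils out to a vanishingly small energy budget per operation:

```python
# Back-of-envelope check: energy per operation at 12.4 trillion ops per watt-second.
ops_per_joule = 12.4e12
femtojoules_per_op = 1 / ops_per_joule * 1e15
print(f"{femtojoules_per_op:.1f} femtojoules per operation")   # roughly 80.6 fJ
```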

The chip nailed a core computational process underlying deep neural networks with just a few classical hardware components in the memory tiles. In contrast, traditional computers need hundreds or thousands of transistors (a basic unit that performs calculations).

The team next challenged the chip to two speech recognition tasks. Each one stressed a different facet of the chip.

The first test measured speed on a relatively small database. Using the Google Speech Commands database, the task required the AI chip to spot 12 keywords in a set of roughly 65,000 clips of thousands of people speaking 30 short words – “small” is relative in the deep learning universe. When using an accepted benchmark – MLPerf – the chip performed seven times faster than in previous work.

The chip also shone when challenged with a large database, Librispeech. The corpus contains over 1,000 hours of read English speech commonly used to train AI for parsing speech and automatic speech-to-text transcription.

Overall, the team used five chips to eventually encode more than 45 million weights using data from 140 million phase-change devices. When pitted against conventional hardware, the chip was roughly 14 times more energy-efficient – processing nearly 550 samples every second per watt of energy consumption – with an error rate a bit over 9 percent.

Although impressive, analog chips are still in their infancy. They show “enormous promise for combating the sustainability problems associated with AI,” said Wang, but the path forward requires clearing a few more hurdles.

One factor is finessing the design of the memory technology itself and its surrounding components – that is, how the chip is laid out. IBM’s new chip does not yet contain all the elements needed. A next critical step is integrating everything onto a single chip while maintaining its efficacy.

On the software side, we’ll also need algorithms specifically tailored to analog chips, and software that readily translates code into language that machines can understand. As these chips become increasingly commercially viable, developing dedicated applications will keep the dream of an analog chip future alive.

“It took decades to shape the computational ecosystems in which CPUs and GPUs operate so successfully,” said Wang. “And it will probably take years to establish the same sort of environment for analog AI.”
