Revolutionary algorithm will soon let scientists simulate the entire human brain

WHY THIS MATTERS IN BRIEF

Today we can simulate about 10 percent of the human brain in a supercomputer, but a whole-brain simulation would revolutionise neuroscience, unlock the secrets of the brain, and lead to breakthrough new cognitive treatments.

 

The renowned physicist Dr. Richard Feynman once said, “What I cannot create, I do not understand. Know how to solve every problem that has been solved,” and an increasingly influential subfield of neuroscience has taken Feynman’s words to heart. To theoretical neuroscientists, the key to understanding how intelligence works is, apparently, to recreate it inside a supercomputer, neuron by neuron. They’re hoping a new algorithm they’ve developed will soon let them reconstruct the neural processes that produce a thought, a memory, or a feeling, and ultimately allow them to create a digital simulation of the entire human brain, something that’s never been done before.

 

With a digital brain in place, they’ll be able to test current theories of cognition and explore the parameters that lead to a “malfunctioning mind” and conditions like dementia. As philosopher Dr. Nick Bostrom at the University of Oxford argues, simulating the human mind is perhaps one of the most promising, albeit laborious, ways to recreate and surpass human-level ingenuity. But there’s just one problem: traditional supercomputers still can’t handle the massively parallel nature of our brains. Squashed into our three-pound organ are roughly 86 billion interconnected neurons and trillions of synapses, and that’s a lot for a computer, even a supercomputer, to get its circuits around.

Even the most powerful supercomputers today balk at that scale. So far, for example, machines like the ultra-powerful K computer at the Advanced Institute for Computational Science in Kobe, Japan, have only been able to simulate at most 10 percent of the brain’s neurons and their synapses.

This is partially due to the software these systems are running. As computational hardware inevitably gets faster, it’s the algorithms that increasingly become the bottleneck holding us back from realising the goal of 100 percent whole-brain simulation.

This month, though, an international team completely revamped the structure of a popular brain simulation algorithm, developing a powerful new algorithm that dramatically slashes computing time and memory use. Better yet, it’s compatible with a wide range of computing hardware, from laptops to supercomputers, so when future exascale supercomputers hit the scene, which should be in the next year or two, and which are projected to be 10 to 100 times more powerful than today’s top performers, the algorithm will be able to run on them immediately and “do its thing.”

 

“With this new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers,” said study author Jakob Jordan from the Jülich Research Center in Germany, who published the work in Frontiers in Neuroinformatics.

“It’s a decisive step towards creating the technology to achieve simulations of brain-scale networks,” the authors said.

Current supercomputers are composed of hundreds of thousands of subdomains called nodes, and each node has multiple processing centers that can each support a handful of virtual neurons and their connections. The main challenge in brain simulation is how to effectively represent millions of neurons and their connections across these processing centers in a way that cuts down on time and power.
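To picture how a network gets carved up, here is a minimal Python sketch of one simple way virtual neurons could be spread across compute nodes. The round-robin scheme and the names NUM_NODES and node_of are illustrative assumptions for this example, not details taken from the study or from any particular simulator.

```python
# Illustrative sketch only: spread virtual neurons across compute nodes
# by round-robin assignment of their global IDs. All names are made up
# for this example.

NUM_NODES = 4          # pretend we have 4 compute nodes
NUM_NEURONS = 10       # a toy network; real runs involve billions

def node_of(neuron_id: int) -> int:
    """Round-robin placement: neuron 0 -> node 0, neuron 1 -> node 1, and so on."""
    return neuron_id % NUM_NODES

# Each node ends up "housing" its own slice of the network.
for node in range(NUM_NODES):
    housed = [n for n in range(NUM_NEURONS) if node_of(n) == node]
    print(f"node {node} houses neurons {housed}")
```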

One of the most popular simulation algorithms today is the so-called “Memory Usage Model.” Before scientists simulate changes in their simulated neuronal networks, they first need to create all the neurons and their connections within the virtual brain using the algorithm. But here’s the rub: for any neuronal pair, the model stores all information about their connectivity in the node that houses the receiving, or postsynaptic, neuron.

In other words, the presynaptic neuron, which sends out electrical impulses, is essentially shouting into the void: the algorithm has to figure out where a particular message came from by looking solely at the receiving neuron and the data stored within its individual node.

 

It sounds like a strange setup, but the model allows all the nodes to construct their particular portion of the neural network in parallel. This dramatically cuts down boot-up time, which is partly why the algorithm is so popular, but as you’ve probably guessed, it comes with severe scaling problems. The sender node broadcasts its message to every receiver node, which means each receiver node needs to sort through every single message in the network, even ones meant for neurons housed elsewhere. A huge portion of the messages arriving at each node simply get thrown away because the addressee neuron isn’t present there.

Imagine overworked post office staff skimming an entire country’s worth of mail to find the few letters that belong to their jurisdiction and you get the picture. It’d be crazily inefficient, but that’s pretty much what goes on in today’s traditional Memory Usage Model.

The problem gets worse as the simulated neuronal network grows. Each node needs to dedicate memory storage space to an “address book” listing all of its neural inhabitants and their connections, and at the scale of billions of neurons it shouldn’t come as any surprise that this address book becomes a huge memory hog.
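To make the inefficiency concrete, here is a toy Python caricature of that broadcast-and-filter scheme. It is a sketch built on assumed names and a made-up round-robin placement, not the actual Memory Usage Model code.

```python
# Toy caricature of the broadcast-and-filter scheme: every node receives
# every spike in the network and discards the ones addressed to neurons
# it does not house. All names and structures are illustrative.

NUM_NODES = 4

def node_of(neuron_id: int) -> int:
    return neuron_id % NUM_NODES        # assumed round-robin placement

# Connection list replicated in every node's "address book":
# (sender, receiver) pairs for the whole network, so the per-node memory
# footprint grows with the total network size.
connections = [(0, 5), (0, 9), (1, 6), (2, 7), (3, 4)]

def deliver_broadcast(spiking_neuron: int) -> None:
    """The sender broadcasts to ALL nodes; each node filters locally."""
    targets = [r for (s, r) in connections if s == spiking_neuron]
    for node in range(NUM_NODES):
        # Keep only the targets this node actually houses...
        local = [r for r in targets if node_of(r) == node]
        # ...and throw the rest away.
        discarded = len(targets) - len(local)
        print(f"node {node}: delivers to {local}, discards {discarded} message(s)")

deliver_broadcast(0)   # only node 1 houses neurons 5 and 9; the rest discard everything
```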

The team hacked this problem by essentially adding a zip code to the algorithm, and here’s how it works…

 

The receiver nodes contain two blocks of information. The first is a database that stores data about all the sender neurons that connect to the node. Because synapses come in several sizes and types that differ in their memory consumption, this database further sorts its information based on the type of synapse formed by the neurons in the node.

This setup already differs dramatically from its predecessor, in which connectivity data is sorted by the incoming neuronal source rather than by synapse type, and because of it the node no longer has to maintain an address book.

“The size of the data structure is therefore independent of the total number of neurons in the network,” the authors explained.

The second chunk stores data about the actual connections between the receiver node and its senders. Like the first chunk, it organises data by the type of synapse; within each synapse type, it then separates the data by its source, the sender neuron.

In this way, the algorithm is far more specific than its predecessor: rather than storing all connection data in every node, the receiver nodes only store data relevant to the virtual neurons housed within them.
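As a rough illustration, the reorganised receiver-side storage might look something like the Python sketch below, grouping data by synapse type first and by sender second. The dictionary shapes and names are assumptions made for the sake of the example, not the authors’ actual data structures.

```python
# Hedged sketch of the receiver-side layout described above: data is grouped
# by synapse type first, then by sender, and only connections whose targets
# live on this node are stored at all. Names and shapes are illustrative.

receiver_node = {
    # Block 1: per-synapse-type bookkeeping about incoming senders. Its size
    # depends on the synapse types used locally, not on the total number of
    # neurons in the network.
    "incoming": {
        "excitatory": {"senders": {0, 3}},
        "inhibitory": {"senders": {1}},
    },
    # Block 2: the actual local connections, grouped first by synapse type
    # and then by source (sender) neuron.
    "connections": {
        "excitatory": {0: [5, 9], 3: [5]},   # sender -> locally housed targets
        "inhibitory": {1: [9]},
    },
}

def deliver_local(syn_type: str, sender: int) -> None:
    """Look up only the targets this node actually houses."""
    targets = receiver_node["connections"].get(syn_type, {}).get(sender, [])
    print(f"{syn_type} spike from neuron {sender} -> local targets {targets}")

deliver_local("excitatory", 0)   # -> local targets [5, 9]
```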

 

The team also gave each sender neuron a target address book. During transmission, the data is broken up into chunks, with each chunk carrying a zip code of sorts directing it to the correct receiving nodes. Rather than a computer-wide message blast, the data is confined to the receiver nodes it’s actually supposed to reach. And the modifications worked beautifully.
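A hedged sketch of that sender-side routing idea, again using made-up names and a toy round-robin placement rather than the authors’ implementation, might look like this:

```python
# Illustrative sketch of the sender-side "zip code": each sender keeps a small
# routing table listing only the nodes that house its targets, so a spike is
# sent to those nodes and to no others. All names here are assumptions.

NUM_NODES = 4

def node_of(neuron_id: int) -> int:
    return neuron_id % NUM_NODES        # assumed round-robin placement

# Built once, when the network is constructed: sender -> set of target nodes.
connections = [(0, 5), (0, 9), (1, 6), (2, 7), (3, 4)]
routing_table: dict[int, set[int]] = {}
for sender, receiver in connections:
    routing_table.setdefault(sender, set()).add(node_of(receiver))

def deliver_routed(spiking_neuron: int) -> None:
    """Send the spike only to the nodes listed in the sender's routing table."""
    for node in sorted(routing_table.get(spiking_neuron, set())):
        print(f"spike from neuron {spiking_neuron} sent only to node {node}")

deliver_routed(0)   # neurons 5 and 9 both live on node 1, so a single targeted send
```

Compared with the broadcast sketch earlier, each spike now touches only the handful of nodes that actually need it.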

In a series of tests, the new algorithm performed much better than its predecessors in both scalability and speed. On the supercomputer JUQUEEN in Germany, it ran 55 percent faster than previous models on a random neural network, mainly thanks to its streamlined data transfer scheme.

At a network size of half a billion neurons, for example, simulating one second of biological events took about five minutes of JUQUEEN runtime using the new algorithm, whereas its predecessor clocked in at six times that.

“This really brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes, within our reach,” said study author Dr. Markus Diesmann. And, as expected, several scalability tests revealed that the new algorithm is far more proficient at handling large networks, reducing the time it takes to process tens of thousands of data transfers by roughly threefold.

 

“The novel [approach] profits from sending only the relevant [simulated synaptic] spikes to each process,” the authors concluded, “and because computer memory is now uncoupled from the size of the network, the algorithm is poised to tackle brain-wide simulations.”

Revolutionary as it is, the team notes that a lot more work remains to be done. For one thing, mapping the structure of actual neuronal networks onto the topology of the individual supercomputer nodes should further streamline data transfer; for another, brain simulation software needs to regularly save its progress so that, in the case of a computer crash, the simulation doesn’t have to start all over again.
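That second point, checkpointing, is a standard trick in long-running computations. A minimal, generic Python sketch, not tied to any brain simulation package, might look like this:

```python
# Minimal checkpointing sketch: periodically write the simulation state to
# disk so that a crash only costs the time since the last save. Generic
# Python, not part of any brain-simulation package.

import pickle

def run_with_checkpoints(state, total_steps, checkpoint_every, path="checkpoint.pkl"):
    for step in range(total_steps):
        state["t"] = step                    # stand-in for one simulation step
        if (step + 1) % checkpoint_every == 0:
            with open(path, "wb") as f:      # overwrite the previous checkpoint
                pickle.dump(state, f)
    return state

# After a crash, reload the last checkpoint instead of starting from scratch:
#   with open("checkpoint.pkl", "rb") as f:
#       state = pickle.load(f)

run_with_checkpoints({"t": 0}, total_steps=1000, checkpoint_every=100)
```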

“Now the focus lies on accelerating simulations in the presence of various forms of network plasticity,” the authors concluded. Once that’s solved, we’ll soon be able to realise whole-brain simulation within a computer system, and a true “digital brain,” one that will help us revolutionise neuroscience and unlock the mysteries of the human brain, will be born.
