Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Memristors work like human synapses and are a building block of highly energy-efficient neuromorphic computers, which makes them perfect for running AI at the edge.
Recently I talked about how a new computer chip from MIT will, literally, put the equivalent of a human brain in your pocket. Now, thanks to new research from France, the same technology MIT used, a low-power and non-volatile device called the memristor, has shown it could solve one of Artificial Intelligence’s biggest problems: running machine learning algorithms at the very edge of the network, where both compute resources and energy are scarce. That would open up millions of new use cases for AI and help unleash it on the world in earnest.
According to the new research, memristors are great at efficiently tackling AI medical diagnosis problems, an encouraging development that suggests they could also tackle applications in other fields, especially low-power or network “edge” applications. This may be, the researchers say, because memristors artificially mimic some of the properties of human synapses.
Memristors, or memory resistors, are a kind of building block for electronic circuits that scientists predicted roughly 50 years ago but only created for the first time a little more than a decade ago. These components, also known as Resistive Random Access Memory (RRAM) devices, are essentially highly energy-efficient electric switches that can remember whether they were toggled on or off even after their power is turned off. As such, they resemble human synapses, the links between neurons in the human brain, whose electrical conductivity strengthens or weakens depending on how much electrical charge has passed through them in the past.
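To make that synapse analogy concrete, here is a minimal toy model, not taken from the research itself: a hypothetical `ToyMemristor` class whose conductance depends on the charge that has flowed through it and persists between "power cycles", since it is just stored state. The conductance values and update rule are illustrative assumptions, not real device physics.

```python
class ToyMemristor:
    """Toy model of a memristor: a two-terminal switch whose
    conductance depends on the charge that has passed through it,
    and which keeps that conductance when power is removed
    (non-volatility is modelled simply as persistent state)."""

    def __init__(self, g_min=1e-6, g_max=1e-4):
        self.g_min = g_min   # low-conductance "off" state (siemens, assumed)
        self.g_max = g_max   # high-conductance "on" state (siemens, assumed)
        self.state = 0.0     # internal state in [0, 1]

    def apply_charge(self, q):
        """Passing charge strengthens (q > 0) or weakens (q < 0) the
        device, like a synapse whose weight depends on past activity."""
        self.state = min(1.0, max(0.0, self.state + q))

    @property
    def conductance(self):
        # Conductance interpolates between the off and on states.
        return self.g_min + self.state * (self.g_max - self.g_min)


m = ToyMemristor()
m.apply_charge(0.5)   # a "write": the device remembers this afterwards
```

Reading `m.conductance` later, with no power applied in between, would return the same value, which is the property that lets a memristor both store and compute with the same element.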
Memristors can act like artificial neurons capable of both computing and storing data and that’s what makes them so attractive as a technology. As such, researchers have suggested memristors could potentially greatly reduce the energy and time lost in conventional computers shuttling data back and forth between processors and memory. The devices could also work well within neural networks, which are machine learning systems that use synthetic versions of synapses and neurons to mimic the process of learning in the human brain.
One challenge with developing applications for memristors is the randomness found in these devices. The level of electrical resistance or conductivity seen in memristors depends on a handful of atoms linking up two electrodes, making it difficult to control their electrical properties from the outset, says study lead author Thomas Dalgaty, an electrical engineer at the University of Grenoble.
Now Dalgaty and his colleagues have developed a way to harness this randomness for machine learning applications. They detailed their findings this month in the journal Nature Electronics.
Memristors are programmed by cycling through high-conductance on states and low-conductance off states. Usually the level of electrical conductivity seen in memristors can vary between one on state and the next due to intrinsic random processes within the devices.
However, if memristors are cycled on and off enough, the electrical conductivity of each memristor follows a pattern – “a bell curve,” Dalgaty says. The scientists revealed they could implement an algorithm known as Markov chain Monte Carlo sampling that could actively exploit this predictable behavior to solve a number of machine-learning tasks.
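The idea of exploiting that bell-curve randomness can be sketched in a few lines. In the sketch below, which is an illustration under assumptions rather than the authors' actual circuit, a `memristor_sample` function stands in for a programming cycle: each call draws an on-state conductance from a Gaussian around a target value, exactly the kind of "free" randomness the hardware provides. A simple Metropolis (Markov chain Monte Carlo) loop then uses those draws as proposals to infer a parameter from data; the toy task and the 0.1 spread are invented for the example.

```python
import math
import random

random.seed(0)

def memristor_sample(mu, sigma=0.1):
    """Stand-in for one memristor programming cycle: the resulting
    on-state conductance is a draw from a bell curve around the
    target value mu. In hardware this randomness comes for free."""
    return random.gauss(mu, sigma)

# Toy inference task (assumed for illustration): estimate the mean
# of some observed measurements using MCMC.
data = [1.8, 2.1, 2.0, 1.9, 2.2]

def log_posterior(theta):
    # Gaussian likelihood with unit variance and a flat prior.
    return -0.5 * sum((x - theta) ** 2 for x in data)

theta = 0.0
samples = []
for _ in range(5000):
    # Proposal step driven by the device's intrinsic randomness.
    proposal = memristor_sample(theta)
    # Metropolis acceptance rule: always accept uphill moves,
    # sometimes accept downhill ones.
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

# Discard burn-in, then average; the estimate should land near 2.0,
# the mean of the data.
estimate = sum(samples[1000:]) / len(samples[1000:])
```

The point of the sketch is the division of labour: the memristor's physics supplies the random draws that the Metropolis algorithm needs, so the chip does not have to generate and move random numbers through a separate digital pipeline.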
When compared with the performance of conventional digital CMOS electronics, the researchers’ memristor arrays achieved a stunning five-order-of-magnitude reduction in energy use. This, Dalgaty says, is because the memristors did not need to shuffle data back and forth between processors and memory. For context, that 100,000-fold discrepancy is equivalent to “the difference in height between the Burj Khalifa, the tallest building in the world, and a coin,” he explains.
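A quick sanity check of that analogy, using round figures I am assuming (the Burj Khalifa at 828 m, a coin about a centimetre across), confirms the ratio really is in the neighbourhood of five orders of magnitude:

```python
import math

burj_khalifa_m = 828.0   # height of the Burj Khalifa in metres
coin_m = 0.01            # a coin is on the order of a centimetre (assumed)

ratio = burj_khalifa_m / coin_m
orders = math.log10(ratio)   # roughly 5, i.e. about five orders of magnitude
```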
One potentially exciting application for memristors would be devices capable of learning, adapting and operating at the edge of a network, also known as edge computing, where low-power devices like embedded systems, smart home gear and Internet of Things nodes sometimes reside. Indeed, Dalgaty says, memristors could help make edge learning devices a reality.
“Currently edge learning is not possible because the energy required to perform machine learning [at the edge of the network] with existing hardware is far greater than the energy that is available at the edge,” he explains. “Edge learning [using memristors] … can potentially open up completely new application domains that were not possible before.”
For example, the researchers used an array made of 16,384 memristors to detect heart rhythm anomalies from electrocardiogram recordings, reporting a better detection rate than a standard neural network based on conventional, non-memristor electronics. The team also used their array to solve image recognition tasks such as diagnosing cancerous breast-tissue samples.
Potential future edge learning memristor applications might include implanted medical early-warning systems that can adapt to a patient’s state as it changes over time.
“We are looking towards these really energy-constrained edge applications that maybe don’t or can’t exist yet because of energy [restrictions],” Dalgaty says.
The next big challenge, Dalgaty says, “will be putting all of this functionality together onto a single integrated chip that can be applied outside of the laboratory.” It may take a few years before such a chip exists, he says.