Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Future computers and AIs will likely be biological and organic, and this breakthrough could play a role in the future of computing in a very weird way.
Recently I talked about an Artificial Intelligence (AI) that learned to play 1980s video games, and generally I talk about all kinds of weird sci-fi-like innovations, including turning humans into supercomputers, organic computers, the first molecular assemblers, new kinds of DNA storage, DNA AIs, liquid computers, synthetic alien lifeforms, and much more. But this is possibly by far the weirdest piece of research I’ve seen in a while.
Living brain cells in a dish can learn to play the video game Pong when they are placed in what researchers describe as a “virtual game world”.
“We think it’s fair to call them cyborg brains,” says Brett Kagan, chief scientific officer of Cortical Labs, who leads the research.
Many teams around the world have been studying networks of neurons in dishes, often growing them into brain-like organoids. But this is the first time that mini-brains have been found to perform goal-directed tasks, says Kagan.
The “DishBrains” being created by Kagan and his colleagues each consist of between about 800,000 and 1 million living brain cells – roughly equivalent to a cockroach brain, says Kagan. Some contain mouse cells taken from embryonic brains while others contain human brain cells derived from stem cells.
The cells are grown on top of microelectrode arrays that can both stimulate the cells and read their activity. To simulate a simplified version of Pong with no opponent, the firing of electrodes on the left or right of one array tells the mini-brain – the paddle – whether the ball is to its left or right. The frequency of the signals indicates how close the ball is.
Specific patterns of activity across the neurons are interpreted as the paddle moving left or right. The computer responds to this activity, and the feedback via the electrodes allows the mini-brains to learn how to control the paddle.
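The closed loop described above – side encodes direction, firing frequency encodes closeness, and population activity is decoded back into a paddle move – can be sketched in a few lines. This is a toy illustration only: the class, parameter values, and decoding rule are my own stand-ins, not Cortical Labs' actual electrode interface or protocol.

```python
class DishBrainPongLoop:
    """Toy sketch of the stimulate/decode feedback loop (hypothetical
    names and numbers; the real system uses microelectrode arrays)."""

    def __init__(self, width=8):
        self.width = width            # number of horizontal positions
        self.ball_x = 0
        self.paddle_x = width // 2    # paddle starts in the middle

    def encode_stimulus(self):
        """Which side of the array fires, and how fast: side encodes
        ball direction, frequency encodes closeness (nearer = faster)."""
        side = "left" if self.ball_x < self.paddle_x else "right"
        distance = abs(self.ball_x - self.paddle_x)
        freq_hz = max(1, 10 - distance)   # illustrative scaling only
        return side, freq_hz

    def decode_response(self, activity_left, activity_right):
        """Interpret the dominant side of population activity as a
        one-step paddle move, clamped to the play field."""
        if activity_left > activity_right:
            self.paddle_x = max(0, self.paddle_x - 1)
        elif activity_right > activity_left:
            self.paddle_x = min(self.width - 1, self.paddle_x + 1)


loop = DishBrainPongLoop()
loop.ball_x = 1                      # ball sits to the paddle's left
side, freq = loop.encode_stimulus()
print(side, freq)                    # "left", frequency 7 at distance 3
loop.decode_response(activity_left=7, activity_right=2)
print(loop.paddle_x)                 # paddle steps left, from 4 to 3
```

The point of the sketch is the loop itself: stimulation in, decoded activity out, game state updated, new stimulation in – the action-perception cycle the researchers close around the culture.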
“We often refer to them as living in the Matrix,” says Kagan. “When they are in the game, they believe they are the paddle.”
The mini-brains are nowhere near as good at Pong as people or even the AIs created by companies such as DeepMind. But they do learn faster than AIs, says Kagan. It takes computer-based AIs at least 5000 rallies to get to the same point that the living systems reach after just 10 or 15 rallies.
“The amazing aspect is how quickly it learns, in 5 minutes in real time,” he says. “That’s really an amazing thing that biology can do.”
DishBrains made of human cells are much better at Pong than those made of mouse cells. But because the source of the cells is different, the team can’t yet be certain this is due to their human nature.
“Great paper, fantastic results, authors are genius,” says Andrew Adamatzky, who studies unconventional computing at the University of the West of England in Bristol, UK. “I believe that in future we will have a rich spectrum of organic computers.”
“In my opinion, it is a quantum leap forward,” says Karl Friston at University College London. “The authors have managed to get a neural network to make sense of data from the world and act on the world at the same time.”
“This closure of the action-perception loop is not only an outstanding technical achievement but brings us a step closer to creating synthetic brains,” he says.
Kagan and his colleagues’ approach to training is based on a theory of how the brain works called the free energy principle, developed by Friston. The basic idea is that even neurons in a dish will try to create an internal model of their external world. They want to predict what will happen in terms of what inputs they will get, and they don’t like being surprised.
This is why the cells “play the game”, says Kagan. When they play, their inputs become more predictable. If they don’t play, they get random inputs that are aversive, he says.
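The training signal described above can be illustrated with a minimal sketch: a structured, repeatable stimulus when the paddle hits the ball, and unpredictable noise when it misses. The function names and the eight-channel pattern are my own stand-ins, not the paper's exact protocol, but the contrast – repeatable versus random feedback – is the mechanism being described.

```python
import random

def feedback(hit, rng):
    """Hypothetical training signal: a fixed, predictable pattern after
    a hit, and unpredictable 'aversive' noise after a miss."""
    if hit:
        return [1.0] * 8                        # identical every time
    return [rng.random() for _ in range(8)]     # different every time

rng = random.Random(0)                          # seeded for repeatability
hit_signals = [feedback(True, rng) for _ in range(5)]
miss_signals = [feedback(False, rng) for _ in range(5)]

# Hits produce the same input each time; misses never repeat.
print(all(s == hit_signals[0] for s in hit_signals))    # True
print(any(s != miss_signals[0] for s in miss_signals))  # True
```

Under the free energy principle, a system that models its inputs can minimise surprise by behaving in whatever way makes the predictable signal more likely – here, by playing the game.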
“The remarkable thing emerging from this set-up is the sentient behaviour that emerges in an unsupervised fashion,” says Friston.
The long-term aim of Cortical Labs is to develop cyborg brains that Kagan thinks could eventually be more intelligent than computer-based systems.
“Dogs are still used to help someone who is visually impaired cross a road, rather than robots,” he says. “Even the simple biological intelligence of the dog can still outperform any machine we have.”
There are also more immediate applications, says Kagan. For instance, studying how neurons learn so fast and efficiently could help improve machine learning, reducing the large amounts of energy required as well as the time.
“The next generation of artificial intelligence should aim for the functional and thermodynamic efficiency of biological brains,” says Friston. “The current work is a notable – perhaps historical – step in this direction.”
Drug screening is another potential use. Giving the mini-brains experimental drugs as they play games could reveal more about how those drugs will affect the human brain than studying neurons in isolation, says Kagan.
Reference: bioRxiv, DOI: 10.1101/2021.12.02.471005