Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Today’s neural networks are getting enormous and complex; this is the smallest neural network yet to control a self-driving vehicle.
Artificial Intelligence (AI) can become more efficient and reliable if it mimics biological models, or so the theory goes, which is why researchers around the world are building robots with worm brains, a development that some argue opens the door to conscious robots, and using dragonfly brains to build next-generation missile defence systems.
Modern AI has produced models that exceed human performance across countless tasks, and now an international research team is suggesting AI might become even more efficient and reliable if it learns to think more like worms.
In a paper recently published in the journal Nature Machine Intelligence, a team from MIT, TU Wien in Vienna, and IST Austria showed off a new form of AI, modelled on the brains of tiny animals such as threadworms, which managed to control a self-driving car using just 19 artificial neurons.
The researchers say the system has decisive advantages over other deep learning models: it copes much better with noisy input, and, because of its simplicity, its operations can be explained in detail. This alleviates AI’s so-called “Black Box” problem, whereby, because of the way these models work and their complexity, no one really knows how they reach the decisions they do.
Explains TU Wien Cyber-Physical Systems head Professor Radu Grosu in a project press release: “For years, we have been investigating what we can learn from nature to improve deep learning. The nematode C. elegans, for example, lives its life with an amazingly small number of neurons, and still shows interesting behavioural patterns. This is due to the efficient and harmonious way the nematode’s nervous system processes information.”
“Nature shows us that there is still a lot of room for improvement,” adds MIT CSAIL Director Daniela Rus, who says the researchers’ goal was “to massively reduce complexity and develop a new kind of neural network architecture.”
The team developed new mathematical models of neurons and synapses and combined brain-inspired neural computation principles and scalable deep learning architectures to build compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system.
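The neuron models in question are liquid time-constant (LTC) style units, whose synapses modulate both the input current and the neuron’s effective dynamics. Below is a minimal, illustrative sketch of one Euler-integration step of such a neuron; the function name, parameters, and the simplified equation are assumptions for illustration, not the paper’s exact formulation.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic activation used here as the synaptic nonlinearity."""
    return 1.0 / (1.0 + math.exp(-x))

def ltc_step(x, inputs, tau, w, e_rev, dt=0.1):
    """One Euler step of a simplified liquid time-constant style neuron.

    The neuron leaks toward zero with time constant tau, while each
    synapse drives the state toward its reversal potential e_rev[j],
    gated by the presynaptic activity. (Simplified, illustrative form.)

    x      -- current neuron state (float)
    inputs -- presynaptic states (list of floats)
    tau    -- intrinsic time constant
    w      -- synaptic weights (list of floats)
    e_rev  -- synaptic reversal potentials (list of floats)
    """
    leak = -x / tau
    syn = sum(wi * sigmoid(pi) * (ei - x)
              for wi, pi, ei in zip(w, inputs, e_rev))
    return x + dt * (leak + syn)
```

Because the synaptic drive is proportional to `(e_rev - x)`, the state is pulled toward the reversal potential rather than growing without bound, one reason such units behave stably under noisy input.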
The proposed system comprises two main parts. A camera input is first processed by a convolutional neural network (CNN), which extracts structural features from pixels, decides which parts of the camera image are interesting and important, then passes signals to the crucial part of the network — a neural circuit policy (NCP) control system that steers the vehicle. The NCP consists of only 19 neurons.
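The two-stage pipeline above can be sketched as follows. This is a toy stand-in, not the paper’s implementation: the “perception head” here just averages image rows instead of running a real CNN, and the sparse 19-neuron recurrent policy uses an ordinary leaky recurrent update with randomly pruned connections; all names and wiring choices are hypothetical.

```python
import math
import random

N_NEURONS = 19  # the NCP control circuit in the paper uses just 19 neurons

def perception_head(image):
    """Stand-in for the CNN perception head: reduce camera pixels to a
    short feature vector (here, simply the mean of each image row)."""
    return [sum(row) / len(row) for row in image]

class SparseRecurrentPolicy:
    """Toy sparsely wired recurrent control circuit (illustrative stand-in
    for a neural circuit policy)."""

    def __init__(self, n_inputs, n_neurons=N_NEURONS, sparsity=0.75, seed=0):
        rng = random.Random(seed)
        # Keep only ~25% of possible connections, mimicking sparse wiring.
        self.w_in = [[rng.gauss(0, 0.5) if rng.random() > sparsity else 0.0
                      for _ in range(n_inputs)] for _ in range(n_neurons)]
        self.w_rec = [[rng.gauss(0, 0.5) if rng.random() > sparsity else 0.0
                       for _ in range(n_neurons)] for _ in range(n_neurons)]
        self.state = [0.0] * n_neurons

    def step(self, features, dt=0.1, tau=1.0):
        """Advance the circuit one step and read out a steering command."""
        new_state = []
        for i in range(len(self.state)):
            drive = sum(w * f for w, f in zip(self.w_in[i], features))
            drive += sum(w * math.tanh(s)
                         for w, s in zip(self.w_rec[i], self.state))
            new_state.append(self.state[i] + dt * (-self.state[i] / tau + drive))
        self.state = new_state
        # Steering command: last neuron's state, squashed to [-1, 1].
        return math.tanh(self.state[-1])
```

Usage mirrors the article’s description: `steer = policy.step(perception_head(camera_image))`, with negative outputs meaning steer left and positive meaning steer right (a convention assumed here).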
The team chose a standard test task for self-driving cars: staying in a lane. The tests used a camera-equipped vehicle, with the system receiving images of the road and deciding whether to steer right or left.
State-of-the-art autonomous vehicle systems typically use convolutional networks as a backbone with many additional upstream, task-specific networks. The researchers found that the NCP achieved results comparable to previous state-of-the-art (SOTA) models, including CNN, CT-RNN, and LSTM baselines, in terms of training and test squared error. Moreover, the full-stack NCP network is 63 times smaller than the SOTA CNN network, and the NCP’s control network is 970 times sparser than the LSTM’s and 241 times sparser than the CT-RNN’s.
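A sparsity comparison like the one reported above can be expressed as a ratio of non-zero connection counts between two networks. The helper below is an illustrative metric I am assuming for the sake of the example, not code from the paper.

```python
def sparsity_ratio(weights_a, weights_b):
    """How many times sparser is network A than network B, measured as
    the ratio of their non-zero connection counts? Weight matrices are
    given as lists of rows. (Illustrative metric; the paper reports,
    e.g., the NCP control network being 970x sparser than an LSTM.)"""
    def nonzero(matrix):
        return sum(1 for row in matrix for w in row if w != 0.0)
    return nonzero(weights_b) / nonzero(weights_a)
```

For instance, a network with one active connection out of four positions is 4x sparser than a fully connected one of the same shape.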
The researchers say interpretability and robustness are the two major advantages of their proposed model. What’s more, they say these new methods can reduce training time and make it possible to implement AI in relatively simple systems. They hope their findings will encourage the AI community to consider the principles of computation in biological nervous systems as a valuable resource for creating high-performance interpretable AI.