Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Humans and animals learn many things intuitively, like walking, and now researchers have found a way to embody this same trait within AI.
Animals are lucky: after all, they’re born with innate abilities and predispositions. Horses can walk within hours of birth, ducks can swim soon after hatching, and human infants are automatically attracted to faces and innately know how to scream at their parents – which is a wonderful gift, as any parent will tell you.
Over time animals’ brains have evolved to take on the world with little or no experience, so it should come as little surprise that researchers would like to recreate such natural abilities in Artificial Intelligence (AI), as well as in the field of robotics, where researchers are increasingly trying to tap into this phenomenon by creating the world’s first so-called General Purpose Robots – machines that don’t have to be taught to do specific things, but instead observe the world around them and “pick up skills” as they go.
Now new research from Google has found that artificial neural networks can be given these same innate skills and evolve to perform tasks without learning, and the new technique could ultimately lead to AI that is much more adept at a wide variety of tasks straight off the bat – for example driving cars, or even perhaps taking over the world and becoming an immortal dictator, as Elon Musk recently discussed. But let’s forget the fun stuff for now and look at the science behind the breakthrough.
Artificial neural networks are arrangements of small computing elements, neurons, that pass information between them. The networks typically learn to perform tasks like playing games or recognising images by adjusting the “weights” or strengths of the connections between these neurons. A technique called neural architecture search tries lots of network shapes and sizes to find ones that learn better for a specific purpose.
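To make that concrete, here is a minimal sketch in Python (using NumPy) of a tiny fixed-shape network whose behaviour is driven entirely by its weights – the function and variable names are illustrative, not taken from the research:

```python
import numpy as np

def forward(x, w1, w2):
    """Tiny two-layer network: its output depends on the connection weights."""
    hidden = np.tanh(x @ w1)     # hidden-layer activations
    return np.tanh(hidden @ w2)  # network output

rng = np.random.default_rng(0)
x = np.array([[0.5, -0.2]])  # a single two-feature input

# The same architecture behaves differently under two different weight settings,
# which is exactly what training normally adjusts.
w1_a, w2_a = rng.normal(size=(2, 3)), rng.normal(size=(3, 1))
w1_b, w2_b = rng.normal(size=(2, 3)), rng.normal(size=(3, 1))
print(forward(x, w1_a, w2_a))
print(forward(x, w1_b, w2_b))
```

Neural architecture search automates trying many such shapes (layer counts, connection patterns) to find one that learns a given task well.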
The new method uses this same search technique to find networks for which the weights don’t matter. For such a network, the network’s overall shape drives its intelligence – and could make it particularly well-suited to certain tasks.
“If animals have all these innate behaviours, and some neural networks can do well without a lot of training, we wondered how far we could push that idea,” said Adam Gaier, a computer scientist who was the paper’s lead author while working at Google Brain.
The process begins with a set of very simple networks that link inputs – say, data from a robot’s sensors – to behavioural outputs. It evaluates the networks’ performance on a given task, keeps the ones that performed best, and mutates them by adding a neuron, adding a link, or changing how sensitive a neuron is to the sum of its inputs. In the evaluation phase, a shared random number is assigned to all of a network’s weights – this is actually done for several random numbers, and the results are averaged.
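That search loop can be sketched roughly as follows – a simplified, hypothetical Python illustration of the idea (shared weights, mutation, selection, and a simplicity bonus), not the paper’s actual implementation:

```python
import random
import numpy as np

# A "network" here is just a list of (source, target) links plus a list of
# neuron ids; every link shares one weight value during evaluation.
SHARED_WEIGHTS = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]

def evaluate(network, weight):
    """Stand-in for running the network on a task (e.g. pole balancing)
    with every connection set to the same shared weight. Toy score only."""
    return -abs(weight) * 0.1 + min(len(network["links"]), 8)

def fitness(network):
    # Average performance across several shared weight values, minus a
    # complexity penalty so that simpler networks are preferred.
    mean_score = np.mean([evaluate(network, w) for w in SHARED_WEIGHTS])
    return mean_score - 0.05 * len(network["links"])

def mutate(network):
    child = {"links": list(network["links"]), "neurons": list(network["neurons"])}
    op = random.choice(["add_link", "add_neuron", "change_activation"])
    if op == "add_link":
        child["links"].append((random.choice(child["neurons"]),
                               random.choice(child["neurons"])))
    elif op == "add_neuron":
        child["neurons"].append(len(child["neurons"]))
    else:
        # In a real implementation this would swap a neuron's activation
        # function, changing how it responds to the sum of its inputs.
        pass
    return child

# Start from minimal networks linking inputs to outputs, then iterate:
population = [{"links": [(0, 1)], "neurons": [0, 1]} for _ in range(16)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:8]  # keep the best performers
    population = survivors + [mutate(random.choice(survivors)) for _ in range(8)]
```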
The results are called Weight Agnostic Neural Networks (WANNs). These networks get points for performing well on the task and also for being simple. While typical networks for the tasks in this study might have thousands of neurons and weights, the WANNs had only a handful of neurons and a single weight.
Somehow the WANNs still performed respectably. The research team compared them with standard network architectures whose weights were refined through experience to master three simulated tasks: driving a race car, making a bipedal robot walk, and controlling a wheeled cart to balance a pole.
WANNs achieved scores ranging from about a sixth to half those of the neural networks that had been specifically trained to perform the tasks. When the researchers assigned the best-performing weight instead of a random one, those numbers ranged from two thirds to four fifths the trained nets’ scores. And if, after evolution, the WANNs were trained in the same way as the much larger standard networks, their performance was on par. And that is a breakthrough.
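Swapping a random shared weight for the best-performing one is just a one-dimensional sweep over candidate values – a hedged sketch, assuming some `evaluate(network, weight)` scoring function (the toy one below is purely illustrative):

```python
def best_shared_weight(network, evaluate,
                       candidates=(-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)):
    """Sweep candidate shared weights; keep the one with the highest score."""
    return max(candidates, key=lambda w: evaluate(network, w))

# Toy stand-in task whose score peaks when the shared weight is near 1.0.
toy_eval = lambda net, w: -(w - 1.0) ** 2
print(best_shared_weight(None, toy_eval))  # -> 1.0
```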
On a task that involved recognising written digits, these WANNs achieved greater than 90 percent accuracy, versus 99 percent for a larger network trained on the task.
The research was presented last month at the Neural Information Processing Systems (NeurIPS) conference, in Vancouver, Canada.
“The fact that they make the whole thing work is very impressive,” said Rosanne Liu, a computer scientist at Uber AI Labs who was not involved with the research. Others have tried and failed to develop networks that don’t depend on weights. Gaier says the breakthrough was originally a bug that assigned the same number to all weights, which ended up simplifying the architecture search.
While the WANNs’ performance didn’t match that of larger trained networks, the method opens a new pathway for finding network architectures specially adapted to various tasks, in the way that parts of the brain are wired differently for specific purposes. Convolutional neural networks, with an architecture tailored to image recognition, mirror the structure of the brain’s visual cortex, for example, and Gaier believes many more building blocks may be out there, ready to make AIs smart from birth. And that would be as scary as it is revolutionary.