Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify and track hundreds of game-changing emerging technologies, and explain their impact on global culture, industry and society, is unparalleled. Recognised for the past five years running as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, one of Europe’s largest energy companies, and his recent work includes mentoring XPRIZE teams, building the first generation of biocomputers, helping the world’s largest manufacturing companies envision the next five generations of smartphones and devices, and what comes next, and helping companies including Qualcomm envision the next twenty years of semiconductors. Matthew's clients are the who’s who of industry and include Accenture, Bain & Co, BOA, Blackrock, Bloomberg, Booz Allen Hamilton, BCG, Bentley, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JPMorgan Chase, KPMG, Lloyds Banking Group, McKinsey, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroders, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
Algorithms, it seems, rule the world, and now scientists think they have identified the algorithm responsible for human intelligence. If true, it could revolutionise artificial intelligence.
Little might you realise it, but as you read this article there’s a good chance your brain is running its “n = 2ⁱ − 1” algorithm – and you thought you were rubbish at maths!
Scientists in the USA now believe that our brains run a basic algorithm that not only enables us to recognise a meal, or words on a page, for example, but also gives us the intelligence to ponder their broader implications.
“A relatively simple mathematical logic underlies our complex brain computations,” said Dr. Joe Tsien, a neuroscientist at the Medical College of Georgia at Augusta University.
Tsien is talking about his Theory of Connectivity, a fundamental principle for how our billions of neurons assemble and align not just to acquire knowledge, but to generalise and draw conclusions from it.
“Intelligence is really about dealing with uncertainty and infinite possibilities,” said Tsien. “It appears to be enabled when a group of similar neurons form a variety of cliques to handle each basic, like recognising food, shelter, friends and foes. Groups of cliques then cluster into functional connectivity motifs, or FCMs, to handle every possibility in each of these basics, like extrapolating that rice is part of an important food group that might be a good side dish for your meal. The more complex the thought, the more cliques join in.”
That means, for example, we can not only recognise an office chair, but recognise an office when we see one, and know that the chair is where we sit in that office.
“You know an office is an office whether it’s at your house or the White House,” Tsien said of the ability to conceptualise knowledge – which is one of many things that distinguishes us from computers.
Now, for the first time, Tsien and his colleagues have documented the algorithm at work in seven different brain regions involved with those basics like food and fear in mice and hamsters. Their documentation is published in the journal Frontiers in Systems Neuroscience.
“For it to be a universal principle, it needs to be operating in many neural circuits, so we selected seven different brain regions and, surprisingly, we indeed saw this principle operating in all these regions,” he said.
Intricate organisation seems plausible, even essential, in a human brain, which has about 86 billion neurons and where each neuron can have tens of thousands of synapses, putting potential connections and communications between neurons into the trillions. On top of the seemingly endless connections is the reality of the infinite number of things that each of us can experience and learn.
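A quick back-of-the-envelope sketch shows how those numbers reach that scale. The 86 billion neuron figure is from the article; the 10,000 synapses per neuron is an assumed round number within the "tens of thousands" range mentioned:

```python
# Rough estimate of potential synaptic connections in the human brain.
# 86 billion neurons comes from the article; 10,000 synapses per neuron
# is an assumed round figure for illustration.
neurons = 86_000_000_000
synapses_per_neuron = 10_000

total_connections = neurons * synapses_per_neuron
print(f"~{total_connections:.1e} potential connections")  # ~8.6e+14, i.e. hundreds of trillions
```

Even with a conservative synapse count, the total lands comfortably "in the trillions", as the article puts it.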
Neuroscientists as well as computer experts have long been curious about how the brain is able to not only hold specific information, like a computer, but – unlike even the most sophisticated technology – to also categorise and generalise the information into abstract knowledge and concepts.
“Many people have long speculated that there has to be a basic design principle from which intelligence originates and the brain evolves, like how the double helix of DNA and genetic codes are universal for every organism,” Tsien said. “We have presented evidence that the brain may operate on an amazingly simple mathematical logic.”
“In my view, Joe Tsien proposes an interesting idea, a simple organisational principle of the brain that is supported by intriguing and suggestive evidence,” said Dr. Thomas Südhof, a neuroscientist at the Stanford University School of Medicine who studies synapse formation and function and won the 2013 Nobel Prize in Physiology or Medicine.
“This idea is very much worth testing further,” said Südhof, a sentiment echoed by Tsien and his colleagues, “and it needs to be tested in additional neural circuits as well as other animal species and artificial intelligence systems.”
At the heart of Tsien’s Theory of Connectivity is the algorithm n = 2ⁱ − 1, which defines how many cliques are needed for an FCM, and which enabled the scientists to predict the number of cliques needed to recognise food options during their tests. Here n is the number of neural cliques connected in different possible ways, 2 reflects the fact that the neurons in those cliques either receive a given input or don’t, i is the number of distinct pieces of information being received, and the −1 is simply the part of the maths that accounts for all the possibilities, Tsien explained.

To test the theory, they placed electrodes in specific areas of the brain so they could “listen” to the response of neurons, or their action potentials, and examine the unique waveforms resulting from each. They gave the animals, for example, different combinations of four foods, such as the usual rodent biscuits as well as sugar pellets, rice and milk, and, as the Theory of Connectivity predicted, the scientists could identify all 15 different cliques, or groupings of neurons, that responded to the potential variety of food combinations.
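To make the arithmetic concrete, here is a short sketch (not the researchers' own code) that enumerates every non-empty combination of the four food items, the same count the formula n = 2ⁱ − 1 predicts:

```python
from itertools import combinations

def predicted_cliques(i: int) -> int:
    """Cliques the Theory of Connectivity predicts for i distinct
    inputs: n = 2**i - 1 (all non-empty subsets of the inputs)."""
    return 2 ** i - 1

# The four foods given to the animals in the study.
foods = ["biscuit", "sugar pellet", "rice", "milk"]

# Every non-empty combination of the four foods, one per clique:
# 4 singles + 6 pairs + 4 triples + 1 full set.
combos = [c for r in range(1, len(foods) + 1)
          for c in combinations(foods, r)]

print(len(combos))           # 15
print(predicted_cliques(4))  # 15
```

The −1 in the formula simply removes the empty combination, which is why four foods yield 15 cliques rather than 16.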
The neuronal cliques appear prewired during brain development because they showed up immediately when the food choices did. The fundamental mathematical rule even remained largely intact when the NMDA receptor, a master switch for learning and memory, was disabled after the brain matured.
The scientists also learned that size matters, because while human and animal brains both have a six-layered cerebral cortex – the lumpy outer layer of the brain that plays a key role in higher brain functions like learning and memory – the extra longitudinal length of the human cortex provides more room for cliques and FCMs, Tsien said. And while the overall girth of the elephant brain is definitely larger than the human brain's, for example, most of its neurons reside in the cerebellum, with far fewer in its super-sized cerebral cortex. The cerebellum is more involved in muscle coordination, which may help explain the agility of the huge mammal, particularly its trunk.