Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify and track hundreds of game-changing emerging technologies, and explain their impact on global culture, industry and society, is unparalleled. Recognised for the past five years running as one of the world's foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, one of Europe's largest energy companies, and his recent work includes mentoring XPRIZE teams, building the first generation of biocomputers, helping the world's largest manufacturers envision the next five generations of smartphones and devices, and what comes next, and helping companies including Qualcomm envision the next twenty years of semiconductors. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BOA, Blackrock, Bloomberg, Booz Allen Hamilton, BCG, Bentley, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JPMorgan Chase, KPMG, Lloyds Banking Group, McKinsey, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder's, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
As the age of silicon-based computing systems nears its natural conclusion, new computing platforms are emerging that are capable of self-learning, that are revolutionising AI development, and that are millions of times faster and more energy efficient than today's "antiquated" systems.
Intel recently held its annual keynote in Las Vegas, and while most of the coverage was the usual symphony of marketing material and future projects, one segment stood out above the rest: the company's announcements around its latest neuromorphic and quantum processors.
It is no secret that we're nearing the cliff edge of what's possible with today's silicon-based computing platforms. Even as we see a path to 5nm, 1nm and even 0.5nm transistors, the economics of switching from one fabrication process to another increase almost exponentially, so it's looking increasingly likely that silicon will only take us so far. This is in part why today we're seeing a proliferation of new computing architectures and types, including the development of new Chemical, DNA, Liquid, Neuromorphic, Photonic, and yes, Quantum computing platforms, and it wouldn't be too much of a stretch to suggest that the much lauded "death of Silicon" is one of Intel's main motivators for investigating, and experimenting with, new processor types.
Recently Intel unveiled its first 17-qubit quantum computing chip, and now, a few months later, the company has announced its first commercial foray into neuromorphic computing, a form of computing that could one day see the awesome power of today's biggest supercomputers condensed down into a computing package no larger than your fingernail.
Intel's new chip, called Loihi, is essentially a self-learning neuromorphic processor, the kind that will one day help us revolutionise Artificial Intelligence (AI) all over again, and its architecture operates in a similar way to the human brain. Just like the human brain, it's designed to create new internal neural pathways over time, something that in our case gives us humans our IQ and our astounding problem-solving capability, and as a result, thanks to the presence of over 130,000 artificial neurons and the equivalent of over 130 million human synapses, Loihi will be able to learn by itself.
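To get a feel for what one of those artificial neurons does, here is a minimal leaky integrate-and-fire sketch in plain Python. It is an illustrative model of the spiking behaviour that neuromorphic chips emulate in silicon, not Intel's actual neuron circuit, and the threshold and leak values are arbitrary assumptions.

```python
def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    Returns a list of 0/1 spikes, one per timestep. Parameter
    values are illustrative, not taken from Loihi's design.
    """
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:              # fire once threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
# → [0, 0, 0, 1, 0, 0, 1]
```

Notice that the neuron only "speaks" when its accumulated input crosses a threshold, which is why spiking hardware can sit idle, and draw almost no power, when nothing interesting is happening.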
Loihi's digital circuitry mimics the mechanics of the human brain, which not only helps it accelerate machine learning tasks to crazy speeds but lets it do so using just a thousandth of the computing power, and energy, of today's increasingly antiquated-looking systems.
Neuromorphic chip models draw inspiration from how human neurons communicate and learn, using spikes and plastic synapses, that is, connections whose strength can be modulated based on the timing of those spikes, like the artificial synapses recently developed by MIT that operate billions of times faster than our own human neurons. It's this trait that will one day let these new computing platforms self-organize and make decisions based on patterns and associations by themselves, without the need for human input or intervention.
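The timing-based modulation described above is commonly formalised as spike-timing-dependent plasticity (STDP): a synapse strengthens when the sending neuron fires just before the receiving one, and weakens in the reverse order. The toy rule below is a generic STDP sketch with made-up constants, not the actual update rule in Loihi's learning engine.

```python
from math import exp

def stdp_update(weight, t_pre, t_post, lr=0.1, tau=20.0):
    """Toy spike-timing-dependent plasticity rule.

    t_pre / t_post are spike times of the pre- and postsynaptic
    neurons. lr and tau are illustrative constants, not Loihi's.
    """
    dt = t_post - t_pre                  # positive: pre fired before post
    if dt > 0:
        weight += lr * exp(-dt / tau)    # potentiate: pre helped cause post
    elif dt < 0:
        weight -= lr * exp(dt / tau)     # depress: post fired first
    return weight

# Pre fires before post, so the synapse gets stronger; reversed order weakens it.
stronger = stdp_update(0.5, t_pre=10.0, t_post=12.0)
weaker = stdp_update(0.5, t_pre=12.0, t_post=10.0)
```

Because the update depends only on locally observable spike times, each synapse can adapt on its own, which is what lets a chip full of them self-organise without a central training loop.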
The Loihi test chip is based on Intel's 14nm process technology, and its features include a fully asynchronous neuromorphic "many core" mesh that supports a wide range of sparse, hierarchical and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons. Each of these neuromorphic cores includes a learning engine that can be programmed to adapt its network parameters, or "learning methodology," to support supervised, unsupervised, reinforcement and other AI learning "paradigms." The chip also allows for the basic development and testing of several algorithms, with high algorithmic efficiency, for problems including path planning, constraint satisfaction, sparse coding, dictionary learning, and dynamic pattern learning and adaptation.
Intel now plans to spend the first half of 2018 sharing the chip with leading universities and research institutions, with a focus on advancing AI, where an increasing need for collection, analysis and decision making from highly dynamic and unstructured natural data is driving demand for compute that may outpace both classic CPU and GPU architectures. The future's arriving, and it's going to be here sooner than you think…