Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Creativity is said to be a uniquely “human” gift, but while we can argue the semantics, it appears that machines increasingly share the same gift, albeit an algorithmic rather than a chemical one.
Many people believe that creativity is a uniquely human trait, but what about self-awareness and intuition? The line between humans and machines is becoming more blurred by the day as start-ups, technology giants, and academia jostle to build the next generation of advanced Artificial Intelligence. While there has been a lot of focus on what we call “Convergent AI” (CAI), small groups, such as the one at Columbia University behind the PIX18 robot, are beginning to splinter off and focus on a new area of AI we call “Divergent AI” (DAI).
Taking a page from biology, they are creating not the next but the next-next generation of AI: autonomous, self-aware, self-evolving, self-replicating, creatively thinking machines that will one day outperform humans in almost every area.
CAI platforms ingest and distil huge quantities of information down to a single answer or action; in essence, their algorithmic models are biased towards “if this then that.” The human- and machine-generated algorithms behind DAI, on the other hand, use “what if” models: they start with a single idea or need and explode it into thousands of new scenarios, which are then played out, prioritised, and filtered.
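The contrast described above can be sketched in a few lines of toy code. This is purely illustrative: the function names, scoring, and scenario generation are assumptions invented for the sketch, not a real AI system or API. The convergent function reduces many inputs to one answer, while the divergent function explodes one seed idea into many scenarios and then prioritises and filters them.

```python
import random

def convergent_ai(observations):
    """Distil many inputs down to a single answer: "if this then that"."""
    # Toy rule: reduce a list of sensor readings to one decision.
    return "brake" if sum(observations) / len(observations) > 0.5 else "cruise"

def divergent_ai(seed_idea, n_scenarios=1000, keep=5):
    """Explode one idea into many scenarios, then score and filter: "what if"."""
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    # "Explode" the seed into many hypothetical variants, each with a toy score
    # standing in for a played-out evaluation of the scenario.
    scenarios = [(f"{seed_idea} / variant {i}", rng.random())
                 for i in range(n_scenarios)]
    # Prioritise and filter: keep only the highest-scoring scenarios.
    scenarios.sort(key=lambda s: s[1], reverse=True)
    return [name for name, score in scenarios[:keep]]
```

The point of the sketch is the shape of the data flow, many-to-one versus one-to-many-to-few, not the trivial logic inside each function.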
While the difference between the two models looks slight, the outcomes will be profoundly different. In the future, which by most experts’ projections means 2040 rather than 2140, a CAI-based Artificial Super Intelligence (ASI) might be able to explain the laws of physics behind interstellar travel, but a DAI-based ASI will be able to design and then build the spacecraft to get us there. And with ASI on the scene, there are no limits to what could be achieved: immortality, limitless cold fusion energy, and a million more possibilities.
Every technology has to begin somewhere, though, which is why so many organisations have chosen Convergent AI as their battleground: fundamentally, decision-based logic systems are easier to design and build than Divergent AI systems, and that’s not necessarily a bad thing.
In one respect we can think of CAI as the workhorse of ‘real’ AI. If we are ever to fully realise the potential of DAI, then the work that organisations and academia are doing in the CAI space is absolutely fundamental. After all, without CAI we wouldn’t have Deep Learning, Machine Learning, Machine Vision, or Natural Language Processing, and without these building blocks the range, type, and volume of data that DAI systems could ingest and interpret would be limited.
For example, think about the difference between someone describing the Mona Lisa to you, akin to Machine Learning, versus seeing it for yourself, akin to Machine Vision. It would be fair to say that your conclusions, interpretation, and perception of it would change significantly depending on which one you were exposed to. If you had access to both and I asked you to create a masterpiece, you would now have a much richer data set from which to work out what form and expression you’d use to create it.
In one way DAI platforms will be lucky: by the time they become the centre of the new technology arms race, they’ll already have almost limitless access to all of the information on the planet, in all its types and formats, structured and unstructured, machine data, text, voice, and video. At that point we enter a new era, one where creative, imaginative, self-modelling, and self-aware systems, whether hardware or software robots and bots or ethereal machine systems, can not just ideate and innovate but also evolve themselves and self-replicate.
While the arguments continue to fly about the benefits and dangers of AI to the human race, when machine evolution slips out of our hands, as it inevitably will, we’ll find ourselves in either a far better or a far scarier place. That said, at least we might have some wonderful art to look at when the machines hurl the Earth into the Sun.