Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify and track hundreds of game-changing emerging technologies, and explain their impact on global culture, industry and society, is unparalleled. Recognised for the past five years running as one of the world's foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, one of Europe's largest energy companies, and his recent work includes mentoring XPRIZE teams, building the first generation of biocomputers, helping the world's largest manufacturing companies envision the next five generations of smartphones and devices, and what comes next, and helping companies including Qualcomm envision the next twenty years of semiconductors. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BOA, Blackrock, Bloomberg, Booz Allen Hamilton, BCG, Bentley, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JPMorgan Chase, KPMG, Lloyds Banking Group, McKinsey, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroders, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
AIs that can create new child AIs, let alone ones that are better than anything humans are capable of creating, will revolutionise industry and society.
In a move that will both amaze and terrify people, probably at the same time, this week Google announced it has developed an Artificial Intelligence (AI) that has now created its own "child," and of course I'm talking about an AI child. After all, an AI creating a human child is still some way off, but frankly, tag team an AI with the project to create the world's first artificial human, a bioprinter, some artificial sperm and eggs, and an artificial womb and hey, who knows, one day it might be possible for an AI to create its own human child – diapers and all. Maybe it'll outsource nappy changing to an Amazon warehouse robot… Jeff, call me, I think I discovered your next opportunity.
But that’s not all.
The "parent" AI managed to train its child to such a high level that its algorithms outperformed every other human-built AI system like it. It's an impressive achievement, but one that could also trigger fears about what else AI could create without human involvement.
The child AI was built using Google's AutoML platform, a product the company unveiled in June this year that was designed with the express intent of automating the design of new machine learning algorithms and models.
"In our approach a controller neural net can propose a 'child' model architecture, which can then be trained and evaluated for quality on a particular task," said Google's CEO Sundar Pichai at the time. "That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times, generating new architectures, testing them, and giving that feedback to the controller to learn from."
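The loop Pichai describes is, at its core, a propose-evaluate-feedback cycle. Below is a deliberately minimal sketch of that idea in Python – the search space, the scoring function and all names are illustrative stand-ins, not Google's actual controller (which is itself a trained neural network), but the control flow mirrors the description: propose a child, score it, and use the score to bias the next round of proposals.

```python
import random

# Toy search space of "child" architecture choices (illustrative only).
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5],
}

def propose(weights):
    """Controller step: sample a child architecture, biased by learned weights."""
    return {k: random.choices(v, weights=weights[k])[0]
            for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the child and measuring validation accuracy.
    Here, deeper/wider toy architectures simply score higher."""
    return (0.5
            + 0.1 * SEARCH_SPACE["layers"].index(arch["layers"])
            + 0.05 * SEARCH_SPACE["filters"].index(arch["filters"]))

def search(rounds=200, seed=0):
    """Repeat propose -> evaluate -> feedback, keeping the best child found."""
    random.seed(seed)
    weights = {k: [1.0] * len(v) for k, v in SEARCH_SPACE.items()}
    best_arch, best_score = None, -1.0
    for _ in range(rounds):
        arch = propose(weights)
        score = evaluate(arch)
        # Feedback: reinforce the choices that produced a good child.
        for k, v in arch.items():
            weights[k][SEARCH_SPACE[k].index(v)] += score
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Over a few hundred rounds the sampling weights drift toward the choices that scored well, which is the essence of the feedback loop, even though the real system trains each child network for real and uses reinforcement learning in the controller.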
Fast forward to now and AutoML has been used to create NASNet, a child AI designed specifically to detect and identify objects, and it outperformed everything out there.
To test NASNet, Google ran it against the ImageNet image classification and COCO object detection datasets, which it describes as "two of the most respected large scale academic datasets in computer vision".
On ImageNet, NASNet achieved a prediction accuracy of 82.7 per cent, performing 1.2 per cent better than all previously published results, and on COCO NASNet achieved 43.1 per cent mAP, which is 4 per cent better than the previous published state-of-the-art platforms.
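For the ImageNet number, "prediction accuracy" is simply the fraction of test images where the model's best guess matches the true label. A minimal sketch of that metric, with made-up toy labels rather than real ImageNet data:

```python
def top1_accuracy(predictions, labels):
    """Fraction of examples where the model's top guess equals the true label."""
    assert len(predictions) == len(labels) and labels, "need matched, non-empty lists"
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example: five "images", four guessed correctly -> 80 per cent accuracy.
preds  = ["cat", "dog", "car", "bird", "fish"]
labels = ["cat", "dog", "car", "bird", "frog"]
score = top1_accuracy(preds, labels)  # 0.8
```

The COCO figure is a different metric, mean average precision (mAP), which also credits how well the predicted bounding boxes overlap the true objects, so the two percentages aren't directly comparable.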
"We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined," said the researchers, who have now open-sourced the models.
So what should we make of all this? With Microsoft's DeepCoder now writing its own code, the emergence of the first breed of creative AIs, one of which recently evolved, innovated and 3D printed its own robot, and Google's DeepMind platform now able to learn by itself, with no intervention from humans, and "create its own knowledge," is it time to practice the words "All hail AI"?
Nope, computers are still stupid awesome.