Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation, and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
AIs that can create new child AIs, let alone ones that are better than anything humans are capable of creating, will revolutionise industry and society.
In a move that will both amaze and terrify people, probably at the same time, this week Google announced it’s developed an Artificial Intelligence (AI) that has now created its own “child,” and of course I’m talking about an AI child. After all, an AI creating a human child is still some way off, but frankly tag team an AI with the project to create the world’s first artificial human, a bioprinter, some artificial sperm and eggs, and an artificial womb and hey, who knows, one day it might be possible for an AI to create its own human child – diapers and all. Maybe it’ll outsource nappy changing to an Amazon warehouse robot… Jeff, call me, I think I discovered your next opportunity.
But that’s not all.
The “parent” AI managed to train its child to such a high level that its algorithms outperformed every other human-built AI system like it. It’s an impressive achievement, but one that could also trigger fears about what else AI could create without human involvement.
The child AI was built using Google’s AutoML platform, a product the company unveiled earlier this year that was designed with the express intent of automating the design of new machine learning algorithms and models.
“In our approach a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” said Google’s CEO Sundar Pichai at the time, “that feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times, generating new architectures, testing them, and giving that feedback to the controller to learn from.”
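The loop Pichai describes, where a controller proposes a child architecture, the child is trained and scored, and the score is fed back to improve the next proposal, can be sketched in a few lines. The sketch below is a hypothetical simplification for illustration only: the real AutoML system uses a recurrent neural network controller trained with reinforcement learning, whereas here the “controller” is just a score table that biases future proposals toward choices that worked, and `train_and_evaluate` fakes a child model’s accuracy instead of training one. All names and the search space are invented.

```python
import random

# Hypothetical search space of child-architecture choices (invented for
# this sketch; the real NASNet search space is far richer).
SEARCH_SPACE = {
    "layers":      [2, 4, 8],
    "filters":     [16, 32, 64],
    "kernel_size": [3, 5],
}

def propose_architecture(scores):
    """Controller step: sample each choice, biased toward options that
    have accumulated reward from previous rounds."""
    arch = {}
    for knob, options in SEARCH_SPACE.items():
        weights = [1.0 + scores.get((knob, opt), 0.0) for opt in options]
        arch[knob] = random.choices(options, weights=weights)[0]
    return arch

def train_and_evaluate(arch):
    """Stand-in for actually training the child model: returns a fake
    accuracy that favours deeper, wider nets, plus noise."""
    return (0.5 + 0.02 * arch["layers"]
                + 0.001 * arch["filters"]
                + random.uniform(0.0, 0.05))

def search(rounds=200, seed=0):
    """Repeat the propose -> train -> feed back loop many times,
    keeping the best child found."""
    random.seed(seed)
    scores, best_arch, best_acc = {}, None, -1.0
    for _ in range(rounds):
        arch = propose_architecture(scores)
        acc = train_and_evaluate(arch)
        # Feedback step: reward every choice that produced this child,
        # so the controller proposes similar architectures more often.
        for knob, option in arch.items():
            scores[(knob, option)] = scores.get((knob, option), 0.0) + acc
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

best_arch, best_acc = search()
print("best child architecture:", best_arch)
print("its (simulated) accuracy:", round(best_acc, 3))
```

Even this toy version shows the key property Pichai highlights: no human designs the winning architecture directly; it emerges from thousands of propose–evaluate–feedback rounds.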
Fast forward to now and AutoML has been used to create NASNet, a child AI designed specifically to detect and identify objects, and it outperformed everything out there.
To test NASNet, Google ran it against the ImageNet image classification and COCO object detection datasets, which it describes as “two of the most respected large scale academic datasets in computer vision”.
On ImageNet, NASNet achieved a prediction accuracy of 82.7 percent, 1.2 percent better than the best previously published result, and on COCO, NASNet achieved 43.1 percent mean average precision, 4 percent better than the previous published state-of-the-art platforms.
“We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” said the researchers, who have now open-sourced the models.
So what should we make of all this? With Microsoft’s DeepCoder now writing its own code, the emergence of the first breed of creative AIs, one of which recently evolved, innovated, and 3D printed its own robot, and Google’s DeepMind platform now able to learn by itself, with no intervention from humans, and “create its own knowledge,” is it time to practice the words “All hail AI?”
Nope, computers are still stupid… awesome.