Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the 311 Institute, a global futures think tank working between the dates of 2020 to 2070, an award-winning futurist, and the author of "Codex of the Future." Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew's recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world's largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
An AI that can learn by itself, with no human interaction, and that can “create its own knowledge” could give us a new perspective on issues affecting the world, and transform industry, innovation and society.
A couple of months ago Google’s Artificial Intelligence (AI) group, DeepMind, unveiled the latest incarnation of its Go playing program, AlphaGo Zero, an AI so powerful that it crammed thousands of years of human knowledge of the game into just three days of training, before inventing better moves of its own.
AlphaGo Zero has been hailed as a major breakthrough in AI learning because, unlike previous versions of AlphaGo, which went on to beat the world Go champion as well as take the online Go player community to the cleaners, it mastered the ancient Chinese board game from nothing more than a clean slate, with no more help from humans than being told the rules of the game. And as if that wasn’t already impressive enough, it also took its predecessor AlphaGo, the AI that famously beat the South Korean grandmaster Lee Sedol, to the cleaners, hammering it 100 games to nil.
AlphaGo Zero’s ability to learn for itself, without human input, is a milestone on the road to one day realising Artificial General Intelligence (AGI), something the same company, DeepMind, published an architecture for last year, and it will undoubtedly help us create the next generation of more “general” AIs that can do a lot more than just thrash humans at board games.
AlphaGo Zero amassed its impressive skills using a technique called Reinforcement Learning, and at the heart of the program is a group of software “neurons” connected together to form a digital neural network. During each turn of a game the network looked at the positions of the pieces on the Go board and calculated which of the millions of possible moves would, based on probability, be the most likely to give it a win. Then, after each game, it updated the network, making itself a stronger player for the next game, and so on.
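The self-play loop described above can be sketched in miniature. AlphaGo Zero's actual method combines a deep neural network with Monte Carlo tree search, which is far beyond a blog snippet, so the toy below is only an illustration of the same core idea: an agent that starts with no knowledge beyond the rules, plays against itself, estimates the win probability of each position, and nudges those estimates toward each game's outcome. The game (a take-1-or-2 stones pile game), the function names, and all parameters are illustrative assumptions, not anything from DeepMind's code.

```python
import random

def train(pile_size=10, episodes=5000, epsilon=0.2, alpha=0.1, seed=0):
    """Tabular self-play reinforcement learning on a toy game:
    players alternately remove 1 or 2 stones; whoever takes the
    last stone wins. V[p] estimates the win probability of the
    player about to move when p stones remain."""
    rng = random.Random(seed)
    V = [0.5] * (pile_size + 1)  # start with no knowledge: everything is 50/50
    V[0] = 0.0                   # no stones left: the player to move has lost

    for _ in range(episodes):
        pile = pile_size
        history = []  # (pile, move) pairs, in play order
        while pile > 0:
            moves = [m for m in (1, 2) if m <= pile]
            if rng.random() < epsilon:
                m = rng.choice(moves)  # explore: try a random move
            else:
                # exploit: leave the opponent the worst position we know of
                m = min(moves, key=lambda mv: V[pile - mv])
            history.append((pile, m))
            pile -= m

        # The player who made the final move won. Walk the game backwards,
        # nudging each visited position's value toward its eventual outcome,
        # which alternates between the winner's and the loser's turns.
        outcome = 1.0
        for p, m in reversed(history):
            V[p] += alpha * (outcome - V[p])
            outcome = 1.0 - outcome
    return V
```

After training, the agent has "created its own knowledge" of the game: positions where the pile is a multiple of 3 end up with low values (the player to move loses against good play), while piles of 1 or 2 stones end up near 1.0, a fact it was never told.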
While it was far better than previous versions, AlphaGo Zero is actually a simpler program and it mastered Go faster despite training on less data and running on a smaller computer.
“Given more time, it could have learned the rules for itself too,” said Demis Hassabis, CEO of DeepMind and a researcher on the team.
Now DeepMind, which is based in London, has set it the task of working out how proteins fold – a massive scientific challenge that could give drug discovery a big shot in the arm.
“For us, AlphaGo wasn’t just about winning the game of Go,” said Hassabis, “it was also a big step for us towards building these general-purpose algorithms.”
Most AIs are described as “narrow” because they perform only a single task, such as translating languages or recognising faces. But, aside from putting us firmly on the path to AGI, these general-purpose AIs could potentially outperform humans at many different tasks, and Hassabis believes that in the next decade AlphaGo’s descendants will work alongside humans, for example as scientific and medical experts.
“Using the new technique is more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and therefore it’s able to create knowledge itself,” said David Silver, AlphaGo’s lead researcher.
Let’s back up for a moment, because there’s another milestone lurking in that sentence – “able to create knowledge itself.”
However, AlphaGo Zero can only work on problems that can be simulated in a computer, making tasks such as driving out of the question for now. But no doubt it will master driving too one day, something that, arguably, many humans still haven’t mastered… another milestone in the making?