Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working on horizons from 2020 to 2070, and is an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers, several governments including members of the G7, and companies such as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
Imagination is often thought to be a uniquely human trait, but AIs are now being taught how to imagine. Not only will this make them better at understanding the consequences of their actions, it could also make them better innovators.
Researchers at Google’s famed DeepMind Artificial Intelligence (AI) lab, who have so far given their AIs a human-like memory, taught them to build new AIs, dream, fight, learn like humans, and play games, all while drawing up a new architecture for reaching Artificial General Intelligence (AGI), announced this week that they’ve begun developing an AI with an imagination, one that can think through the consequences of its actions before acting on them. The ramifications for the future of innovation could be huge.
The team behind the idea, who are based in London, think that the ability of AIs to imagine will be crucial in developing the algorithms of the future, and in helping AIs adapt better to changing conditions.
However, as we see the rise of creative AIs, which are becoming increasingly skilled at mastering iterative innovation, a form of innovation where you take something that already exists and incrementally improve it, and which have so far designed everything from chairs to shoes, it’s also possible that this new technique could help create the first generation of truly “innovative” AIs, ones that produce primary innovations, that is, things no one has ever seen before, or even disruptive and radical innovations.
After all, these AIs can already plug into big data sources such as Facebook and Twitter to identify new human needs and gaps in the market, and do so at scale. If they can then fashion new product recommendations, or even build new products, for example new applications, something Microsoft’s DeepCoder platform can already do, albeit at a limited scale, or even new music or movies, then all of a sudden the leap from iterative to primary innovation doesn’t seem as big as it used to.
Furthermore, if we can crack the algorithm for human intelligence, then it’s inevitable that one day, probably sooner rather than later, and with the right focus, we’ll crack the algorithms for innovation and creativity too. After all, everything’s an algorithm, and I’m sorry to say that if you’re one of those people holding on to the belief that the last jobs to be automated will be those requiring creativity, dexterity, or empathy, then one of those is going to fall sooner than the others. AI researchers are working on every one of those fields, though, so don’t think any of them will be safe havens for long, and I’d suggest that people who say such skills are decades away from automation should go and visit their local AI labs for an update.
Anyway, back to the story.
“When placing a glass on the edge of a table, for example, humans will likely pause to consider how stable it is and whether it might fall,” the researchers wrote in a blog post. “On the basis of that imagined consequence we might re-adjust the glass to prevent it from falling and breaking. If our algorithms are to develop equally sophisticated behaviours that we take for granted, they too must have the capability to ‘imagine’ and reason about the future, and beyond that they must be able to construct a plan using this knowledge.”
While the new design thinking has already been quietly trialled in a number of the DeepMind AlphaGo bots that debuted recently, and that smashed their human opponents, the team go on to say that, because of the complexity of the real world, their new AI must operate “at a different level.”
To realise their goal, the researchers combined several existing AI approaches, including reinforcement learning, where a system learns through trial and error, and deep learning, which uses neural networks loosely modelled on the human brain. What they ended up with is a system that mixes trial and error with simulation capabilities, so it can learn about its environment and think before it acts.
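The core idea of mixing trial and error with simulation can be sketched in a few lines. The toy below is not DeepMind's actual imagination-augmented architecture; it is a minimal, hypothetical illustration in which an agent owns an internal model of its world and scores each candidate action by "imagining" a few steps ahead before committing. All names, the reward scheme, and the toy world are assumptions for the sake of the example.

```python
# Toy sketch of an "imaginative" agent (illustrative only, not DeepMind's
# actual architecture). The agent holds an internal model of its environment
# and uses short imagined rollouts to score actions before acting.

class ImaginativeAgent:
    def __init__(self, model, actions, horizon=3):
        self.model = model      # model(state, action) -> (next_state, reward)
        self.actions = actions  # candidate actions to consider
        self.horizon = horizon  # how many steps ahead to imagine

    def imagine(self, state, action, depth):
        """Roll the internal model forward, summing imagined rewards."""
        next_state, reward = self.model(state, action)
        if depth == 1:
            return reward
        # Greedily imagine the best continuation at each further step.
        return reward + max(
            self.imagine(next_state, a, depth - 1) for a in self.actions
        )

    def act(self, state):
        """Pick the action whose imagined future looks best."""
        return max(self.actions,
                   key=lambda a: self.imagine(state, a, self.horizon))

# A trivial assumed "world": states are integers, moving forward (+1) earns
# reward until state 5, beyond which lies the table edge and a heavy penalty,
# like the researchers' glass teetering on the edge of a table.
def toy_model(state, action):
    next_state = state + action
    if next_state > 5:
        reward = -10.0          # the glass falls and breaks
    elif action == 1:
        reward = 1.0            # progress is rewarded
    else:
        reward = 0.0
    return next_state, reward

agent = ImaginativeAgent(toy_model, actions=[-1, 0, 1], horizon=3)
```

Far from the edge the agent keeps moving forward, but at state 5 its imagined rollouts reveal the penalty one step ahead, so it declines to push on, which is precisely the "think before you act" behaviour the researchers describe.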
One of the ways they tested the new algorithms was with Sokoban, a 1980s video game, which you can see in the video above, in which players push crates around to solve puzzles. Some moves can make a level unsolvable, so advance planning is needed, and, as is the usual practice, the AI wasn’t given the rules of the game beforehand.
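To see why Sokoban punishes agents that cannot plan ahead, consider its classic trap: a crate pushed into a corner can never be pulled back out, so the level becomes unsolvable the instant the push is made. The hypothetical helper below (the grid encoding and function name are my assumptions, not from the paper) checks for that irreversible state, the kind of consequence an "imaginative" agent would want to foresee before acting.

```python
# Hypothetical Sokoban helper showing why lookahead matters: a crate wedged
# against two perpendicular walls can never be moved again, so pushing it
# there makes the level unsolvable. Grid encoding is an assumption: '#' = wall.

WALL = '#'

def is_dead_corner(grid, row, col):
    """True if a crate at (row, col) would be stuck in a corner forever."""
    up    = grid[row - 1][col] == WALL
    down  = grid[row + 1][col] == WALL
    left  = grid[row][col - 1] == WALL
    right = grid[row][col + 1] == WALL
    # A dead corner is any pair of perpendicular adjacent walls.
    return (up or down) and (left or right)

level = [
    "#####",
    "#   #",
    "#   #",
    "#####",
]

# Pushing a crate to (1, 1) wedges it into the top-left corner, while (2, 2)
# is open floor the crate can still be pushed away from.
```

A planning agent can "imagine" the board after a candidate push and reject any move that lands a crate in such a square, whereas a purely reactive agent only discovers the mistake after it is irreversible.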
The researchers found their new ‘imaginative’ AI solved 85 percent of the levels it was given, compared with 60 percent for AI agents using older approaches.
“The imagination-augmented agents outperform the imagination-less baselines considerably,” said the researchers. “They learn with less experience and are able to deal with the imperfections in modelling the environment.”
The team also noted a number of improvements in the new bots: they handled gaps in their knowledge better, they were better at picking out useful information for their simulations, and as a result they could create better strategies. And it’s not just advance planning, it’s advance planning with extra creativity, where potential future actions can be combined or mixed in different ways to identify the most promising paths forward.
Despite the success of DeepMind’s testing, it’s still early days for the technique, and these games are still a long way from representing the complexity of the real world. That said, it’s a promising start, and if nothing else, one day soon we’ll be confident that our AI-infused robot overlords won’t go spilling their drinks all over the place.
“Further analysis and consideration is required to provide scalable solutions to rich model-based agents that can use their imaginations to reason about, and plan, for the future,” concluded the researchers.