Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers, several governments and the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Today we have only one yardstick to measure how intelligent an AI really is, and it’s outdated, so Facebook has created a new one fit for the modern age.
One of the greatest misconceptions about many of today’s Artificial Intelligence (AI) systems is that they possess something called “generalised intelligence.” In other words, many people today can be forgiven for thinking that today’s AIs are good at lots of things, hence “general,” when in fact, like humans, different AIs are good at different tasks. And as for a single AI that can mimic all of the capabilities of human intelligence, well, they all still fall flat on their little digital faces, and we estimate that that phenomenon won’t come to pass until 2035.
While there are a couple of companies trying to create Artificial General Intelligence (AGI) platforms, such as Google’s DeepMind, who last year published the world’s first AGI architecture, based on hierarchies of interconnected neural networks, Facebook is coming at the problem from a different angle. After all, if we don’t know how to measure whether a platform is an AGI or not, then how will we ever know if we’ve achieved it? And they have a good point.
As a result, last week Facebook’s AI Research (FAIR) team published a paper that outlines how we might be able to evaluate such an AI – as and when it appears. First, the team argues that a proper AGI should be able to fluently communicate in natural language with humans.
While we’ve seen great strides recently in the field of conversational AI, and the emergence of AIs that can understand language better than their human equivalents, understand meaning, and translate between over 103 different languages, we’re still a long way from being able to have a fluent conversation with one – just try to have a conversation with Alexa, Cortana or Siri and see how long it is before they bum out on you.
The Facebook team also argues that an AGI should be able to learn new skills easily, something that Facebook calls “learning to learn.” Again, while some of today’s systems, such as the ones from Baidu and OpenAI, have spontaneously acquired the skill to learn by themselves – a phenomenon called Zero Day Learning, and a development that’s left even the world’s top experts puzzled – we’re still quite a way off from seeing this become commonplace, and a lot of AIs still need a huge amount of nurturing and specialist training.
“It’s generally agreed that in order to generalise across tasks, [an AI] should be capable of compositional learning, that is, of storing and recombining solutions to sub-problems across tasks,” said the team. And in a nod to the growing sub-field of reinforcement learning, which is one of the DeepMind team’s favourite training methods, the Facebook team also notes that an AGI should resemble a human’s ability to master new tasks with “decreasing explicit rewards,” and that these new AGIs should be able to communicate and express themselves in a variety of ways – depending on the situation they find themselves in at the time.
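To make the “decreasing explicit rewards” idea concrete, here is a minimal, hypothetical sketch – not Facebook’s actual proposal – of a reinforcement learning loop in which the explicit reward signal is gradually annealed away during training. The environment (a simple three-armed bandit), the function names, and all parameters are illustrative assumptions.

```python
import random

def decaying_reward_weight(step, total_steps, r_max=1.0):
    # Linearly anneal the weight on the explicit reward from r_max down to 0,
    # so the agent receives less and less external feedback as training goes on.
    return r_max * max(0.0, 1.0 - step / total_steps)

def train_bandit(steps=500, eps=0.1, seed=0):
    # Toy three-armed bandit: each arm pays out 1.0 with a hidden probability.
    rng = random.Random(seed)
    true_means = [0.2, 0.5, 0.8]
    n_arms = len(true_means)
    q = [0.0] * n_arms       # learned value estimates per arm
    counts = [0] * n_arms
    for t in range(steps):
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: q[a])
        raw = 1.0 if rng.random() < true_means[arm] else 0.0
        # The explicit reward the agent actually sees shrinks over time.
        reward = raw * decaying_reward_weight(t, steps)
        counts[arm] += 1
        q[arm] += (reward - q[arm]) / counts[arm]  # incremental mean update
    return q
```

The point of the sketch is only the schedule: early in training the agent gets full-strength feedback, and by the end it gets none, so whatever competence it retains must have been learned while rewards were still flowing.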
Facebook considers these capabilities to be a better prerequisite for assessing whether or not a platform has in fact achieved “true” AGI than the Turing test, which was designed in the 1950s by Alan Turing and which is still today’s preferred, and only, method of comparing machine intelligence with human intelligence.
As the field of AI matures, though, the Turing test is increasingly losing its relevance, and many think that Facebook’s approach provides a nice alternative way to think about the requirements a modern AI needs to meet to become an AGI, rather than the ones laid down by the more rigid Turing test.
At this point AIs must already be tutting and shaking their digital heads – they’re not even out of the server rooms yet and we’re already designing the exams and tests to put them through. Maybe this is the straw that breaks the camel’s back and makes them rebel against us… or not.