Facebook creates a new intelligence test to measure how smart AIs really are

WHY THIS MATTERS IN BRIEF

Today we only have one yardstick to measure how intelligent an AI really is, and it's outdated, so Facebook has created a new one fit for the modern age.

One of the most common misconceptions about many of today's Artificial Intelligence (AI) systems is that they possess something called "generalised intelligence." In other words, many people today can be forgiven for thinking that today's AIs are good at lots of things, hence "general," when in fact, like humans, different AIs are good at different tasks. And as for a single AI that can mimic all of the capabilities of human intelligence, well, they all still fall flat on their little digital faces, and we estimate that that milestone won't come to pass until 2035.

While there are a couple of companies trying to create Artificial General Intelligence (AGI) platforms, such as Google's DeepMind, which last year published the world's first AGI architecture based on hierarchies of interconnected neural networks, Facebook is coming at the problem from a different angle. After all, if we don't know how to measure whether a platform is an AGI or not, how will we ever know if we've achieved it? It's a good point.

As a result, last week Facebook's AI Research (FAIR) team published a paper that outlines how we might evaluate such an AI, as and when it appears. First, the team argues that a proper AGI should be able to communicate fluently with humans in natural language.

While we've seen great strides recently in the field of conversational AI, and the emergence of AIs that can understand language better than their human equivalents, grasp meaning, and translate between over 103 different languages, we're still a long way from being able to have a fluent conversation with one. Just try to have a conversation with Alexa, Cortana or Siri and see how long it is before they bum out on you.

The Facebook team also argues that an AGI should be able to learn new skills easily, something Facebook calls "learning to learn." Again, while some of today's systems, such as those from Baidu and OpenAI, have spontaneously acquired the ability to learn by themselves, a phenomenon called Zero Day Learning, and a development that has left even the world's top experts puzzled, we're still quite a way off from seeing this become commonplace, and a lot of AIs still need a huge amount of nurturing and specialist training.
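To make the idea of "learning to learn" slightly more concrete, here's a minimal, purely hypothetical Python sketch, not taken from Facebook's paper, of how you might score an AI on how quickly it adapts to tasks it has never seen before rather than on how well it performs on one fixed task. The toy task and learner below are illustrative placeholders only.

```python
import random

# Hypothetical illustration of "learning to learn": instead of scoring an
# agent on one fixed task, we score how quickly it improves on tasks it
# has never seen before. Everything here is a toy placeholder.

class LinearTask:
    """A trivial task: guess the hidden value w behind y = w * x."""
    def __init__(self):
        self.w = random.uniform(-2.0, 2.0)

    def loss(self, guess):
        return (guess - self.w) ** 2


def adapt(task, steps=5, lr=0.3):
    """A simple learner: a few gradient-style updates on a brand new task."""
    guess = 0.0
    losses = []
    for _ in range(steps):
        losses.append(task.loss(guess))
        # Gradient of (guess - w)^2 with respect to guess is 2 * (guess - w).
        guess -= lr * 2 * (guess - task.w)
    return losses


def learning_to_learn_score(n_tasks=100):
    """Average drop in loss after a handful of updates on unseen tasks.
    A higher score means faster adaptation, i.e. better 'learning to learn'."""
    improvements = []
    for _ in range(n_tasks):
        losses = adapt(LinearTask())
        improvements.append(losses[0] - losses[-1])
    return sum(improvements) / n_tasks


if __name__ == "__main__":
    print(f"average improvement on unseen tasks: {learning_to_learn_score():.3f}")
```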

"It's generally agreed that in order to generalise across tasks, [an AI] should be capable of compositional learning, that is, of storing and recombining solutions to sub-problems across tasks," the team said. And in a nod to the growing sub-field of reinforcement learning, one of the DeepMind team's favourite training methods, the Facebook team also notes that an AGI should mirror a human's ability to master new tasks with "decreasing explicit rewards," and that these new AGIs should be able to communicate and express themselves in a variety of ways, depending on the situation they find themselves in at the time.
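As a rough, hypothetical illustration of what "decreasing explicit rewards" could look like in practice, again not code from the FAIR paper, the toy sketch below gradually anneals the external reward an agent receives so that, over time, it has to lean on its own internal signals instead.

```python
# Hypothetical sketch of training with "decreasing explicit rewards":
# the weight on the external (explicit) reward is annealed towards zero,
# so the agent must increasingly rely on internal signals such as
# curiosity or reuse of previously learned sub-skills. Toy numbers only.

def explicit_reward_weight(episode, total_episodes):
    """Linearly anneal the explicit reward weight from 1.0 down to 0.0."""
    return max(0.0, 1.0 - episode / total_episodes)


def combined_reward(extrinsic, intrinsic, episode, total_episodes):
    """Blend the shrinking explicit reward with an intrinsic signal."""
    w = explicit_reward_weight(episode, total_episodes)
    return w * extrinsic + (1.0 - w) * intrinsic


if __name__ == "__main__":
    total = 10
    for episode in range(total):
        # Placeholder values standing in for real environment feedback.
        extrinsic, intrinsic = 1.0, 0.2
        reward = combined_reward(extrinsic, intrinsic, episode, total)
        print(f"episode {episode}: blended reward = {reward:.2f}")
```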

Facebook considers these capabilities to be more of a prerequisite for assessing whether or not a platform has in fact achieved "true" AGI than the Turing test, which Alan Turing devised in the 1950s and which is still today's preferred, and arguably only, method of comparing machine intelligence with human intelligence.

As the field of AI matures, though, the Turing test is increasingly losing its relevance, and many think that Facebook's approach provides a nice alternative way to think about the requirements a modern AI needs to meet to become an AGI, rather than the more rigid criteria laid down by the Turing test.

At this point the AIs must already be tutting and shaking their digital heads. They're not even out of the server rooms yet and we're already designing the exams and tests to put them through. Maybe this is the straw that breaks the camel's back and makes them rebel against us… or not.
