
WHY THIS MATTERS IN BRIEF

As advanced as today’s AIs are, they still can’t reason, and that’s what’s going to hold them, and us, back from realising the milestone that is Artificial General Intelligence.

Artificial Intelligence (AI) has become pretty good at completing specific tasks over the past couple of years, whether it’s creating fake celebrities, finding and patching cyber vulnerabilities, or predicting when people will die, among many other things. That said, we’re still a long way off from realising Artificial General Intelligence (AGI), an AI with the kind of all-round smarts that would let it navigate and “understand” the world the same way we do, and that’s despite a new General Intelligence breakthrough and the publication of a new AGI architecture by Google DeepMind last year.

One of the key elements of AGI is abstract reasoning – the ability to think beyond the here and now, see more nuanced patterns and relationships, and engage in complex thought. Last week researchers at DeepMind, who also recently created what amounts to a psychology test for their AIs, published a research paper detailing their attempt to measure their AIs’ “abstract reasoning capabilities” by creating tests that aren’t that dissimilar to the ones we use to measure our own reasoning capabilities.

In humans we measure abstract reasoning using fairly straightforward visual IQ tests. One popular test in particular, called Raven’s Progressive Matrices, features several rows of images with the final row missing its final image. It’s up to the test taker to choose the image that should come next based on the pattern of the completed rows.

The test doesn’t outright tell the test taker what to look for in the images; sometimes the progression has to do with the number of objects in each image, sometimes with their colour or their placement. It’s then up to the test taker to work out what’s missing for themselves using their ability to reason abstractly.
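To make the format concrete, here’s a minimal sketch, in Python and very much not DeepMind’s actual generator, of how a toy “number of shapes” progression puzzle could be produced. The panel representation, function name, and distractor logic are all assumptions made for illustration.

```python
import random

def make_count_progression_puzzle(rows=3, cols=3, step=1, n_choices=4):
    """Toy Raven-style puzzle: the hidden rule is that the number of shapes
    in each panel increases by `step` across every row. Panels are
    represented simply by their shape counts."""
    grid = []
    for _ in range(rows):
        start = random.randint(1, 3)          # each row starts at a random count
        grid.append([start + c * step for c in range(cols)])

    answer = grid[-1][-1]                     # the missing final panel
    context = [panel for row in grid for panel in row][:-1]

    # Distractor answers: plausible but wrong counts (kept at 1 or more).
    choices = {answer}
    while len(choices) < n_choices:
        choices.add(max(1, answer + random.choice([-2, -1, 1, 2])))
    choices = sorted(choices)

    return context, choices, choices.index(answer)

context, choices, target = make_count_progression_puzzle()
print("Context panels (shape counts):", context)
print("Candidate answers:", choices, "correct index:", target)
```

In the real tests the panels are rendered images and the hidden rules can involve shape counts, colours, sizes, or positions, but the idea is the same: the rule is never stated, it has to be inferred.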

To apply this test to its AIs, the DeepMind team created a program that could generate unique matrix problems, then trained a variety of AI systems to solve them, and finally tested how well they performed.

In some cases they used test problems built around the same abstract factors as the training set, for example both training and testing the AIs on problems that required them to consider the number of shapes in each image. In other cases the test problems used different abstract factors from those in the original training set, for example training the AI on problems that required it to consider the number of shapes in each image, but then testing it on ones that required it to consider the shapes’ positions to figure out the right answer.
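As a rough illustration of that split, here’s a minimal sketch, again not DeepMind’s actual pipeline, where each generated puzzle is imagined to be tagged with the abstract factor its hidden rule relies on; the factor names and record format are assumptions made purely for the example.

```python
import random

# Hypothetical puzzle records, each tagged with the abstract factor its rule uses.
FACTORS = ["count", "colour", "position", "size"]
puzzles = [{"id": i, "factor": random.choice(FACTORS)} for i in range(10_000)]

def split_same_factors(puzzles, train_frac=0.8):
    """Interpolation regime: training and test problems share the same abstract factors."""
    shuffled = puzzles[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def split_held_out_factor(puzzles, held_out="position"):
    """Extrapolation regime: the test set only uses a factor never seen in training."""
    train = [p for p in puzzles if p["factor"] != held_out]
    test = [p for p in puzzles if p["factor"] == held_out]
    return train, test

train_a, test_a = split_same_factors(puzzles)        # same factors in train and test
train_b, test_b = split_held_out_factor(puzzles)     # "position" puzzles held out for testing
print(len(train_a), len(test_a), len(train_b), len(test_b))
```

The first split tests the models on the same kinds of rules they saw during training, while the second holds an entire factor back for testing, and it’s in that second regime that the team’s models struggled.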

The results of the tests weren’t great though. When the training problems and the test problems focused on the same abstract factors, the systems fared just “alright,” correctly answering the problems 75 percent of the time. However, the team’s AIs performed very poorly when the test set differed from the training set, even when the differences were minor, for example training the AIs on matrices that featured dark coloured objects and then testing them on matrices that featured light coloured objects.

Ultimately, the team’s AI “IQ test” shows that even some of today’s most advanced AIs can’t figure out problems we haven’t trained them to solve, which means we’re probably still a long way from AGI. But at least we now have a straightforward way to monitor their progress and their ability to reason, and one day it’s likely they’ll ace these tests, which will sit nicely alongside other tests from other companies designed to gauge how smart, and how dangerous, AI algorithms are, as well as what their IQs might be.

Source: DeepMind

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
