Google just gave its AI its own “human” psychologist

WHY THIS MATTERS IN BRIEF

Many AIs are black boxes, and as we plug them into the world’s digital fabric, not knowing how they think, how they make decisions, or how “rational” they are is a problem, so companies are trying to design new ways to test them.

 

Artificial Intelligence (AI) can now carry out many of the same cognitive tasks that humans can, which is one of the reasons why we’re designing a whole bunch of new intelligence and IQ tests to assess it. But at a more fundamental level we still, frankly, don’t really understand how some of these AIs, especially neural networks modelled on the human brain, “think,” and that’s a problem Google DeepMind wants to help resolve. As a result they’ve decided to turn to many of the same psychology tests we use today to assess human “mental fitness” in order to peer into the inner workings of their AIs.

 

One of the most challenging aspects of AI development and adoption has been the fact that, with respect to neural networks at least, they’re black boxes. No one can tell exactly how they work because, just like the billions of neurons in the human brain they’re modelled on, there’s no code to tear apart and analyse. However, as far as the team at DeepMind are concerned this is both a curse and an opportunity, the opportunity being that they believe they can use the tests cognitive psychologists designed to probe human mental faculties to test these black box AIs.

 

Watch DeepMind’s AI take the psych tests

 

The result is a DeepMind simulation called Psychlab, a 3D virtual world, or in layman’s terms a virtual psychologist’s couch, that lets the team test the cognitive capabilities of their AIs, and they’ve open sourced it so any AI researcher can put their own algorithms through their paces.

 

An example of some of the other tests

 

Psychlab is built on the company’s DeepMind Lab platform, which is designed for testing virtual agents, and it recreates the set-up a human taking part in a psychology experiment would see by giving the AI a “first person view” of a virtual computer monitor that displays a variety of classic cognitive tests.

 

These tests include the ability to search for objects in a scene, detect changes, remember a growing list of items, track moving objects, and recall stimulus-response pairings. Typically a human would use a mouse to respond to the on-screen tasks, but in DeepMind’s case the AIs respond with the “direction of their gaze,” as in the sketch below.
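
To make that concrete, here is a minimal, purely illustrative trial loop. The names used here (ToyVisualSearchTrial, run_session, gaze_policy) are hypothetical and are not part of the actual Psychlab or DeepMind Lab API; the point is simply that the agent “answers” by where it ends up looking rather than by clicking a mouse.

```python
# Hypothetical sketch of a gaze-driven trial loop, loosely inspired by the
# Psychlab set-up described above. This is an illustrative assumption, not
# the real DeepMind Lab / Psychlab API.

import random


class ToyVisualSearchTrial:
    """One trial: a target hides at one of N screen positions and the agent
    must fixate the correct position to respond."""

    def __init__(self, num_positions: int = 4):
        self.num_positions = num_positions
        self.target = random.randrange(num_positions)

    def respond(self, fixated_position: int) -> bool:
        """Score the trial: True if the gaze landed on the target."""
        return fixated_position == self.target


def run_session(gaze_policy, num_trials: int = 100) -> float:
    """Run a block of trials and return the agent's accuracy."""
    correct = 0
    for _ in range(num_trials):
        trial = ToyVisualSearchTrial()
        correct += trial.respond(gaze_policy(trial.num_positions))
    return correct / num_trials


# A random-gaze baseline should score around 1 / num_positions (~25% here).
print(run_session(lambda n: random.randrange(n)))
```

Because humans and agents face the same on-screen layout, the same scoring logic can be applied to both, which is what makes the direct comparisons described below possible.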

By mimicking the environment a human participant would see, the researchers say humans and AIs can effectively take the same tests, something that seems increasingly important as we continue to try to measure AI on human terms. The hope is that this will not only make it easier to draw direct comparisons between AIs and humans, but also allow the results to be connected to the reams of existing academic literature in cognitive psychology.

Being able to draw on the accumulated wisdom of 150 years of psychology research could also be hugely useful in helping us understand how the latest AIs work, say the researchers.

 

In recent years there’s been an increasing focus on reinforcement learning systems that learn by carrying out complicated tasks in simulated 3D environments, and the complex nature of those environments, together with the variety of strategies these systems can use to solve problems, makes it hard to tease out which combination of cognitive abilities contributes to their performance.

Another benefit of Psychlab, however, is that it let the DeepMind team subject a state-of-the-art deep reinforcement learning agent called UNREAL to the same battery of tests.

During the first trial run, for example, they were able to uncover details about how the perceptual system of one of their AIs worked, and even use the resulting insights to improve its performance. It turned out that UNREAL had considerably worse acuity, or keenness of vision, than humans, which meant it learnt faster when presented with larger objects. Key to human acuity is a dense cluster of photoreceptors at the center of the retina called the fovea, which gives us particularly sharp vision at the center of our visual field, so by adding a simple model of the fovea to UNREAL the researchers were able to improve the agent’s performance not just on the Psychlab experiments, but also on other standard DeepMind Lab tasks.
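
What “a simple model of the fovea” could look like in practice might be something like the sketch below. It is a rough, hypothetical illustration, not DeepMind’s published implementation: it keeps the center of the agent’s view at full resolution while averaging the periphery into coarse blocks, loosely mimicking how the retina concentrates detail at the center of gaze.

```python
# Illustrative sketch only (an assumption, not DeepMind's implementation): a
# crude "fovea" that keeps the center of the frame sharp while average-pooling
# the periphery into coarse blocks.

import numpy as np


def foveate(frame: np.ndarray, fovea_fraction: float = 0.3, pool: int = 4) -> np.ndarray:
    """Return a copy of an (H, W, C) frame with a sharp central patch and a
    coarsely pooled periphery."""
    h, w, _ = frame.shape

    # Coarsen the whole image by averaging pool x pool blocks, then upsample
    # back to (roughly) the original size - a cheap blur for the periphery.
    coarse = frame[: h - h % pool, : w - w % pool].reshape(
        h // pool, pool, w // pool, pool, -1
    ).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(coarse, pool, axis=0), pool, axis=1)

    out = np.zeros_like(frame)
    out[: coarse.shape[0], : coarse.shape[1]] = coarse.astype(frame.dtype)

    # Paste the full-resolution "foveal" window back over the center.
    fh, fw = int(h * fovea_fraction), int(w * fovea_fraction)
    top, left = (h - fh) // 2, (w - fw) // 2
    out[top : top + fh, left : left + fw] = frame[top : top + fh, left : left + fw]
    return out


# Example: foveate a random 84 x 84 RGB observation before it reaches the agent.
observation = np.random.randint(0, 256, size=(84, 84, 3), dtype=np.uint8)
foveated = foveate(observation)
```

A pre-processing step along these lines forces the agent to “look at” things to see them clearly, which is one plausible way an acuity bottleneck like the one the researchers found could be addressed.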

 

As AI continues to improve and develop higher order cognitive skills such as emotional intelligence, planning and reasoning, psychological tests like the ones in Psychlab could become crucial in helping us understand how these AIs’ mental processes differ from our own, so expect the research, and the experiments, to continue.
