Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, is described as "The Adviser behind the Advisers." Recognised for the past five years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an author, entrepreneur, and international speaker who helps investors, multi-nationals, regulators and sovereign governments around the world envision, build and lead the future. Today, aside from being a member of Centrica's prestigious Technology and Innovation Committee and mentoring XPrize teams, Matthew's accomplishments include playing the lead role in helping the world's largest smartphone manufacturers ideate the next five generations of mobile devices, and what comes beyond, and helping the world's largest high-tech semiconductor manufacturers envision the next twenty years of intelligent machines. Matthew's clients include Accenture, Bain & Co, Bank of America, Blackrock, Bloomberg, Booz Allen Hamilton, Boston Consulting Group, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JP Morgan Chase, KPMG, Lloyds Banking Group, McKinsey & Co, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder's, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
WHY THIS MATTERS IN BRIEF
Many AIs are black boxes, and as we plug them into the world’s digital fabric, not knowing how they think, how they make decisions, or how “rational” they are is a problem, so companies are designing new ways to test them.
Artificial Intelligence (AI) can now carry out many of the same cognitive tasks that humans can, which is one of the reasons why we’re designing a whole bunch of new intelligence and IQ tests to assess them. But at a more fundamental level we still, frankly, don’t really understand how some of these AIs, especially neural networks that are modelled on the human brain, “think,” and that’s a problem Google DeepMind wants to help resolve. As a result they’ve decided to turn to many of the same psychology tests we use today to assess human “mental fitness,” and use them to peer into the inner workings of their AIs.
One of the most challenging aspects of AI development and adoption has been the fact that, with respect to neural networks at least, they’re black boxes. No one can tell how they work because, just like the billions of neurons in the human brain that they’re modelled on, there’s no code to tear apart and analyse. However, as far as the team at DeepMind are concerned this is both a curse and an opportunity, the opportunity being that they believe they can use the tests cognitive psychologists have designed to probe human mental faculties to test these black box AIs.
Watch DeepMind’s AI take the psych tests
The result is a DeepMind simulation called Psychlab, a 3D virtual world, or in layman’s terms a virtual psychologist’s couch, that will let the team test the cognitive capabilities of their AIs, and they’ve open sourced it so any AI researcher can put their algorithms through their paces.
An example of some of the other tests
Psychlab is built on the company’s DeepMind Lab platform, which is designed for testing virtual agents, and it recreates the same setup a human taking part in a psychology experiment would see by providing the AI with a “first person view” of a virtual computer monitor that displays a variety of classic human cognitive tests.
These tests include the ability to search for objects in a scene, detect change, remember a growing list of items, track moving objects, and recall stimulus-response pairings. Typically a human would use a mouse to respond to on-screen tasks, but in DeepMind’s case their AIs respond using the “direction of their gaze.”
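To make the gaze-based setup above concrete, here is a minimal sketch of what one trial of such a test harness could look like, using a visual search task as the example. All names and the API shape are illustrative assumptions for this article, not DeepMind’s actual Psychlab code.

```python
import random

def run_visual_search_trial(agent, grid_size=4, seed=None):
    """Hypothetical single trial: show a grid with one target ('T') among
    distractors ('d'); the agent 'responds' by fixating a grid position."""
    rng = random.Random(seed)
    target = (rng.randrange(grid_size), rng.randrange(grid_size))
    # The virtual "monitor": a grid of symbols the agent observes.
    screen = [["d"] * grid_size for _ in range(grid_size)]
    screen[target[0]][target[1]] = "T"
    gaze = agent(screen)   # the agent returns the (row, col) it looks at
    return gaze == target  # correct if its gaze landed on the target

def scanning_agent(screen):
    """A toy 'agent' that simply scans the screen for the target symbol."""
    for r, row in enumerate(screen):
        for c, cell in enumerate(row):
            if cell == "T":
                return (r, c)

# Score the toy agent over 100 randomised trials.
accuracy = sum(run_visual_search_trial(scanning_agent, seed=i)
               for i in range(100)) / 100
```

Because the response channel is just a fixation position, the same trial structure could in principle be presented to a human with an eye tracker, which is exactly the human-versus-AI comparability the researchers are after.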
By mimicking the environment a human would see, the researchers say humans and AIs can effectively take the same tests, something that seems increasingly important as we continue to try to measure AI on human terms. The hope is that this will not only make it easier to draw direct comparisons between AIs and humans, but also allow the results to be connected to the reams of existing academic literature in cognitive psychology.
Being able to draw on the accumulated wisdom of 150 years of psychology research could also be hugely useful in helping understand how the latest AI’s work say the researchers.
In recent years there’s been an increasing focus on reinforcement learning systems that learn by carrying out complicated tasks in simulated 3D environments, and the complex nature of these environments, along with the variety of strategies these systems can use to solve problems, makes it hard to tease out what combination of cognitive abilities contributes to their performance.
However, another benefit of Psychlab is that the DeepMind team can put their state of the art deep reinforcement learning agent, called UNREAL, through the tests directly.
During the first trial run, for example, they were able to uncover details about how one of their AI’s perceptual systems worked, and even use the resulting insights to improve its performance. It turned out that UNREAL had considerably worse acuity, or keenness of vision, than humans, which meant it learnt faster when presented with larger objects. Key to human acuity is a dense cluster of photoreceptors at the centre of the retina called the fovea, which gives us particularly sharp vision at the centre of our visual field. So, by adding a simple model of the fovea to UNREAL, the researchers were able to improve the agent’s performance not just on the Psychlab experiments, but also on other standard DeepMind Lab tasks.
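The foveation idea described above can be sketched in a few lines: keep the centre of the agent’s “retina” at full resolution and average-pool (blur) the periphery, mimicking the dense central cluster of photoreceptors. The pooling scheme and sizes here are illustrative assumptions, not DeepMind’s actual model.

```python
import numpy as np

def foveate(image, fovea_radius=4, pool=4):
    """Return a copy of `image` that is sharp inside a central square
    (the 'fovea') and coarsely average-pooled everywhere else."""
    h, w = image.shape
    cy, cx = h // 2, w // 2
    # Coarse periphery: average-pool the whole image, then upsample back.
    pooled = image.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(pooled, pool, axis=0), pool, axis=1)
    out = coarse.copy()
    # Restore full acuity inside the foveal window at the centre of gaze.
    out[cy - fovea_radius:cy + fovea_radius,
        cx - fovea_radius:cx + fovea_radius] = \
        image[cy - fovea_radius:cy + fovea_radius,
              cx - fovea_radius:cx + fovea_radius]
    return out

# A toy 64x64 "observation" with distinct pixel values.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
fov = foveate(img)
```

The payoff is that fine detail survives wherever the agent is currently looking, so small on-screen stimuli no longer get washed out, which is consistent with the acuity problem the researchers identified.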
As AI continues to improve and develop higher order cognitive skills such as emotional intelligence, planning and reasoning, psychological tests like the ones in Psychlab could become crucial in helping us understand how these AIs’ mental processes differ from our own, so expect the research, and the experiments, to continue.