Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working between 2020 and 2070, and an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
Many AIs are black boxes, and as we plug them into the world’s digital fabric, not knowing how they think, how they make decisions, or how “rational” they are is a problem, so companies are trying to design new ways to test them.
Artificial Intelligence (AI) can now carry out many of the same cognitive tasks that humans can, which is one of the reasons why we’re designing a whole bunch of new intelligence and IQ tests to assess it. But at a more fundamental level we still, frankly, don’t really understand how some of these AIs, especially neural networks modelled loosely on the human brain, “think,” and that’s a problem Google DeepMind wants to help resolve. As a result they’ve decided to use many of the same psychology tests we use today to assess human “mental fitness” to peer into the inner workings of their AIs.
One of the most challenging aspects of AI development and adoption has been the fact that, with respect to neural networks at least, they’re black boxes. No one can tell exactly how they work because, just like the billions of neurons in the human brain they’re modelled on, there’s no straightforward code to tear apart and analyse. As far as the team at DeepMind are concerned, though, this is both a curse and an opportunity, the opportunity being that they believe they can use the tests cognitive psychologists designed to probe human mental faculties to test these black box AIs as well.
Watch DeepMind’s AI take the psych tests
The result is a DeepMind simulation called Psychlab, a 3D virtual world, or in layman’s terms a virtual psychologist’s couch, that lets the team test the cognitive capabilities of their AIs, and they’ve open sourced it so any AI researcher can put their algorithms through their paces.
An example of some of the other tests
Psychlab is built on the company’s DeepMind Lab platform, which is designed for testing virtual agents. It recreates the same setup a human taking part in a psychology experiment would see by giving the AI a “first person view” of a virtual computer monitor that displays a variety of classic human cognitive tests.
These tests include the ability to search for objects in a scene, detect change, remember a growing list of items, track moving objects, and recall stimulus-response pairings. Typically a human would use a mouse to respond to the on-screen tasks, but in DeepMind’s case the AIs respond using the “direction of their gaze.”
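To make the idea concrete, here is a minimal sketch, in plain Python, of how one such test might be scored the way a psychologist scores a human subject: a toy change-detection task. The `agent` callable, trial count, and grid size are hypothetical illustrations for this example only, not DeepMind’s Psychlab API, which instead renders these tasks on a simulated monitor and reads off the agent’s gaze.

```python
import random

def run_change_detection(agent, trials=100, grid=4, seed=0):
    """Score an agent on a toy change-detection task: show a binary
    pattern, then a second pattern that may differ in exactly one cell,
    and ask the agent whether anything changed. Returns accuracy in [0, 1].
    (Illustrative harness only, not DeepMind's Psychlab interface.)"""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        first = [rng.randint(0, 1) for _ in range(grid * grid)]
        second = list(first)
        changed = rng.random() < 0.5  # half the trials contain a change
        if changed:
            second[rng.randrange(len(second))] ^= 1  # flip one cell
        # The agent sees both stimuli and answers True if it detects a change.
        correct += (agent(first, second) == changed)
    return correct / trials

# An ideal "agent" that simply compares the two stimuli cell by cell:
perfect = lambda a, b: a != b
print(run_change_detection(perfect))  # → 1.0
```

Because the harness mirrors how human subjects are scored (proportion of correct trials), the same number could in principle be compared against published human accuracy on equivalent tasks, which is exactly the comparison Psychlab is designed to enable.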
By mimicking the environment a human would see, the researchers say, humans and AIs can effectively take the same tests, something that seems increasingly important as we continue to try to measure AI on human terms. The hope is that this will not only make it easier to draw direct comparisons between AIs and humans, but also allow the results to be connected to the reams of existing academic literature in cognitive psychology.
Being able to draw on the accumulated wisdom of 150 years of psychology research could also be hugely useful in helping understand how the latest AIs work, the researchers say.
In recent years there’s been an increasing focus on reinforcement learning systems that learn by carrying out complicated tasks in simulated 3D environments, and the complex nature of these environments, together with the variety of strategies these systems can use to solve problems, makes it hard to tease out which combination of cognitive abilities contributes to their performance.
However, another benefit of Psychlab is that the DeepMind team can run their state-of-the-art deep reinforcement learning agent, called UNREAL, through the tests it hosts.
During the first trial run, for example, they were able to uncover details about how one of their AI’s perceptual systems worked, and even use the resulting insights to improve its performance. It turned out that UNREAL had considerably worse acuity, or keenness of vision, than humans, which meant it learnt faster when presented with larger objects. Key to human acuity is a dense cluster of photoreceptors at the centre of the retina called the fovea, which gives us particularly sharp vision at the centre of our visual field, so by adding a simple model of the fovea to UNREAL the researchers were able to improve the agent’s performance not just on the Psychlab experiments, but also on other standard DeepMind Lab tasks.
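As an illustration of the idea behind that fix, the sketch below adds a toy fovea to a visual observation: full resolution in a small central window, with everything in the periphery block-averaged into a blur, mimicking sharp central vision and coarse peripheral vision. The function name, window size, and 4x4 averaging factor are assumptions made for this example, not UNREAL’s actual architecture.

```python
import numpy as np

def foveate(image, fovea=8):
    """Toy fovea model (an illustration, not DeepMind's implementation):
    keep the central fovea-by-fovea window at full resolution and replace
    the periphery with a 4x4 block-averaged version of the image."""
    h, w = image.shape
    # Coarse periphery: average 4x4 blocks, then upsample back to full size.
    blurred = image.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
    out = np.kron(blurred, np.ones((4, 4)))
    # Paste the full-resolution fovea back into the centre of the frame.
    y0, x0 = (h - fovea) // 2, (w - fovea) // 2
    out[y0:y0 + fovea, x0:x0 + fovea] = image[y0:y0 + fovea, x0:x0 + fovea]
    return out

img = np.random.rand(32, 32)
obs = foveate(img)
# Central detail survives intact; peripheral detail is averaged away.
assert np.allclose(obs[12:20, 12:20], img[12:20, 12:20])
```

The design intuition matches the article: an agent fed `obs` instead of `img` effectively sees small peripheral objects at low resolution, so directing its gaze (i.e. centring the fovea on a target) becomes the way to resolve fine detail, just as it is for humans.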
As AI continues to improve and develop higher order cognitive skills such as emotional intelligence, planning, and reasoning, psychological tests like the ones in Psychlab could become crucial in helping us understand how these AIs’ mental processes differ from our own, so expect the research, and the experiments, to continue.