Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures think tank working between 2020 and 2070, an award-winning futurist, and the author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
As militaries continue to weaponise AI, getting it to explain its decisions and actions will be crucial, and that will not be an easy task to achieve.
Intelligence agents and military operatives may come to rely heavily on Machine Learning and Artificial Intelligence (AI) to parse huge quantities of data and to control a growing arsenal of autonomous systems, but the US military wants to make sure this doesn’t lead to blind trust in algorithms that, despite the handful of tests available to assess how dangerous they are or could become, remain at heart mysterious black boxes.
As a result, the Defense Advanced Research Projects Agency (DARPA), the division of the US Defense Department that explores new technologies, is following the lead of Columbia University, MIT, and Nvidia, all of which have been trying to develop systems that read an AI’s mind and get it to explain its decision-making process, and it has announced funding for several new projects. The approaches range from adding further machine learning systems geared toward producing an explanation, to developing new machine learning approaches that incorporate an “elucidation by design.”
“We now have this real explosion of AI,” says David Gunning, the DARPA program manager funding the effort to develop AI techniques that include some explanation of their reasoning. “The reason for that is mainly machine learning, and deep learning in particular.”
Deep learning and other machine learning techniques have taken Silicon Valley by storm, improving voice recognition and image classification significantly, and they are being used in more contexts than ever before, including areas like law enforcement and medicine, where the consequences of a mistake may be serious. But while deep learning is incredibly good at finding patterns in data, it can be impossible to understand how it reaches a conclusion. The learning process is mathematically very complex, and there is often no way to translate this into something a person would understand. And while deep learning is particularly hard to interpret, other machine learning techniques can also be challenging.
“These models are very opaque and difficult for people to interpret, especially if they’re not an expert in AI,” Gunning says.
Deep learning is especially cryptic because of its incredible complexity. It is roughly inspired by the process by which neurons in a brain learn in response to input. Many layers of simulated neurons and synapses are fed labelled data, and their behaviour is tuned until they learn to recognise, say, a cat in a photograph. But the model learned by the system is encoded in the weights of many millions of neurons, and is therefore very challenging to examine. When a deep learning network recognises a cat, for instance, it isn’t clear whether the system may be focusing on the whiskers, the ears, or even the cat’s blanket in an image.
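To see why weights are so hard to read, consider a toy sketch of a network in NumPy. The weights here are random stand-ins rather than anything trained, and the two-layer structure is purely illustrative, but the point carries over: the model’s entire behaviour is just a pile of numbers with no human-readable rule in it.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network. Its "knowledge" is nothing but these two arrays
# (randomly initialised here; a trained network would have tuned values).
W1 = rng.standard_normal((784, 128))   # input -> hidden weights
W2 = rng.standard_normal((128, 2))     # hidden -> output weights ("cat" vs "not cat")

def predict(pixels):
    hidden = np.maximum(0, pixels @ W1)   # ReLU activation
    logits = hidden @ W2
    return int(logits.argmax())           # chosen class index

image = rng.random(784)                   # a flattened 28x28 stand-in "image"
label = predict(image)                    # a label comes out, but *why* this label?

# Even this tiny model packs its behaviour into ~100,000 opaque numbers;
# real deep networks have millions or billions.
total_weights = W1.size + W2.size
print(label, total_weights)
```

Inspecting `W1` and `W2` directly tells you nothing about whiskers, ears, or blankets, which is exactly the interpretability problem the DARPA-funded teams are attacking.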
Often, it might not matter that much if a machine learning model is opaque, but this isn’t true for an intelligence officer trying to identify a potential target.
“There are some critical applications where you need the explanation,” Gunning says.
Gunning adds that the military is developing countless autonomous systems that will undoubtedly rely heavily on machine learning techniques like deep learning. Self-driving vehicles, along with swarms of autonomous aerial drones, will see growing use in the coming years, he says, and they will become increasingly capable.
Explainability isn’t just important for justifying decisions. It can help prevent things from going wrong. An image classification system that has learned to focus purely on texture for cat classification might be fooled by a furry rug. So offering an explanation could help researchers make their systems more robust, and help prevent those who rely on them from making mistakes.
DARPA is funding 13 different research groups, which are pursuing a range of approaches to making AI more explainable.
One team selected for funding comes from Charles River Analytics, a company that develops high tech tools for various customers, including the US military. This team is exploring new deep learning systems that incorporate an explanation, such as ones that highlight areas of an image that seem most relevant to a classification. The researchers are also experimenting with computer interfaces that make the workings of machine learning systems more explicit with data, visualizations, and even natural language explanations.
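The article doesn’t say which technique Charles River Analytics uses to highlight the image areas driving a classification, but occlusion sensitivity is one common way to do it: mask each region of the image in turn and measure how much the model’s confidence drops. The sketch below uses a made-up stand-in scoring function, not a real classifier, so the numbers are illustrative only.

```python
import numpy as np

def model_score(img):
    # Hypothetical model: its "confidence" depends only on brightness
    # in the top-left 4x4 corner of the image.
    return img[:4, :4].mean()

def occlusion_map(img, patch=4):
    """Return a heatmap of confidence drops, one cell per occluded patch."""
    base = model_score(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i+patch, j:j+patch] = 0.0   # black out one patch
            heat[i // patch, j // patch] = base - model_score(occluded)
    return heat

img = np.ones((8, 8))          # a uniformly bright dummy image
heat = occlusion_map(img)
print(heat)                    # only the top-left patch matters to this toy model
```

Cells with large drops mark regions the model relied on; overlaying such a heatmap on the input image is one way an interface can show a user *where* the network was looking.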
Xia Hu, a professor at Texas A&M University who leads another of the teams chosen for funding, says the problem is also important in other areas where machine learning is being adopted, such as education, finance, law and medicine.
“Without some sort of explanation or reasoning, domain experts are not going to trust the results,” Hu says. “That’s the main reason why many domain experts refuse to adopt machine learning or deep learning, and hopefully this program will go some way toward helping us create AIs that can explain their decision making.”