
MIT’s latest breakthrough? Getting AIs to explain their decisions

WHY THIS MATTERS IN BRIEF

Artificial intelligence systems are running more and more of our world and its digital fabric, but in many cases just how they make their decisions is a “black box.” This research aims to develop a new AI architecture that will help AIs explain their decisions.

 

Whether you like it or not, artificial intelligence (AI) is here to stay, and it’s inevitably going to play a greater role in all of our lives – whether that’s as benign as helping you optimise your route into work in the morning, or as important as diagnosing disease, controlling autonomous vehicles and warships, creating new encryption schemas or running the global financial system. And there’s no denying that brain-inspired deep learning neural networks have made significant advances in a number of AI-related fields over the past decade. But, like us, they’re not perfect, and we’ve seen time and time again how AI systems can quickly become biased, sexist and even racist.

 


 

At the moment it could be argued that it is easy enough for us to take these “flawed” AIs offline, in a similar way to how Microsoft took Tay, its Hitler-loving chatbot, offline earlier this year, and Google is even developing an AI “kill switch.” But one day it’s going to get much harder to unpick these AIs from the infrastructure of the internet, and the prospect of all-powerful AIs that will one day control energy grids and air traffic control systems, to say nothing of everything else they’ll be plugged into, yet can’t explain how they came to a decision, is already freaking a lot of people out.

Just how AIs “do their thing” is increasingly becoming a black box, one that even the systems’ designers don’t understand, and this is already becoming a problem.

“Deep learning has led to some big advances in computer vision, natural language processing, and other areas,” says Tommi Jaakkola, an MIT professor of electrical engineering and computer science. “It’s tremendously flexible in terms of learning input/output mappings, but the flexibility and power come at a cost, and that is that it’s very difficult to work out why it is performing a certain prediction in a particular context.”

“This becomes a real issue in any situation where there are consequences to making a prediction, or actions that are taken on the basis of that prediction,” Jaakkola said.

Fortunately, that is where a new project from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) might one day be able to help. Jaakkola and his team have managed to show that it is possible to train neural networks in such a way that they do not just offer predictions and classifications, but also rationalize their decisions.

For the study, the researchers examined neural nets that were trained using just text-based data, which is easier to control and mould than the image or video data that is often used to train these sorts of systems.

The team divided their system into two modules. The first module extracted segments of text and scored them on their length and coherence; the second performed the job of predicting or classifying them. This second module was essentially the system’s decision engine, producing the result, or prediction, based on the data it had been fed.
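As a rough illustration of that two-module split, here is a minimal sketch in PyTorch. The class names, layer sizes and the soft word-level mask are our own assumptions for illustration, not the researchers’ actual code: one module scores each word of the input text, and the other predicts a rating from only the words that were kept.

```python
import torch
import torch.nn as nn

class RationaleSelector(nn.Module):
    # Module 1 (hypothetical sketch): scores each word of the input text;
    # high scores mark the words worth keeping as the "rationale".
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):                      # tokens: (batch, seq_len) word ids
        h, _ = self.rnn(self.embed(tokens))
        return torch.sigmoid(self.score(h)).squeeze(-1)   # per-word keep probability

class RatingPredictor(nn.Module):
    # Module 2: the decision engine; it only sees the selected words
    # and predicts the rating from them.
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, tokens, keep_probs):
        x = self.embed(tokens) * keep_probs.unsqueeze(-1)  # zero out unselected words
        _, h = self.rnn(x)
        return self.out(h[-1])                             # predicted star rating
```

Because the prediction is made only from the words the first module selects, those words double as a human-readable explanation of the decision.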

 


 

One of the datasets the researchers tested their system on was a set of reviews from a website where users rated beers. The data included both a text review and a corresponding star rating out of five. With these inputs and outputs, the researchers were able to fine-tune a system that “thought” along the same lines as human reviewers, thereby making its decisions more understandable. If, for example, the text of a review sounded positive, then the system would figure out that the resulting rating was likely to be higher.
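To give a sense of how such a system could be fine-tuned on those review/rating pairs, here is a loose continuation of the sketch above. The loss weights and the use of a soft mask, rather than sampling hard rationales, are simplifications for illustration only: the predictor is trained to match the human star rating, while the selector is penalised for keeping text that is long or fragmented.

```python
import torch
import torch.nn.functional as F

def training_loss(selector, predictor, tokens, stars, lam_len=0.01, lam_coh=0.01):
    # tokens: (batch, seq_len) word ids; stars: (batch,) human ratings out of five
    keep = selector(tokens)                          # per-word keep probabilities
    pred = predictor(tokens, keep).squeeze(-1)       # predicted star ratings
    fit = F.mse_loss(pred, stars)                    # match the human rating
    length = keep.sum(dim=1).mean()                  # prefer short rationales
    coherence = (keep[:, 1:] - keep[:, :-1]).abs().sum(dim=1).mean()  # prefer contiguous spans
    return fit + lam_len * length + lam_coh * coherence
```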

At the end of the experiment the new system’s ratings agreed with the human reviewers’ 96 percent of the time for appearance, 95 percent for aroma and 80 percent for palate, and when the researchers asked the system why it came to these conclusions it was able to provide them with its reasoning.

The team also ran unpublished experiments using a range of other datasets, including biopsy and healthcare data.

“The question of justifying predictions will be a prevalent issue across complex AI systems,” Jaakkola said. “They need to be able to communicate with people. Whether the solution is this particular architecture or not remains to be seen. Right now, we’re in the process of revising this work and making it more sophisticated. But it absolutely opens up an area of research that is very important.”

While this research is in its very early stages, it might set an interesting precedent for other researchers to follow. But make no mistake, there’s a long way to go, and if the history of AI has shown us anything so far it’s that there is a ghost in the machine, and trying to pin it down might ultimately prove impossible.
