WHY THIS MATTERS IN BRIEF

Neural networks are incredibly powerful and increasingly capable, but they’re black boxes. Now a team from MIT has figured out a way to read their minds.

Neural networks, which learn to perform computational tasks by analysing huge sets of training data, have been responsible for most of the recent breakthroughs in Artificial Intelligence (AI), including speech recognition and automatic translation systems. But during their training, neural networks continually adjust their internal settings in ways that even their creators don’t understand, which is why they’re often simply referred to as black boxes. And when you’re plugging these neural networks into everything from the global banking system to self-driving cars and healthcare, relying on a black box technology whose workings, adaptations, evolution and learning no one understands is a problem.

Over the past year a number of companies have been developing new techniques to read and understand the minds of these neural network black boxes, from tools that help us debug them to tools that help us visualise and explain their decision making, but despite the focus it’s still a dark, imperfect art.

Now, though, a team of researchers from MIT and the Qatar Computing Research Institute (QCRI) have used what they call an “interpretive technique,” originally designed to analyse neural networks trained for machine translation and speech recognition, and found that the networks ignored certain kinds of information and prioritised others. For example, some of the systems they tested concentrated on lower level tasks, such as sound recognition or part of speech recognition, before moving on to higher level tasks, such as transcription or semantic interpretation. But they also found a surprising omission in the type of information the translation networks considered, and they were able to show that correcting that omission improved the networks’ performance, something that in the future could make our AIs more transparent and, equally important, more accurate.

“In machine translation, historically, there was sort of a pyramid with different layers,” said Jim Glass, an MIT CSAIL senior research scientist who worked on the project with MIT graduate student Yonatan Belinkov. “At the lowest level there was the word, the surface forms, and the top of the pyramid was some kind of inter-lingual representation, and [the networks] had different layers where they were doing syntax, and semantics. This was a very abstract notion, but the idea was the higher up you went in the pyramid, the easier it would be to translate a new language, and then you’d go down again. So, part of what Yonatan is doing is trying to figure out what aspects of this notion are being encoded in the network.”

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into hierarchical layers, each made up of multitudes of simple processing units, or nodes, each of which is connected to multiple nodes in the layers above and below it. Information is fed into the lowest layer first, whose nodes process it and pass it on to the next layer, where it’s processed again, and so on. The connections between layers all have different “weights,” which determine how much the output of any one node contributes to the calculation performed by the next.
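
To make that structure concrete, here is a minimal sketch of such a stack of layers in PyTorch; the layer sizes are illustrative and are not taken from the networks in the study.

```python
# A minimal sketch of the layered structure described above (illustrative sizes).
import torch
import torch.nn as nn

# Three stacked layers: information enters at the bottom, and each layer's
# output becomes the next layer's input. Every nn.Linear holds a matrix of
# "weights" -- one per connection between a node below and a node above.
net = nn.Sequential(
    nn.Linear(40, 256), nn.ReLU(),   # lowest layer: raw input features
    nn.Linear(256, 256), nn.ReLU(),  # middle layer
    nn.Linear(256, 64),              # top layer: final representation
)

x = torch.randn(1, 40)               # one example with 40 input features
y = net(x)                           # passed up through the layers in turn

print(sum(p.numel() for p in net.parameters()))  # roughly 93,000 weights
```

Even a toy stack like this holds tens of thousands of trainable weights, which is the scale problem described next.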

During an AI’s training phase these weights are constantly readjusted, and the premise is that once the network is trained its creators can inspect those weights and work out how these black boxes do what they do. However, bearing in mind that most of today’s AIs have thousands or even millions of nodes, and even more connections between them, figuring out what algorithm those weights encode is all but impossible. And that’s the black box issue.

During their research the team trained a network and then, using the outputs of each of its layers in response to individual training examples, trained a second neural network to perform a particular task, and it was this technique that let them figure out what task each layer was optimised for.
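
In machine learning terms this is a “probing” or diagnostic classifier setup. Below is a simplified sketch of the idea in PyTorch; the network, the data and the label set (50 hypothetical phone classes, anticipating the speech example below) are illustrative stand-ins rather than the authors’ actual models or code.

```python
# Probing sketch: freeze a "trained" network, then train a small classifier
# on each layer's activations to see what that layer encodes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Three stand-in layers playing the role of the already-trained network.
layers = [nn.Sequential(nn.Linear(40, 128), nn.ReLU()),
          nn.Sequential(nn.Linear(128, 128), nn.ReLU()),
          nn.Sequential(nn.Linear(128, 128), nn.ReLU())]

inputs = torch.randn(512, 40)                  # stand-in training examples
probe_labels = torch.randint(0, 50, (512,))    # e.g. 50 hypothetical phone classes

# Freeze the "trained" network; only the small probe classifiers learn.
for layer in layers:
    for p in layer.parameters():
        p.requires_grad = False

# Walk up the network: collect each layer's activations and train a small
# linear probe on them. The probe's accuracy suggests how strongly that
# layer encodes the auxiliary task.
activations = inputs
for i, layer in enumerate(layers):
    with torch.no_grad():
        activations = layer(activations)
    probe = nn.Linear(activations.shape[1], 50)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(activations), probe_labels)
        loss.backward()
        opt.step()
    accuracy = (probe(activations).argmax(dim=1) == probe_labels).float().mean()
    print(f"layer {i}: probe accuracy {accuracy:.2f}")
```

Comparing the probes’ accuracies across layers is what lets you say which layer is “better at” a given task.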

In the case of the speech recognition network, Belinkov and Glass used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language.

The “t” sounds in the words “tea,” “tree,” and “but,” for instance, might be classified as separate phones, but a speech recognition system has to transcribe all of them using the letter “t.” And indeed, Belinkov and Glass found that lower levels of the network were better at recognising phones than higher levels, where, presumably, the distinction was less important.

Similarly, in an earlier paper, presented last summer at the Annual Meeting of the Association for Computational Linguistics, Glass, Belinkov, and their QCRI colleagues showed that the lower levels of a machine translation network were particularly good at recognising parts of speech and morphology, features such as tense, number, and conjugation. But in the new paper, they show that higher levels of the network are better at something called semantic tagging.

As Belinkov explains, a part-of-speech tagger will recognise that “herself” is a pronoun, but the meaning of that pronoun, its semantic sense, is very different in the sentences “she bought the book herself” and “she herself bought the book.” A semantic tagger would assign different tags to those two instances of “herself,” just as a machine translation system might find different translations for them in a given target language.
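
As a toy illustration of that contrast, with hypothetical tag names standing in for whatever tag set a real semantic tagger would use:

```python
# Toy illustration only: the tag names are hypothetical stand-ins; what
# matters is the contrast (same POS tag, different semantic tags).
tagged = {
    "she bought the book herself": {
        "pos_tag": "PRON",          # a POS tagger just sees a pronoun
        "sem_tag": "ACTING_ALONE",  # roughly: "without anyone's help"
    },
    "she herself bought the book": {
        "pos_tag": "PRON",          # same part of speech...
        "sem_tag": "EMPHASIS",      # ...but emphatic, "she of all people"
    },
}

for sentence, tags in tagged.items():
    print(f'{sentence!r}: POS={tags["pos_tag"]}, SEM={tags["sem_tag"]}')
```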

They also found that the best performing machine translation networks use so-called encoder-decoder models, so the team used that architecture as well. In such systems the input, in the source language, passes through several layers of the network, known as the encoder, to produce a vector, a string of numbers that somehow represents the semantic content of the input. That vector passes through several more layers of the network, the decoder, to yield a translation in the target language.
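
A minimal sketch of such an encoder-decoder pair in PyTorch might look like the following; the vocabularies, layer sizes and recurrent architecture are illustrative assumptions rather than the systems used in the study.

```python
# Encoder-decoder sketch: the encoder compresses the source sentence into a
# vector, and the decoder unfolds that vector into the target-language sequence.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1200, 64, 128

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, num_layers=2, batch_first=True)

    def forward(self, src_ids):
        _, hidden = self.rnn(self.embed(src_ids))
        return hidden            # the "vector" summarising the source sentence

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, num_layers=2, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, tgt_ids, enc_hidden):
        states, _ = self.rnn(self.embed(tgt_ids), enc_hidden)
        return self.out(states)  # one distribution over target words per step

encoder, decoder = Encoder(), Decoder()
src = torch.randint(0, SRC_VOCAB, (1, 7))    # a 7-token source sentence
tgt = torch.randint(0, TGT_VOCAB, (1, 9))    # the (shifted) target sentence
logits = decoder(tgt, encoder(src))
print(logits.shape)                          # (1, 9, TGT_VOCAB)
```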

Although the encoder and decoder were trained together, they can be thought of as separate networks, and the researchers discovered that, curiously, the lower layers of the encoder are good at distinguishing morphology, but the higher layers of the decoder are not. So Belinkov and the QCRI researchers retrained the network, scoring its performance not only on the accuracy of its translations but also on its analysis of morphology in the target language. In essence, they forced the decoder to get better at distinguishing morphology.
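
One plausible way to implement that kind of joint objective is to add a morphology-tagging loss on top of the decoder’s hidden states and combine it with the usual translation loss; the sketch below is an assumption about the general shape of such a setup, not the paper’s exact training recipe.

```python
# Joint-objective sketch (assumed setup): translation loss plus a morphology
# tagging loss computed from the decoder's hidden states, so the decoder is
# pushed to encode morphology. Weighting and the tagging head are assumptions.
import torch
import torch.nn as nn

HID, TGT_VOCAB, N_MORPH_TAGS = 128, 1200, 40

translate_head = nn.Linear(HID, TGT_VOCAB)      # predicts the next target word
morphology_head = nn.Linear(HID, N_MORPH_TAGS)  # predicts tense, number, etc.

def joint_loss(decoder_states, target_words, morph_tags, alpha=0.3):
    """decoder_states: (batch, steps, HID) hidden states of the decoder."""
    translation = nn.functional.cross_entropy(
        translate_head(decoder_states).flatten(0, 1), target_words.flatten())
    morphology = nn.functional.cross_entropy(
        morphology_head(decoder_states).flatten(0, 1), morph_tags.flatten())
    return translation + alpha * morphology     # alpha trades off the two goals

# Toy example: 1 sentence, 9 decoding steps.
states = torch.randn(1, 9, HID)
words = torch.randint(0, TGT_VOCAB, (1, 9))
tags = torch.randint(0, N_MORPH_TAGS, (1, 9))
print(joint_loss(states, words, tags))
```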

Using this technique they retrained the network to translate English into German and found that its accuracy increased by 3 percent, and while that’s not a huge improvement, it’s an indication that looking under the hood, and being able to read the minds of neural networks, could quickly become much more than just an academic exercise.

About author

Matthew Griffin

Matthew Griffin, Futurist and Founder of the 311 Institute, is described as “The Adviser behind the Advisers.” Among other things Matthew keeps busy helping the world’s largest smartphone manufacturers ideate the next five generations of smartphones, and what comes beyond, helping the world’s largest chip makers envision the next twenty years of intelligent machines, and helping Europe’s largest energy companies re-invent energy generation, transmission and retail. Recognised in 2013, 2015 and 2016 as one of Europe’s foremost futurists and innovation and strategy experts, Matthew is an award-winning author, entrepreneur and international speaker who has been featured on the BBC, Discovery and other outlets. Working hand in hand with accelerators, investors, governments, multi-nationals and regulators around the world, Matthew helps them envision the future, transform their industries, products and go-to-market strategies, and shows them how the combination of new, democratised, powerful emerging technologies is accelerating cultural, industrial and societal change. Matthew’s clients include Accenture, Bain & Co, Bank of America, Blackrock, Booz Allen Hamilton, Boston Consulting Group, Dell EMC, Dentons, Deutsche Bank, Deloitte, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JP Morgan Chase, KPMG, Lloyds Banking Group, McKinsey & Co, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder’s, Sequoia Capital, Sopra Steria, UBS, the UK’s HM Treasury, the USAF and many others.
