WHY THIS MATTERS IN BRIEF

Neural networks are incredibly powerful and increasingly capable, but they’re a black box. Now a team from MIT has figured out a way to read their minds.

 

Neural networks, which learn to perform computational tasks by analysing huge sets of training data, have been responsible for most of the recent breakthroughs in Artificial Intelligence (AI), including speech recognition and automatic translation systems. But during training, neural networks continually adjust their internal settings in ways that even their creators don’t understand, and as a consequence they’re often simply referred to as black boxes. And when you’re plugging these neural networks into everything from the global banking system to self-driving cars and healthcare, relying on a technology that no one can explain, in terms of how it works, adapts, evolves or learns, is a problem.

 

In the past year a number of companies have tried to develop techniques to read and understand the minds of these neural network black boxes, from tools that help us debug them to tools that visualise and explain their decision making, but despite the focus it’s still a dark, imperfect art.

Now, though, a team of researchers from MIT and the Qatar Computing Research Institute (QCRI) has applied what they call an “interpretive technique,” originally designed to analyse neural networks trained for machine translation and speech recognition, and found that neural networks ignore certain information and prioritise other information. For example, some of the systems they tested concentrated on lower level tasks, such as sound recognition or part of speech recognition, before moving on to higher level tasks, such as transcription or semantic interpretation. But they also found a surprising omission in the type of information the translation networks considered, and they were able to show that correcting that omission improved the networks’ performance, something that could in future make our AIs more transparent and, equally important, more accurate.

 

“In machine translation, historically, there was sort of a pyramid with different layers,” said Jim Glass, an MIT CSAIL senior research scientist who worked on the project with Yonatan Belinkov, an MIT graduate student. “At the lowest level there was the word, the surface forms, and the top of the pyramid was some kind of inter-lingual representation, and [the networks] had different layers where they were doing syntax, and semantics. This was a very abstract notion, but the idea was the higher up you went in the pyramid, the easier it would be to translate a new language, and then you’d go down again. So, part of what Yonatan is doing is trying to figure out what aspects of this notion are being encoded in the network.”

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into hierarchical layers, where each layer is made up of multitudes of simple processing units, or nodes, each of which is connected to multiple nodes in the layers above and below it. Information is fed into the lowest layer first, whose nodes process it and pass it on to the next layer, where it’s processed again, and so on. The connections between the individual layers all have different “weights,” which determine how much the output of any one node contributes to the calculation performed by the next.
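
That layered, weighted structure can be sketched in a few lines of Python. This is a toy illustration, not anything from the MIT work; the layer sizes, the ReLU non-linearity and the random weights are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feedforward network: an input layer of 4 nodes, two hidden
# layers of 8 nodes each, and an output layer of 3 nodes. Each
# connection between layers carries a "weight".
layer_sizes = [4, 8, 8, 3]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Feed the input through each layer in turn, bottom to top,
    keeping every layer's output so it can be inspected later."""
    activations = [x]
    for W, b in zip(weights, biases):
        # Each node sums its weighted inputs, then applies a simple
        # non-linearity (ReLU) before passing the result upward.
        x = np.maximum(0.0, x @ W + b)
        activations.append(x)
    return activations

acts = forward(rng.normal(size=4))
print([a.shape for a in acts])   # [(4,), (8,), (8,), (3,)]
```

Keeping every layer’s output, rather than just the final one, is what lets researchers inspect what each individual layer has learned.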

 

During an AI’s training phase these weights are constantly readjusted, and after the network is trained, the premise is that its creators can inspect the weight of each connection and so determine how these black boxes do what they do. However, bearing in mind that the majority of today’s AIs have thousands or even millions of nodes, and even more connections between them, figuring out what algorithm those weights encode is all but impossible. And that’s the black box issue.
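
That constant readjustment is, in most modern networks, gradient descent: every weight is nudged slightly in the direction that reduces the error on the current example. A deliberately tiny, made-up sketch with a single weight shows the idea:

```python
# One "network" with a single weight w, trained so that w * x
# approximates the target y for one training example.
x, y = 2.0, 6.0      # training example: input and desired output
w, lr = 0.0, 0.05    # initial weight and learning rate (arbitrary choices)

for _ in range(100):
    error = w * x - y        # how far off the current output is
    w -= lr * 2 * error * x  # step against the gradient of (w*x - y)**2

print(round(w, 3))   # 3.0 -- the weight that maps 2.0 onto 6.0
```

A real network does the same thing simultaneously for millions of weights at once, which is why the resulting configuration resists any simple reading.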

During their research the team trained a network and then took the outputs of each of its layers, in response to individual training examples, and used those outputs to train another neural network to perform a particular task. It was this technique that let them figure out what task each layer was optimised for.
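
A minimal sketch of that probing idea in Python, using made-up activations in place of a real trained network: a simple linear classifier (the “probe”) is trained on each layer’s frozen outputs, and the layer whose activations let the probe score well is the one encoding that task. Everything here, the toy features, the probe and its hyperparameters, is an illustrative assumption rather than the researchers’ actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def probe_accuracy(features, labels, steps=200, lr=0.5):
    """Train a linear softmax probe on frozen layer activations and
    return its accuracy on those examples."""
    n, d = features.shape
    k = labels.max() + 1
    W = np.zeros((d, k))
    onehot = np.eye(k)[labels]
    for _ in range(steps):
        logits = features @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * features.T @ (p - onehot) / n   # softmax-regression gradient
    return (np.argmax(features @ W, axis=1) == labels).mean()

labels = rng.integers(0, 3, size=300)
# Stand-in for a layer that encodes the auxiliary task (label + noise) ...
informative = np.eye(3)[labels] + 0.1 * rng.normal(size=(300, 3))
# ... and a layer that carries no information about the task at all.
uninformative = rng.normal(size=(300, 3))

print(probe_accuracy(informative, labels))    # high: this layer encodes the task
print(probe_accuracy(uninformative, labels))  # near chance: this one does not
```

The probe itself stays deliberately simple, so that a high score reflects information already present in the layer rather than anything the probe computed for itself.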

 

In the case of the speech recognition network, Belinkov and Glass used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language.

The “t” sounds in the words “tea,” “tree,” and “but,” for instance, might be classified as separate phones, but a speech recognition system has to transcribe all of them using the letter “t.” And indeed, Belinkov and Glass found that lower levels of the network were better at recognising phones than higher levels, where, presumably, the distinction was less important.

Similarly, in an earlier paper, presented last summer at the Annual Meeting of the Association for Computational Linguistics, Glass, Belinkov, and their QCRI colleagues showed that the lower levels of a machine translation network were particularly good at recognising parts of speech and morphology, features such as tense, number, and conjugation. But in the new paper, they show that higher levels of the network are better at something called semantic tagging.

 

As Belinkov explains, a part of speech tagger will recognise that “herself” is a pronoun, but the meaning of that pronoun, its semantic sense, is very different in the sentences “she bought the book herself” and “she herself bought the book.” A semantic tagger would assign different tags to those two instances of “herself,” just as a machine translation system might find different translations for them in a given target language.

They also found that the best performing machine translation networks use so-called encoder-decoder models, so the team used that technique as well. In such systems the input, in the source language, passes through several layers of the network, known as the encoder, to produce a vector, a string of numbers that somehow represents the semantic content of the input. That vector passes through several more layers of the network, the decoder, to yield a translation in the target language.
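
Schematically, and under the heavy assumptions of random weights, a toy vocabulary, and simple mean pooling standing in for the real recurrent or attention machinery, an encoder-decoder looks something like this:

```python
import numpy as np

rng = np.random.default_rng(2)
SRC_VOCAB, TGT_VOCAB, DIM = 50, 60, 16   # arbitrary toy sizes

# Encoder: embed each source token, transform it, then pool the whole
# sentence into one vector -- the "string of numbers" that stands in
# for the sentence's semantic content.
E_src = rng.normal(size=(SRC_VOCAB, DIM))
W_enc = rng.normal(size=(DIM, DIM))

def encode(src_tokens):
    hidden = np.tanh(E_src[src_tokens] @ W_enc)  # one state per token
    return hidden.mean(axis=0)                   # the sentence vector

# Decoder: unroll the sentence vector into target-vocabulary scores,
# one output token per step (greedy decoding, no attention).
W_dec = rng.normal(size=(DIM, TGT_VOCAB))
W_step = rng.normal(size=(DIM, DIM))

def decode(vec, length):
    out = []
    for _ in range(length):
        out.append(int(np.argmax(vec @ W_dec)))  # pick the likeliest token
        vec = np.tanh(vec @ W_step)              # update the decoder state
    return out

translation = decode(encode([3, 14, 15]), length=5)
print(translation)   # five target-vocabulary token ids
```

The point of the shape is the bottleneck: everything the decoder knows about the source sentence has to survive the trip through that single vector.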

 

Although the encoder and decoder were trained together, they can be thought of as separate networks, and the researchers discovered that, curiously, the lower layers of the encoder are good at distinguishing morphology, but the higher layers of the decoder are not. So Belinkov and the QCRI researchers retrained the network, scoring its performance according to not only accuracy of translation but also analysis of morphology in the target language. In essence, they forced the decoder to get better at distinguishing morphology.
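
In loss-function terms, that retraining amounts to scoring the network on two objectives at once. A hedged sketch, where the cross-entropy formulation and the weight hyperparameter balancing the two terms are assumptions for illustration, not details from the paper:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Average negative log-probability the model assigns to the right answers."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def combined_loss(trans_logits, trans_labels, morph_logits, morph_labels, weight=0.5):
    """Translation loss plus a weighted penalty for poor morphology
    tagging in the target language, so the decoder is pushed to get
    better at both tasks during retraining."""
    return (cross_entropy(trans_logits, trans_labels)
            + weight * cross_entropy(morph_logits, morph_labels))
```

With `weight` set to 0 this collapses back to an ordinary translation objective; raising it forces the decoder to get better at distinguishing morphology.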

Using this technique, they retrained the network to translate English into German and found that its accuracy increased by 3 percent. While that’s not a huge improvement, it’s an indication that looking under the hood, and being able to read the minds of neural networks, could quickly become much more than just an academic exercise.

About author

Matthew Griffin

Matthew Griffin, award winning Futurist and Founder of the 311 Institute, is described as "The Adviser behind the Advisers." Recognised for the past five years as one of the world's foremost futurists, innovation and strategy experts, Matthew is an author, entrepreneur and international speaker who helps investors, multi-nationals, regulators and sovereign governments around the world envision, build and lead the future. Today, aside from being a member of Centrica's prestigious Technology and Innovation Committee and mentoring XPrize teams, Matthew's accomplishments, among others, include playing the lead role in helping the world's largest smartphone manufacturers ideate the next five generations of mobile devices, and what comes beyond, and helping the world's largest high tech semiconductor manufacturers envision the next twenty years of intelligent machines. Matthew's clients include Accenture, Bain & Co, Bank of America, Blackrock, Bloomberg, Booz Allen Hamilton, Boston Consulting Group, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JP Morgan Chase, KPMG, Lloyds Banking Group, McKinsey & Co, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder's, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
