Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products, services, industries, and countries. Matthew’s clients include three Prime Ministers, several governments, and the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
True unsupervised learning, which lets AIs learn by themselves, would represent a quantum leap in AI development and push the field to the next level.
Most of today’s artificial intelligence (AI) systems rely on machine learning algorithms that predict specific outcomes by drawing on pre-established values. But now researchers from OpenAI, a company backed by Elon Musk and Peter Thiel that is trying to democratise AI for “human good,” have discovered that a machine learning system they created to predict the next character in the text of Amazon reviews evolved, by itself, into an unsupervised learning system that could read sentiment. That’s a pretty big deal, and it’s also something that, at the moment, even the researchers themselves can’t fully explain.
“We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment,” said OpenAI in a blog.
According to the post, OpenAI’s neural network was able to train itself to analyse sentiment accurately by classifying Amazon’s reviews as either positive or negative, and it then generated follow-on text that fit the sentiment.
The AI the team used was what’s known as a multiplicative long short-term memory (LSTM) model. It was trained for a month, processing 12,500 characters a second on Nvidia Pascal GPUs – hardware Nvidia’s own CEO gifted to Elon Musk last year – with “4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text.”
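To make the architecture concrete, here is a minimal NumPy sketch of a single multiplicative LSTM step, assuming the standard mLSTM formulation in which the recurrent state is modulated by the current input before the gates read it. The dimensions, weights, and the `mlstm_step` helper are all toy illustrations, nothing like the 4,096-unit model trained on 82 million reviews.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlstm_step(x, h_prev, c_prev, p):
    """One step of a multiplicative LSTM (mLSTM).

    Unlike a vanilla LSTM, the recurrent input is first modulated by the
    current character: m = (Wmx @ x) * (Wmh @ h_prev). The usual LSTM
    gates then read m in place of h_prev, letting each input symbol pick
    its own recurrent transition.
    """
    m = (p["Wmx"] @ x) * (p["Wmh"] @ h_prev)            # input-dependent recurrence
    i = sigmoid(p["Wix"] @ x + p["Wim"] @ m + p["bi"])  # input gate
    f = sigmoid(p["Wfx"] @ x + p["Wfm"] @ m + p["bf"])  # forget gate
    o = sigmoid(p["Wox"] @ x + p["Wom"] @ m + p["bo"])  # output gate
    c = f * c_prev + i * np.tanh(p["Wcx"] @ x + p["Wcm"] @ m + p["bc"])
    h = o * np.tanh(c)
    return h, c

# Toy dimensions: a one-hot byte (vocab 256) and 32 hidden units.
rng = np.random.default_rng(0)
n_in, n_h = 256, 32
p = {name: rng.normal(0, 0.1, shape)
     for name, shape in [("Wmx", (n_h, n_in)), ("Wmh", (n_h, n_h)),
                         ("Wix", (n_h, n_in)), ("Wim", (n_h, n_h)),
                         ("Wfx", (n_h, n_in)), ("Wfm", (n_h, n_h)),
                         ("Wox", (n_h, n_in)), ("Wom", (n_h, n_h)),
                         ("Wcx", (n_h, n_in)), ("Wcm", (n_h, n_h))]}
for b in ("bi", "bf", "bo", "bc"):
    p[b] = np.zeros(n_h)

# Run the cell character by character over a snippet of review text.
h, c = np.zeros(n_h), np.zeros(n_h)
for ch in b"great product":
    x = np.zeros(n_in)
    x[ch] = 1.0
    h, c = mlstm_step(x, h, c, p)
print(h.shape)  # (32,)
```

In the real system, a softmax layer on top of `h` predicts the next character; the hidden state itself is what later turned out to carry the sentiment signal.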
After training it, the researchers turned the model into a sentiment classifier using a linear combination of these units. When they noticed that the classifier relied on just a few of the learned units rather than all of them, they discovered a single “sentiment neuron” with a highly predictive sentiment value.
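The probing step above can be sketched as follows. Synthetic "hidden states" stand in for the real mLSTM features (one planted unit carries the label, the rest are noise), and a sparsity-inducing L1-penalised logistic regression, trained here by simple proximal gradient descent, concentrates its weight on the few informative units, which is how a single dominant "sentiment neuron" would reveal itself. Everything here is an illustrative assumption, not OpenAI's actual training code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_units, sentiment_unit = 2000, 64, 7

# Synthetic stand-in for mLSTM hidden states: unit 7 carries the label,
# the other 63 units are pure noise. In the real experiment the model's
# 4,096-dim hidden state plays this role.
y = rng.integers(0, 2, n_samples)              # 0 = negative, 1 = positive review
H = rng.normal(0, 1, (n_samples, n_units))
H[:, sentiment_unit] += 3.0 * (2 * y - 1)      # plant a "sentiment neuron"

# L1-regularised logistic regression via proximal gradient descent: the
# sparsity pressure drives most weights to zero, exposing the few units
# the linear classifier actually relies on.
w, b, lr, lam = np.zeros(n_units), 0.0, 0.1, 0.01
for _ in range(300):
    prob = 1.0 / (1.0 + np.exp(-(H @ w + b)))
    w -= lr * (H.T @ (prob - y) / n_samples)
    b -= lr * np.mean(prob - y)
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold (L1)

print(int(np.argmax(np.abs(w))))  # 7 — the planted unit dominates
```

Inspecting which units end up with large weights is exactly the kind of check that would surface one unit doing most of the work.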
The diagram below represents the character-by-character value of the sentiment neuron, displaying negative values as red and positive values as green. Note that strongly indicative words like “best” or “horrendous” cause particularly big shifts in the colour.
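A heat-map like the one described can be approximated by shading each character according to the neuron's value at that point. The sketch below renders HTML spans on a red-to-green scale; the per-character values fed in are hypothetical, since real ones would come from reading the sentiment neuron as the model steps through the text.

```python
def sentiment_html(text, values):
    """Render each character on a red-to-green background according to a
    per-character sentiment-neuron value in [-1, 1]: red for negative,
    green for positive, mimicking the article's diagram.
    """
    spans = []
    for ch, v in zip(text, values):
        v = max(-1.0, min(1.0, v))              # clamp to [-1, 1]
        red = int(255 * max(0.0, -v))           # negative values pull red
        green = int(255 * max(0.0, v))          # positive values pull green
        spans.append(f'<span style="background:rgb({red},{green},0)">{ch}</span>')
    return "".join(spans)

# Hypothetical neuron trace: the value climbs as a telling word unfolds.
print(sentiment_html("best", [0.1, 0.3, 0.7, 0.9]))
```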
So far the sentiment analysis capabilities of the new AI have surpassed every other approach tested on the Stanford Sentiment Treebank, a small but extensively studied sentiment analysis data set: the OpenAI team’s AI was 91.8% accurate, beating the previous record of 90.2%.
Every AI researcher I’ve ever met seems to dream of the day they finally make a breakthrough and create an unsupervised learning system – one where the AI simply learns by itself, eliminating the need to feed it huge tomes of training data – and it looks like OpenAI’s mLSTM has achieved that, or at worst come very close, depending on your view. That said, the researchers also recognise that theirs might not be the only machine out there at the moment that can perform unsupervised learning:
“We believe the phenomenon is not specific to our model, but is instead a general property of certain large neural networks that are trained to predict the next step or dimension in their inputs,” they said.
“Our results are a promising step towards general unsupervised representation learning,” they said, “and we found the results by exploring whether we could learn good quality representations as a side effect of language modelling, and scaled up an existing model on a carefully chosen dataset. Yet the underlying phenomena remain more mysterious than clear.”
Being able to learn unsupervised would give AIs a significant boost and dramatically reduce the time it takes them to learn new tasks. And as the team at DeepMind closes in on creating the world’s first Artificial General Intelligence architecture, that, combined with an AI that can learn unsupervised, would represent a step change – another in a long line of step changes – on the path to unleashing AI’s true potential.