Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurism, innovation, and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Facebook has kick-started a trend of AIs building more AIs, and it’s only going to accelerate.
Deep Neural Networks (DNNs) are taking over the internet. Unlike their more basic pattern-matching Machine Learning cousins, DNNs are able to learn and replicate human-like tasks by analysing vast amounts of digital data, and these artificially intelligent systems are injecting online services with a power that just wasn’t viable in years past.
They’re identifying faces in photos, powering search, pulling meaning from videos, applying meaning to language and translating complex conversations from one language to another. All this we know. But what’s less discussed is how the giants of the Internet go about building these rather remarkable engines of AI.
Companies like Google and Facebook pay top dollar for some really smart people – only a few hundred souls on Earth have the talent and the training needed to really push the boundaries of Deep Learning, and paying for these top minds is a lot like paying for an NFL quarterback, but more expensive. Google reportedly bought DeepMind for $600 million not for its technology but for its twelve-strong team of researchers – a $600 million acqui-hire in which they managed to corner twelve of the world’s top fifty Deep Learning experts in one go. That scarcity of expertise has become a bottleneck that’s threatening to stifle the progress of artificial intelligence. And it’s not the only one.
Even the top researchers can’t build these services without trial and error on an enormous scale. To build a deep neural network that cracks the next big AI problem, researchers first have to try countless options that don’t work, running each one across dozens and potentially hundreds of machines.
“It’s almost like being the coach rather than the player,” says Demis Hassabis, co-founder of DeepMind, the Google outfit behind the history-making AI that beat the world’s best Go player.
“You’re coaxing these things, rather than directly telling them what to do.”
These bottlenecks are why companies like Google and Facebook are now trying to automate this trial and error, or at least part of it. As the thinking goes, if you automate some of the heavy lifting you can more rapidly push the latest machine learning into the hands of rank-and-file engineers, and you can give the top minds more time to focus on bigger ideas and tougher problems. This, in turn, will accelerate the progress of AI inside the internet apps and services that we use every day.
In other words, for computers to get smarter faster, computers themselves must handle even more of the grunt work. The giants of the Internet are building computing systems that can test countless machine learning algorithms on behalf of their engineers, that can cycle through so many possibilities on their own. Better yet, these companies are building AI algorithms that can help build AI algorithms. No joke. Inside Facebook, engineers have designed what they like to call an “automated machine learning engineer,” an artificially intelligent system that helps create artificially intelligent systems. It’s a long way from perfection. But the goal is to create new AI models using as little human grunt work as possible.
After Facebook’s $104 billion IPO in 2012, Hussein Mehanna and other engineers on the Facebook ads team felt an added pressure to improve the company’s ad targeting, to more precisely match ads to the hundreds of millions of people using its social network. This meant building DNNs and other machine learning algorithms that could make better use of the vast amounts of data Facebook collects on the characteristics and behaviour of those hundreds of millions of people.
According to Mehanna, Facebook engineers had no problem generating ideas for new AI, but testing those ideas was another matter. So he and his team built a tool called “Flow”.
“We wanted to build a machine-learning assembly line that all engineers at Facebook could use,” said Mehanna.
Flow is designed to help engineers build, test, and execute machine learning algorithms on a massive scale, and this includes practically any form of machine learning. The idea is that the assembly line can easily test an endless stream of ideas across the company’s sprawling network of data centers. Engineers can run all sorts of algorithmic possibilities, involving not just deep learning but other forms of AI, from logistic regression to boosted decision trees, and the results can feed still more ideas.
“The more ideas you try, the better,” said Mehanna, “and the more data you try, the better.” It also meant that engineers could readily reuse algorithms that others had built, tweaking these algorithms and applying them to other tasks.
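Flow itself is internal to Facebook, but the assembly-line idea it embodies is easy to sketch: queue up candidate models, score each against the same data, and keep a leaderboard. A minimal illustration using scikit-learn, in which the model zoo and the synthetic dataset are stand-ins of my own, not anything Facebook has published:

```python
# Sketch of a Flow-style "assembly line": sweep a queue of candidate
# models over the same data, score each by cross-validation, and rank
# them. In a real system each run would be farmed out to a separate
# machine in the data centre.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative stand-in for real ad or feed data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# The article mentions both ends of this spectrum: simple logistic
# regression and boosted decision trees.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "boosted_trees": GradientBoostingClassifier(n_estimators=50),
}

# Score every candidate and sort best-first.
leaderboard = sorted(
    ((cross_val_score(model, X, y, cv=5).mean(), name)
     for name, model in candidates.items()),
    reverse=True,
)
for score, name in leaderboard:
    print(f"{name}: {score:.3f}")
```

The point is not the two models here but the loop around them: once testing is mechanised, adding a new idea to the queue is cheap, which is what lets engineers try hundreds of thousands of variants.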
Soon, Mehanna and his team expanded Flow for use across the entire company. Inside other teams it helped produce algorithms that choose the links for your Facebook News Feed, recognise faces in photos posted on the social network, and generate audio captions for photos that help the blind understand what’s in them. Flow is even helping Facebook’s Project Aquila map the areas of the planet which don’t have internet access and then optimise the flight paths of its drones.
With Flow, Facebook trains and tests over 300,000 machine learning models every month, and while it used to roll out one new AI model every sixty or so days, it can now release several new models every week.
The idea is catching on as well. Last year Twitter acquired WhetLab, a startup that specialises in this same area, and recently Microsoft described how its researchers use a “similar” system, which it calls “human-assisted search,” to test a sea of possible AI models.
Mehanna, though, like many researchers, wants to accelerate things even more. The company plans to eventually open source Flow alongside fastText and Torchnet, sharing it with the world at large, and according to Mehanna, outfits like LinkedIn, Uber, and Twitter are already interested in using it. Mehanna and his team have also built a tool called AutoML that can remove even more of the burden from human engineers. Running atop Flow, AutoML can automatically “clean” the data needed to train neural networks and other machine learning algorithms, preparing it for testing without any human intervention, and Mehanna envisions a version that could even gather the data on its own. But more intriguingly, AutoML uses artificial intelligence to help build artificial intelligence.
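AutoML’s cleaning step hasn’t been described in detail, but the kind of preparation it automates is familiar: filling in missing values and turning categorical fields into something an algorithm can consume, with no human in the loop. A hedged sketch with pandas, where the `auto_clean` helper and the sample data are purely illustrative:

```python
# Illustrative sketch of automated data cleaning: impute missing
# numeric values with the column median, backfill missing categories,
# and one-hot encode categorical columns.
import pandas as pd

def auto_clean(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical stand-in for an automated cleaning pass."""
    df = df.copy()
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            df[col] = df[col].fillna(df[col].median())  # impute gaps
        else:
            df[col] = df[col].fillna("missing")  # keep rows, flag gaps
    # Turn categorical columns into numeric indicator columns.
    return pd.get_dummies(df)

# Toy raw data with the usual problems: a missing number, a missing label.
raw = pd.DataFrame({
    "clicks": [3, None, 7],
    "country": ["US", None, "DE"],
})
clean = auto_clean(raw)
print(clean)
```

Nothing here is clever on its own; the significance is that steps like these normally eat engineers’ time before any model can even be tested, so automating them feeds the assembly line faster.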
As Mehanna says, Facebook trains and tests about 300,000 machine learning models each month. AutoML can use the results of these tests to train another machine learning model that optimises the training of future models. And it works: the system can automatically choose the algorithms and parameters that are likely to work.
“It can almost predict the result before the training,” Mehanna says.
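How AutoML makes that prediction hasn’t been published, but the underlying meta-learning idea can be sketched: treat past training runs as a dataset of (configuration, accuracy) pairs, fit a model to that log, and use it to rank untried configurations before spending any compute on them. Everything below (the two hyperparameters, the synthetic run log, the choice of regressor) is an illustrative assumption, not Facebook’s design:

```python
# Sketch of a meta-model that "almost predicts the result before the
# training": learn accuracy as a function of hyperparameters from a
# log of past runs, then rank new candidates by predicted accuracy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend log of 200 past runs: columns are (learning_rate, num_layers),
# with a made-up accuracy surface that peaks near lr=0.01, layers=4.
configs = rng.uniform([0.001, 1], [0.1, 10], size=(200, 2))
accuracies = 0.9 - abs(configs[:, 0] - 0.01) - 0.01 * abs(configs[:, 1] - 4)

meta_model = RandomForestRegressor(n_estimators=50, random_state=0)
meta_model.fit(configs, accuracies)

# Rank untried configurations by predicted accuracy, and only spend
# real training time on the most promising one.
proposals = rng.uniform([0.001, 1], [0.1, 10], size=(20, 2))
best = proposals[np.argmax(meta_model.predict(proposals))]
print("most promising config:", best)
```

The design choice worth noting is the recursion: the meta-model is itself just another model trained on Flow’s output, which is what the phrase “AI building AI” cashes out to in practice.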
Inside the Facebook ads team, engineers even built that automated machine learning engineer, called Asimo, and this too has spread to the rest of the company. According to Facebook, there are cases where it can automatically generate enhanced and improved incarnations of existing models, models that human engineers can then instantly deploy to the net.
“It cannot yet invent a new AI algorithm,” Mehanna says, “but who knows, down the road…”
It’s an intriguing idea and one that has captivated science fiction writers for decades – intelligent machines that build more intelligent machines.