Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank working between the dates of 2020 and 2070, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past five years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, Europe's largest utility company, and his recent work includes mentoring XPrize teams, building the first generation of biocomputers, re-inventing global education, and helping the world's largest manufacturers envision, design and build the next 20 years of devices, smartphones and intelligent machines. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, the USAF and many others.
WHY THIS MATTERS IN BRIEF
Leaps in computing performance are letting AIs produce real-time, photorealistic fake video footage with a variety of applications, from training self-driving cars to creating fake news.
As everyone knows, or at least suspects, the sunny weather in California is ideal for training the new generation of self-driving cars, but it has its drawbacks: as soon as you move away from California the weather, well, gets a lot worse. Head north to Seattle and the only sun you'll see is on your screensaver, and head over to northern Europe and you have a range of weather to contend with, from blizzards to dense fog. And that's on a good day.
In the early days the manufacturers developing self-driving cars would put them through their paces by driving them millions of miles around the world's roads, but a couple of years ago all that changed, and those same manufacturers began using simulators that let them complete millions of miles of testing every day. And if you didn't think testing was important, consider that a new technique that lets us "peer" into the brains of the neural networks behind self-driving car software recently discovered a bug in Nvidia's DAVE-2 software that would have driven you off a cliff. Yes, self-driving cars are awesome.
The whole point of testing software is obviously to see how it reacts in a variety of conditions, everything from how it handles turns to how it handles a rogue moose chasing children around a road in Norway, and everything in between. This week Nvidia released a new Artificial Intelligence (AI) framework that lets researchers mix the simulations up, turning sunny streets into snow-filled ones, creating downpours where there normally aren't any, and turning day into night. And the technology has applications far outside the automotive market too, in the worlds of special effects and video games, and many more besides, including fake news videos, something I've discussed at length before.
The research is based on an AI technique that's particularly good at generating visual data: a Generative Adversarial Network (GAN), which combines two separate neural networks, one that makes the data and another that judges it, rejecting samples that don't look accurate or realistic enough. In this way, the AI teaches itself to generate better and better results over time, and as GANs mature they're being used more and more to create everything from photorealistic fake celebrities and images, another Nvidia parlour trick, to new fashion designs for Amazon.
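The maker-versus-judge loop described above can be sketched in a few lines of code. This is a deliberately tiny toy, not Nvidia's model: the "networks" are single linear functions, the data is a 1-D Gaussian rather than images, and all names and hyperparameters here are illustrative assumptions. It only shows the adversarial training rhythm — discriminator pushed to tell real from fake, generator pushed to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (illustrative only): the "real" data is a 1-D Gaussian
# with mean 4.0; each network is one weight and one bias.
g_w, g_b = rng.normal(), 0.0   # generator: noise z -> g_w*z + g_b
d_w, d_b = rng.normal(), 0.0   # discriminator: x -> sigmoid(d_w*x + d_b)

lr = 0.01
for step in range(2000):
    real = rng.normal(4.0, 1.25, size=32)   # samples the judge should accept
    z = rng.normal(size=32)                 # generator input noise
    fake = g_w * z + g_b

    # Discriminator update: push D(real) toward 1, D(fake) toward 0
    # (gradients of the binary cross-entropy loss, worked by hand).
    d_real = sigmoid(d_w * real + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # Generator update: push D(fake) toward 1, i.e. fool the judge.
    fake = g_w * z + g_b
    d_fake = sigmoid(d_w * fake + d_b)
    grad = (d_fake - 1) * d_w               # dLoss/dfake via the chain rule
    g_w -= lr * np.mean(grad * z)
    g_b -= lr * np.mean(grad)

# After training, the generator's samples should have drifted from
# mean 0 toward the real data's mean of ~4.
samples = g_w * rng.normal(size=1000) + g_b
print(float(np.mean(samples)))
```

The key point is that neither network is ever shown "the answer": the generator improves only because the discriminator keeps raising the bar, which is exactly the dynamic that lets GANs produce increasingly realistic imagery.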
“Let it snow, let it snow, let it snow”
Nvidia's latest research, though, has one big advantage over existing GANs: it learns with much less supervision. Generally, programs of this sort need labelled datasets to generate data, and as Nvidia researcher Ming-Yu Liu explained, this means that if you're making a GAN that turns a daytime scene into a nighttime one, you'd need to feed it pairs of images taken at the same location by day and by night. It would then study the differences between the two and generate new examples. But Nvidia's new program doesn't need this prep work because it works without labelled data, and this is a major advantage.
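To see why dropping paired data matters, it helps to look at what replaces it. Without matched day/night photos of the same spot, there is no ground truth to compare a translated image against, so unpaired methods substitute a structural constraint instead. One well-known example (CycleGAN-style cycle consistency, a related approach rather than Nvidia's own, which couples two GANs through a shared latent space) demands that translating day to night and back again recovers the original image. The sketch below is a hypothetical illustration with stand-in linear "translators" in place of real networks.

```python
import numpy as np

# Stand-in "translator" functions -- in a real system these would be
# trained neural networks, not fixed affine maps.
def g_day2night(x):
    return 0.5 * x - 1.0

def g_night2day(x):          # constructed here as the exact inverse
    return 2.0 * (x + 1.0)

def cycle_loss(day_batch):
    """Translate day -> night -> day and score how much was lost."""
    night = g_day2night(day_batch)
    reconstructed = g_night2day(night)
    return float(np.mean((day_batch - reconstructed) ** 2))

day_batch = np.linspace(0.0, 1.0, 8)   # pretend pixel intensities
print(cycle_loss(day_batch))           # ~0: the two maps invert each other
```

During training the two translators are learned jointly, and minimising this reconstruction error forces them to preserve the content of the scene while only the day/night "style" changes — which is what lets the method work from two unaligned piles of images rather than matched pairs.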
“We are among the first to tackle the problem,” said Ming-Yu, “[and] there are many applications. For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.”
And the program doesn't just work on pictures of streets; Ming-Yu and his colleagues also tested it on pictures of cats and dogs, turning one breed into another, and used it to change the expressions on people's faces in photographs. It's similar to the technology used in face-changing apps like FaceApp, and, like other research in this area, raises fears about AI being used to create fake imagery that will trick people online.
"This work can be used for image editing," suggests Ming-Yu, although he adds that there are no concrete applications for the program just yet. "We're making this research available to our product teams and customers. I can't comment on the speed or extent of their adoption."
So, snow in California, whatever will happen next? Sun in the UK? That’s crazy.