Leaps in computing performance are letting AIs produce real-time, photorealistic fake video footage with a variety of applications, from training self-driving cars to helping create fake news.


As everyone knows, or at least suspects, the sunny weather in California is ideal for training the new generation of self-driving cars, but it does have its drawbacks. After all, as soon as you move away from California the weather gets, well, a lot less cooperative. Head north to Seattle and the only sun you’ll see is on your screensaver, and head over to northern Europe and you have a whole range of weather to contend with, from blizzards to dense fog. And that’s on a good day.


In the early days, the manufacturers developing self-driving cars would put them through their paces by driving them millions of miles around the world’s roads. A couple of years ago all that changed, and those same manufacturers began using simulators to test their self-driving car software, letting them complete millions of miles of testing every day. And if you didn’t think testing was important, well, recently a new technique that lets us “peer” into the brains of the neural networks behind self-driving car software discovered a bug in Nvidia’s DAVE-2 software that would have driven you off a cliff. Yes, self-driving cars are awesome.

The whole point of testing software, obviously, is to see how it reacts in a variety of conditions, everything from how it handles turns to how it handles a rogue moose chasing children around a road in Norway, and everything in between. This week Nvidia released a new Artificial Intelligence (AI) framework that lets researchers mix the simulations up, turning sunny streets into snow-filled ones, creating downpours where there normally aren’t any, and turning day into night. And the technology has applications far outside the automotive market too, in the worlds of special effects and video games, and many more besides, including fake news videos, something I’ve discussed at length before.


The research is based on an AI technique that’s particularly good at generating visual data, the Generative Adversarial Network (GAN), which combines two separate neural networks, one that makes the data and another that judges it, rejecting samples that don’t look accurate or realistic enough. In this way the AI teaches itself to generate better and better results over time, and as GANs mature they’re being used more and more to create everything from photorealistic fake celebrities and images, another Nvidia parlour trick, to new fashion designs for Amazon.
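To make that generator-versus-judge dynamic concrete, here’s a minimal sketch of a GAN on toy one-dimensional data. Everything in it, the “real” Gaussian distribution, the linear generator and the logistic-regression discriminator, is an illustrative stand-in of my own, not Nvidia’s code; real GANs use deep networks on images, but the adversarial training loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# "Real" data: a Gaussian centred on 4 stands in for real images.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b maps random noise z to candidate samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) is the "judge" scoring realism.
w, c = 0.1, 0.0

lr, steps, batch = 0.03, 2000, 64
b_hist = []
for _ in range(steps):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = real_batch(batch)
    xf = a * rng.normal(size=batch) + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)
    c -= lr * np.mean(-(1 - dr) + df)
    # Generator update: move so the discriminator scores fakes as real.
    z = rng.normal(size=batch)
    dg = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - dg) * w * z)
    b += lr * np.mean((1 - dg) * w)
    b_hist.append(b)

# The generator's offset oscillates around the real mean of 4; average it out.
b_avg = float(np.mean(b_hist[steps // 2:]))
print(b_avg)
```

The two players settle into a tug of war: the discriminator keeps finding the gap between real and fake samples, and the generator keeps closing it, which is why the generator’s output drifts toward the real distribution.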


“Let it snow, let it snow, let it snow”


Nvidia’s latest research, though, has one big advantage over existing GANs: it learns with much less supervision. Generally, programs of this sort need labelled datasets to generate data, and as Nvidia researcher Ming-Yu Liu explained, this means that if you’re making a GAN that turns a daytime scene into a nighttime one, you’d need to feed it pairs of images taken at the same location at night and during the day. It would then study the difference between the two and generate new examples. But Nvidia’s new program doesn’t need this prep work because it works without labelled data, and this is a major advantage.
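One common way unpaired translation is made to work (popularised by cycle-consistency methods like CycleGAN, which are related to, but not the same as, Nvidia’s system) is to learn two translators, X→Y and Y→X, and demand that translating there and back reconstructs the original. The toy below sketches that idea on two unpaired one-dimensional “domains”; the linear translators are stand-ins for deep networks, and the adversarial realism check is swapped for simple moment matching to keep the sketch self-contained. All names and numbers are my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two unpaired "domains": X ~ N(0,1) ("day") and Y = 2*X' + 3 ("night").
# Crucially, the x and y samples are drawn independently -- no paired examples.
x = rng.normal(0.0, 1.0, 2000)
y = 2.0 * rng.normal(0.0, 1.0, 2000) + 3.0

mx, sx = x.mean(), x.std()
my, sy = y.mean(), y.std()

# Linear "translators": F maps X -> Y, G maps Y -> X (stand-ins for deep nets).
aF, bF, aG, bG = 1.0, 0.0, 1.0, 0.0

lr = 0.02
for _ in range(4000):
    # Distribution-matching term: F(x) should look like samples from Y.
    # (Real systems use an adversarial discriminator here; matching the mean
    # and spread is a simplification that keeps the sketch self-contained.)
    m, s = aF * mx + bF, aF * sx
    dm, ds = m - my, s - sy
    g_aF = 2 * dm * mx + 2 * ds * sx
    g_bF = 2 * dm
    # Cycle-consistency term: G(F(x)) should reconstruct x.
    A = aG * aF - 1.0          # G(F(x)) - x  =  A*x + B
    B = aG * bF + bG
    g_A = 2 * A * (sx**2 + mx**2) + 2 * B * mx
    g_B = 2 * A * mx + 2 * B
    g_aF += g_A * aG
    g_aG = g_A * aF + g_B * bF
    g_bF += g_B * aG
    g_bG = g_B
    aF -= lr * g_aF; bF -= lr * g_bF
    aG -= lr * g_aG; bG -= lr * g_bG

# After training, F should map "day" onto "night" and G should undo it.
cycle_err = float(np.mean((aG * (aF * x + bF) + bG - x) ** 2))
print(aF, bF, cycle_err)
```

Notice that at no point does the loss compare a specific x to "its" y; only the two collections as wholes are compared, which is exactly what lets this style of training skip the paired night-and-day photographs.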


“We are among the first to tackle the problem,” said Ming-Yu, “[and] there are many applications. For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.”

And the program doesn’t just translate pictures of streets. Ming-Yu and his colleagues also tested it on pictures of cats and dogs, turning pictures of one breed into another, and used it to change the expressions of people’s faces in photographs. It’s similar to the technology used in face-changing apps like FaceApp, and, like other research in this area, it raises fears about AI being used to create fake imagery that will trick people online.

“This work can be used for image editing,” suggests Ming-Yu, although he adds that there are no concrete applications for the program just yet. “We’re making this research available to our product teams and customers. I can’t comment on the speed or extent of their adoption.”

So, snow in California, whatever will happen next? Sun in the UK? That’s crazy.

About the author

Matthew Griffin

Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, is described as "The Adviser behind the Advisers." Recognised for the past five years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an author, entrepreneur and international speaker who helps investors, multi-nationals, regulators and sovereign governments around the world envision, build and lead the future. Today, aside from being a member of Centrica's prestigious Technology and Innovation Committee and mentoring XPrize teams, Matthew's accomplishments, among others, include playing the lead role in helping the world's largest smartphone manufacturers ideate the next five generations of mobile devices, and what comes beyond, and helping the world's largest high tech semiconductor manufacturers envision the next twenty years of intelligent machines. Matthew's clients include Accenture, Bain & Co, Bank of America, Blackrock, Bloomberg, Booz Allen Hamilton, Boston Consulting Group, Dell EMC, Dentons, Deloitte, Deutsche Bank, Du Pont, E&Y, Fidelity, Goldman Sachs, HPE, Huawei, JP Morgan Chase, KPMG, Lloyds Banking Group, McKinsey & Co, Monsanto, PWC, Qualcomm, Rolls Royce, SAP, Samsung, Schroeder's, Sequoia Capital, Sopra Steria, UBS, the UK's HM Treasury, the USAF and many others.
