
WHY THIS MATTERS IN BRIEF

The advent of powerful AI tools, adaptable neural networks and the democratisation of high definition rendering have taken fake news to a new level, and this program aims to unmask the fakes.

 

We’ve all seen that video where Artificial Intelligence (AI) powered algorithms make it seem like Barack Obama is giving a speech by synthesising his voice and facial movements into a believable, and more importantly credible, video clip. Depending on which side of the fence you sit, it’s both amazing and thought provoking at the same time. And if you thought you were the only one who felt that way, you weren’t, because as it turns out it’s also provoked some similar thoughts within the ranks of the US Department of Defense’s (DOD) bleeding edge research organisation DARPA too.

 


 

Over the course of the summer DARPA have announced that they will fund a contest where participants will compete to create the most believable fake AI generated photos, videos, and audio recordings, collectively referred to as “Deepfakes,” like the examples in the video below. However, the competition is also designed to go one step further and try to discover and develop new, advanced tools to detect these Deepfakes, which are becoming increasingly cheap and simple to make, and more sophisticated, as people, from regular researchers through to cyber criminals, get better at creating AI algorithms that are made to fool us.

 

We’ve already moved way beyond this crude technology…

 

In particular, DARPA is concerned by a relatively new class of AIs called Generative Adversarial Networks (GANs), sophisticated algorithms that pit two neural networks against each other, hence the word adversarial, until one of them homes in on the ability to create something indistinguishable from the genuine article made by people. In this case, that means a world leader being made to say something in an AI generated fake news video versus something they actually said in a speech.
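To give a sense of how that adversarial tug of war works, here is a minimal, illustrative sketch in Python using PyTorch. It is emphatically not DARPA’s tooling or any real Deepfake system: the toy generator below simply learns to imitate a 1-D Gaussian “real” distribution while the discriminator tries to tell its output apart from genuine samples, and every layer size, learning rate and step count is an arbitrary assumption chosen for brevity.

# Minimal, illustrative GAN sketch: a generator and a discriminator trained
# against each other. A toy that mimics a 1-D Gaussian, not a deepfake model.
import torch
import torch.nn as nn

torch.manual_seed(0)
REAL_MEAN, REAL_STD = 4.0, 1.25  # the "real" data the generator must imitate

# Generator: turns random noise into candidate samples
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real rather than generated
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * REAL_STD + REAL_MEAN  # genuine samples
    fake = generator(torch.randn(64, 8))               # the generator's forgeries

    # Train the discriminator to label real samples 1 and fakes 0
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to make the discriminator label its fakes as real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, the generator's output should drift towards the real distribution
print(generator(torch.randn(1000, 8)).mean().item())  # roughly REAL_MEAN

The point of the back and forth is that each side makes the other better, which is exactly why the resulting fakes keep getting harder to spot.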

That said, one of the ways to detect fakes might come from academia, specifically MIT, where researchers a few years ago found a way to use AI to take someone’s heart rate from video, seen below. While using this technique might at first sound odd, Deepfakes don’t have a heartbeat, so this “old” and odd technique could, in the near term, help to quickly weed fakes out.
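To make that idea concrete, here is a rough, assumed sketch in Python, using NumPy and SciPy, of how a pulse-from-video check of this kind could work. It is not MIT’s actual implementation: the synthetic signal below merely stands in for the per-frame average green channel value of a face region, which in genuine footage fluctuates very slightly with every heartbeat, and the frame rate, band limits and noise levels are all illustrative assumptions.

# Illustrative pulse-from-video sketch: average skin colour in a face region
# fluctuates faintly with each heartbeat, so a dominant frequency in the
# physiological band (~0.7-4 Hz) suggests a live subject.
import numpy as np
from scipy.signal import butter, filtfilt

np.random.seed(0)
FPS = 30.0                     # assumed video frame rate
t = np.arange(0, 20, 1 / FPS)  # 20 seconds of frames

# Hypothetical per-frame mean green intensity: a faint 1.2 Hz (72 bpm) pulse
# buried in noise. With a real clip you would average pixels in a face region.
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.05, t.size)

# Band-pass filter to the plausible heart-rate range (roughly 42-240 bpm)
b, a = butter(3, [0.7, 4.0], btype="band", fs=FPS)
filtered = filtfilt(b, a, signal)

# The dominant frequency in that band gives the estimated heart rate
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, d=1 / FPS)
band = (freqs >= 0.7) & (freqs <= 4.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {bpm:.0f} bpm")

Against a genuine clip the dominant frequency should land in a plausible human range, around 72 bpm in this toy example, whereas a Deepfake would tend to show no convincing peak at all.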

 

One way to root out the fakes, from an unexpected source

 

It’s easy to see why the DOD is concerned. Right now the president of the US boasts about the nation’s nuclear arsenal over social media while the US and North Korea inch towards talks of disarmament, and what would definitely not help anyone right now would be a believable, fake video of President Trump or Supreme Leader Kim Jong Un saying they’re planning to launch missiles going viral. But it’s not just internet pranksters or malicious enemies of the state who are making these videos.

 


 

A quick scan through the libraries of Facebook and Google’s published AI research shows that both companies have invested in learning how to develop algorithms that can process, analyze, and alter photos and videos of people, so if DARPA wants to nip this potential threat in the bud, maybe they should look into what the tech giants are doing.

Even though some of these research projects are relatively benign, they could still be used to smooth out the glitches of an altered or fake video, like another of Google’s AI projects designed to reduce the noise in videos and make them more realistic. And some projects are, well, creepier, like Google’s AI algorithm that creates a neutral, front facing photo of a person by analyzing other pictures of them.

The problem is that AI researchers often take a “Can we?” rather than a “Should we?” approach to making the coolest stuff possible. This is particularly relevant for a Facebook research project that found a way to animate the profile photos of its users. The researchers behind the project said that they did not consider any ethical issues or potential misuse of their work while they were building it; they just wanted to create as sophisticated a product as possible.

The problem for DARPA is that fixing this requires a change in attitude towards how technology is developed, and inevitably, as they find ways to detect and combat Deepfakes, their opponents, whether sovereign states or bedroom criminals, will, just like in any war, continue to find ways to outdo them. Arguably it’s a race they won’t win.

