Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
We are now locked in a war as nefarious actors find new ways to weaponise deepfakes and fake news, and defenders try to figure out how to detect and flag them.
An Artificial Intelligence (AI) produced deepfake video could show Donald Trump saying or doing something extremely outrageous and inflammatory – just imagine that! Crazy, I know, but some people might find it believable, and in a worst-case scenario it could sway an election, trigger violence in the streets, or spark an international armed conflict.
Now though a new digital forensics technique promises to protect President Trump, other world leaders, and celebrities against such deepfake chicanery – for now at least. The new method, devised by researchers, uses machine learning to analyze a specific individual’s style of speech and movement, producing what the researchers call a “soft-biometric signature.”
The researchers, from UC Berkeley and the University of Southern California, used an existing tool to extract the face and head movements of individuals. They also created their own deepfakes for Donald Trump, Barack Obama, Bernie Sanders, Elizabeth Warren, and Hillary Clinton using Generative Adversarial Networks (GANs).
In their experiments the team then used machine learning to distinguish the head and face movements that characterize the real person. These subtle signals – the way Bernie Sanders nods while saying a particular word, perhaps, or the way Trump smirks after a comeback – are not currently modelled by deepfake algorithms, and so their absence is a tell.
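In outline, this kind of person-specific detector can be sketched as follows. This is an illustrative reconstruction, not the researchers’ actual code, which they have not released: it assumes per-frame head-pose and facial-movement features have already been extracted by a face tracker (random numbers stand in for them here), summarises each clip as the pairwise correlations between those feature tracks – a “soft-biometric signature” – and trains a one-class model on genuine clips of one person so that clips whose motion statistics fall outside that distribution are flagged as possible fakes. All names and parameters are hypothetical.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def soft_biometric_signature(features: np.ndarray) -> np.ndarray:
    """Summarise one clip as the pairwise Pearson correlations between
    its facial/head-movement feature tracks.

    features: (n_frames, n_features) array, e.g. per-frame head-pose
    angles and facial-movement intensities from a face tracker
    (assumed to be extracted already).
    """
    corr = np.corrcoef(features.T)        # (n_features, n_features) correlation matrix
    iu = np.triu_indices_from(corr, k=1)  # upper triangle, excluding the diagonal
    return corr[iu]                       # flat signature vector, one value per feature pair

# Hypothetical training data: signatures from many genuine clips of one
# person (random stand-ins here, purely for illustration).
rng = np.random.default_rng(0)
real_clips = [rng.standard_normal((300, 20)) for _ in range(50)]
X_real = np.array([soft_biometric_signature(c) for c in real_clips])

# One-class model: learn what "genuine" motion statistics look like for
# this person; anything far outside that distribution is suspicious.
detector = OneClassSVM(nu=0.05, gamma="scale").fit(X_real)

suspect = soft_biometric_signature(rng.standard_normal((300, 20)))
label = detector.predict([suspect])  # +1 = consistent with the person, -1 = anomalous
```

The one-class setup matters: the detector only ever needs authentic footage of the person being protected, so it does not have to be retrained every time forgers invent a new generation technique.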
See the new technique in action
In experiments the technique was at least 92% accurate at spotting several variations of deepfakes, including face swaps and ones in which an impersonator is using a digital puppet. It was also able to deal with the artifacts introduced when a video is recompressed, which can confuse other detection techniques. The researchers now plan to improve the technique by accounting for characteristics of a person’s speech as well. The research, which was presented at a computer vision conference in California this week, was funded by Google and DARPA, the Pentagon’s research agency, which is funding a parallel program to devise better detection techniques.
The problem facing world leaders, and everyone else for that matter, is that it has become ridiculously simple to generate video forgeries using AI, and false news reports, bogus social media accounts, and doctored videos have already undermined political news coverage and discourse. Meanwhile, politicians are especially concerned that fake media could be used to sow misinformation during the upcoming 2020 presidential election.
Some tools for catching deepfake videos have been produced already, but forgers have quickly adapted as you’d expect. For example, for a while it was possible to spot a deepfake by tracking the speaker’s eye movements, which tended to be unnatural in deepfakes. Shortly after this method was identified, however, deepfake algorithms were tweaked to include better blinking. And so the arms race continues.
“We are witnessing an arms race between digital manipulations and the ability to detect those, and the advancements of AI-based algorithms are catalyzing both sides,” says Hao Li, a professor at the University of Southern California who helped develop the new technique. It’s for this reason that his team hasn’t yet released the code behind the new method.
Li says it will be particularly difficult for deepfake-makers to adapt to the new technique, but he concedes that they probably will eventually.
“The next step to go around this form of detection would be to synthesize motions and behaviors based on prior observations of this particular person,” he says.
Li also says that as deepfakes get easier to use and more powerful, it may become necessary for everyone to consider protecting themselves.
“Celebrities and political figures have been the main targets so far,” he says. “But I would not be surprised if in a year or two, digital humans that look indistinguishable from real ones can be synthesized by any end user.”
And he’s right on the latter point – let’s face it, at some point in the near future this tech is going to be democratised, like most tech, and shoved into an app on a smartphone – at which point we’ll be deluged by deepfakes and Donald Trump will be everywhere. Just imagine that…