
WHY THIS MATTERS IN BRIEF

Disinformation and misinformation are a growing pandemic of their own, and Facebook’s latest innovation can help identify who’s creating them.

Deepfakes aren’t a big problem on Facebook right now, but they could be in the future, which is why the company has been funding research into the technology to guard against future threats. Facebook’s latest work is a collaboration with academics from Michigan State University (MSU) who have been finding ways to reverse engineer deepfakes by analyzing Artificial Intelligence (AI) generated images in order to reveal the telltale characteristics of the AI machine learning model that created them.

The research is interesting because it could very well become the best way for Facebook not only to identify deepfakes but also to pinpoint exactly who made them and what else they’ve been creating, thereby helping the company quickly uncover “nefarious” networks of bad actors who might be trying to do everything from discrediting people to rigging democratic elections.

Previous studies have been able to determine which known AI model generated a deepfake, but this work, led by MSU’s Vishal Asnani, goes a step further by identifying the architectural traits of unknown models. These traits, known as hyper-parameters, have to be tuned in each machine learning model like parts in an engine. Collectively, they leave a unique fingerprint on the finished image that can then be used to identify its source.
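The fingerprint-matching idea described above can be illustrated with a toy sketch. To be clear, this is not the MSU/Facebook method, which uses a trained neural network to estimate fingerprints and regress hyperparameters; it is a minimal, hypothetical stand-in assuming only NumPy. Here the high-frequency residual of an image (the image minus a blurred copy of itself) serves as a crude "fingerprint," and a new image is attributed to whichever known model's fingerprint it most resembles by cosine similarity:

```python
# Hypothetical illustration of fingerprint-based model attribution.
# NOT the actual MSU/Facebook algorithm: a toy stand-in in which the
# high-pass residual of an image acts as the model "fingerprint."
import numpy as np

def residual_fingerprint(img: np.ndarray) -> np.ndarray:
    """High-pass residual: the image minus a 3x3 box-blurred copy."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Box blur via nine shifted views of the padded image.
    blur = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return (img - blur).ravel()

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened fingerprints."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def attribute(img: np.ndarray, known: dict) -> str:
    """Attribute an image to the known model with the closest fingerprint."""
    fp = residual_fingerprint(img)
    return max(known, key=lambda name: cosine(fp, known[name]))
```

In the real system the fingerprint is learned rather than hand-crafted, and a second stage infers the generating model's hyperparameters from it, but the matching intuition, that each generator leaves a persistent high-frequency trace, is the same.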

Identifying the traits of unknown models is important, Facebook research lead Tal Hassner told The Verge, because deepfake software is extremely easy to customize. That flexibility could allow bad actors to cover their tracks when investigators try to trace their activity.

“Let’s assume a bad actor is generating lots of different deepfakes and uploads them on different platforms to different users,” says Hassner. “If this is a new AI model nobody’s seen before, then there’s very little that we could have said about it in the past. Now, we’re able to say, ‘Look, the picture that was uploaded here, the picture that was uploaded there, all of them came from the same [AI] model [or group of people].’ And if we were able to seize the laptop or computer [used to generate the content], we will be able to say definitively, ‘This is the culprit.’” And that’s a game changer.

Hassner compares the work to forensic techniques used to identify which model of camera was used to take a picture by looking for patterns in the resulting image, or which gun fired a particular shot based on the patterns left by its unique firing pin.

“Not everybody can create their own camera or gun, though,” he says. “Whereas anyone with a reasonable amount of experience and standard computer can cook their own [AI] model that generates deepfakes.”

Not only can the resulting algorithm fingerprint the traits of a generative model, but it can also identify which known model created an image and whether an image is a deepfake in the first place.

“On standard benchmarks, we get state-of-the-art results,” says Hassner.

But it’s important to note that even these state-of-the-art results are far from reliable. When Facebook held a deepfake detection competition last year, the winning algorithm was only able to detect AI-manipulated videos 65.18 percent of the time. Researchers involved said that spotting deepfakes using algorithms is still very much an “unsolved problem.”

Part of the reason for this is that the field of generative AI is extremely active. New techniques are published every day, and it’s nearly impossible for any filter to keep up.

Those involved in the field are keenly aware of this dynamic, and when asked whether publishing this new fingerprinting algorithm will lead to deepfakes designed to evade it, Hassner agrees.

“I would expect so,” he says. “This is a cat and mouse game, and it continues to be a cat and mouse game.”

About the author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
