Facebook develops a way to reverse engineer deepfakes and identify their source

WHY THIS MATTERS IN BRIEF

Disinformation and misinformation are becoming a pandemic of their own, and Facebook’s latest innovation can identify who’s creating them.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, connect, watch a keynote, or browse my blog.

Deepfakes aren’t a big problem on Facebook right now, but they could be in the future, which is why the company has been funding research into the technology to guard against future threats. Facebook’s latest work is a collaboration with academics from Michigan State University (MSU) who have been finding ways to reverse engineer deepfakes by analyzing Artificial Intelligence (AI) generated images to reveal the telltale characteristics of the machine learning model that created them.

 

The research is interesting because it could very well become the best way for Facebook not only to identify deepfakes but also to pinpoint exactly who made them and what else they’ve been creating – thereby helping the company quickly uncover “nefarious” networks of bad actors who might be trying to do everything from discrediting people to rigging democratic elections.

 

Previous studies have been able to determine which known AI model generated a deepfake, but this work, led by MSU’s Vishal Asnani, goes a step further by identifying the architectural traits of unknown models. These traits, known as hyper-parameters, have to be tuned in each machine learning model like parts in an engine. Collectively, they leave a unique fingerprint on the finished image that can then be used to identify its source.
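
The fingerprint-matching idea is easiest to see in miniature. The sketch below is a rough illustration only, not the MSU/Facebook method: the published work trains a fingerprint-estimation network, whereas here a simple high-pass residual stands in for the fingerprint, and the model names, image sizes, and reference fingerprints are all invented for the example.

```python
# Minimal sketch of fingerprint-based model attribution. This is NOT the
# MSU/Facebook algorithm: a crude high-pass residual stands in for their
# learned fingerprint, and all names below are hypothetical.
import numpy as np

def fingerprint(image: np.ndarray) -> np.ndarray:
    """Stand-in fingerprint: high-frequency residual of a grayscale image.

    Generative models tend to leave subtle artefacts in the high
    frequencies, so subtracting a local average keeps the signal that
    hints at the generator while discarding most of the scene content.
    """
    img = image.astype(np.float64)
    # 3x3 box blur built from shifted copies (no external dependencies).
    padded = np.pad(img, 1, mode="edge")
    blur = sum(
        padded[dy : dy + img.shape[0], dx : dx + img.shape[1]]
        for dy in range(3)
        for dx in range(3)
    ) / 9.0
    residual = img - blur
    return residual / (np.linalg.norm(residual) + 1e-12)

def attribute(image: np.ndarray, known: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the known model whose reference fingerprint correlates best."""
    f = fingerprint(image)
    scores = {name: float(np.sum(f * ref)) for name, ref in known.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Usage: in practice the reference fingerprints would be averaged over many
# images from each known generator; a low best score suggests an unseen model.
rng = np.random.default_rng(0)
refs = {"model_A": fingerprint(rng.random((64, 64))),
        "model_B": fingerprint(rng.random((64, 64)))}
print(attribute(rng.random((64, 64)), refs))
```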

Identifying the traits of unknown models is important, Facebook research lead Tal Hassner told The Verge, because deepfake software is extremely easy to customize, potentially allowing bad actors to cover their tracks when investigators try to trace their activity.

 

“Let’s assume a bad actor is generating lots of different deepfakes and uploads them on different platforms to different users,” says Hassner. “If this is a new AI model nobody’s seen before, then there’s very little that we could have said about it in the past. Now, we’re able to say, ‘Look, the picture that was uploaded here, the picture that was uploaded there, all of them came from the same [AI] model [or group of people].’ And if we were able to seize the laptop or computer [used to generate the content], we will be able to say definitively, ‘This is the culprit.’” And that’s a game changer.

Hassner compares the work to forensic techniques used to identify which model of camera was used to take a picture by looking for patterns in the resulting image, or which gun fired a particular shot based on the patterns left by its unique firing pin.

“Not everybody can create their own camera or gun, though,” he says. “Whereas anyone with a reasonable amount of experience and standard computer can cook their own [AI] model that generates deepfakes.”

Not only can the resulting algorithm fingerprint the traits of a generative model, but it can also identify which known model created an image and whether an image is a deepfake in the first place.
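
Conceptually, that means one shared image encoder feeding three outputs: whether the image is fake, which known model made it, and what the model’s hyperparameter traits look like. The sketch below shows that general shape only; the layer sizes, head dimensions, and names are assumptions made for the example, not the published architecture.

```python
# Hedged sketch of the three linked tasks the article describes - deepfake
# detection, known-model attribution, and hyperparameter estimation - as
# three heads on a shared encoder. All sizes and names are invented.
import torch
import torch.nn as nn

class AttributionNet(nn.Module):
    def __init__(self, n_known_models: int = 10, n_hyperparams: int = 15):
        super().__init__()
        self.encoder = nn.Sequential(  # shared image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.is_fake = nn.Linear(64, 1)                   # real vs. generated
        self.which_model = nn.Linear(64, n_known_models)  # known-model ID
        self.hyperparams = nn.Linear(64, n_hyperparams)   # architecture traits

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        return self.is_fake(h), self.which_model(h), self.hyperparams(h)

net = AttributionNet()
fake_logit, model_logits, traits = net(torch.randn(2, 3, 128, 128))
print(fake_logit.shape, model_logits.shape, traits.shape)  # (2,1) (2,10) (2,15)
```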

 

“On standard benchmarks, we get state-of-the-art results,” says Hassner.

But it’s important to note that even these state-of-the-art results are far from reliable. When Facebook held a deepfake detection competition last year, the winning algorithm was only able to detect AI-manipulated videos 65.18 percent of the time. Researchers involved said that spotting deepfakes using algorithms is still very much an “unsolved problem.”

Part of the reason for this is that the field of generative AI is extremely active. New techniques are published every day, and it’s nearly impossible for any filter to keep up.

Those involved in the field are keenly aware of this dynamic, and when asked whether publishing this new fingerprinting algorithm will lead to deepfakes that can evade detection by these methods, Hassner agrees.

“I would expect so,” he says. “This is a cat and mouse game, and it continues to be a cat and mouse game.”
