
Researchers' clever new filter protects your likeness from being DeepFaked


WHY THIS MATTERS IN BRIEF

DeepFakes will be with us for the rest of eternity, and they’ll get easier to create, so in the future you may need ways to protect yourself.

 

Interested in the Exponential Future? Join our XPotential Community, connect, watch a keynote, or browse my blog.

Send me a photo of yourself and I’ll turn you into a DeepFake. In fact, don’t bother; I’ll grab one from your Facebook page and turn your profile photo into one, like these researchers from Samsung did a while ago, and then I’ll Zoom bomb you as Elon Musk.

 


 

In today’s digital world, where deepfakes, Fake News, and even fake people, in the form of Digital Humans, seem to be everywhere, it’s no wonder that people increasingly struggle to separate fact from fiction.

Thanks to Artificial Intelligence (AI) it has become increasingly easy to convincingly manipulate images and videos of people by doctoring their speech, body movements, and appearance. So, in response to all this fakery, a team of researchers has created a new algorithm that works much like a Snapchat filter: it generates what’s known as an adversarial attack against facial manipulation systems, such as DeepFake software, corrupting their output and rendering them useless.

 


 

In the latest breakthrough against the onslaught of deepfakes, the researchers’ algorithm lets users protect their media before uploading it to the internet by overlaying an image or video with an imperceptible filter. When a faker then tries to alter the protected image or video, the manipulation either fails, leaving the media unchanged, or produces output so distorted that it is unrecognizable and unusable as a deepfake.
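
To make that concrete, here is a minimal sketch of the idea in PyTorch, assuming a face-manipulation network we can backpropagate through; the function names and hyperparameters are illustrative assumptions, not the authors' released code. The attack searches for a tiny perturbation, the "filter", that maximizes how badly the manipulation network's output is distorted.

    import torch
    import torch.nn.functional as F

    def protect(generator, image, epsilon=0.05, step_size=0.01, steps=10):
        """Overlay an imperceptible perturbation that maximizes the
        distortion of the (hypothetical) deepfake generator's output."""
        clean_output = generator(image).detach()
        eta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            loss = F.mse_loss(generator(image + eta), clean_output)
            loss.backward()
            with torch.no_grad():
                eta += step_size * eta.grad.sign()  # ascend: push the faked output away from normal
                eta.clamp_(-epsilon, epsilon)       # keep the filter imperceptible to humans
            eta.grad.zero_()
        return (image + eta).detach()               # the protected image, safe to upload

The clamp on the perturbation is what keeps the overlay invisible to the eye while still derailing the manipulation network.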

The researchers behind the new filter have made their open-source code publicly available, and while their paper has not yet been peer-reviewed, you can read it on arXiv.

Nataniel Ruiz, a doctoral student at Boston University and co-author of the paper, says the idea for the project came to him after he became interested in the new techniques people are using to create deepfakes, and that he hit on the idea of disrupting them after talking with his doctoral advisor, Stan Sclaroff.

 


 

Deepfakes first rose to prominence years ago, when it took a team of AI experts months to create even rather bad-looking ones, but recent advances in computing power and tools mean almost anyone can now create their own versions with little or no skill.

Last year, for instance, the iPhone app Zao entered the zeitgeist. Created by a team in China, it let anyone scan their face with their smartphone and insert their likeness into famous movies – no skill required.

The relative ease with which internet users can now create deepfakes also means it will become much more difficult to separate fact from fiction online, particularly in arenas like politics, where they’re increasingly being deployed with military-like precision. Detecting deepfake images, audio, or video is one approach to solving this trust problem, although it may prove harder than expected. Facebook is currently holding a competition to find researchers who can effectively detect deepfakes, and researchers around the world are pursuing even more sophisticated techniques for disrupting them, such as introducing watermarks into digital content.

 


 

“We covered what we call ‘white-box’ attacks in our work, where the network and its parameters are known to the disruptor,” says co-author Sarah Adel Bargal, a research assistant professor of computer science.

“A very important next step is to develop methods for ‘black-box’ attacks that can disrupt deepfake networks [in ways] inaccessible to the disruptor… [and] we are currently working on making this a reality.”
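
In practice the distinction matters because a white-box disruptor can backpropagate through the deepfake network for exact gradients, while a black-box disruptor can only query it. As a purely illustrative sketch of one standard query-based workaround, and not the team's forthcoming method, the gradient can be estimated from forward queries alone, in the spirit of natural evolution strategies:

    import torch
    import torch.nn.functional as F

    def estimated_gradient(generator, image, clean_output, sigma=1e-3, samples=20):
        """Approximate the disruption loss gradient using only forward
        queries to the generator (no access to its parameters)."""
        grad = torch.zeros_like(image)
        for _ in range(samples):
            noise = torch.randn_like(image)
            with torch.no_grad():
                loss = F.mse_loss(generator(image + sigma * noise), clean_output)
            grad += loss.item() * noise  # weight each random probe by the loss it caused
        return grad / (samples * sigma)

The same sign-and-clamp ascent step from the earlier sketch can then be driven by this estimate instead of true gradients.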

Source: Boston University
