
WHY THIS MATTERS IN BRIEF

Privacy is becoming a battleground, and just as we can use AI to identify people, we can also use it to anonymise them.

 


It might sound counterintuitive, and perhaps slightly self-destructive, but researchers at the company at the center of so many of today’s global privacy scandals, namely Facebook, say they’ve used research from MIT, which I first talked about a few months ago, to create a new Artificial Intelligence (AI) machine learning system that “de-identifies individuals in video.” In short, they’ve created an AI that anonymises you, protects your privacy, and makes you invisible online, so that the companies, like Facebook, that use facial recognition to identify you can no longer do so. And while startups like D-ID and a number of other companies have already built so-called de-identification technology for still images, this is the first time researchers have created a system that works on video, and in initial tests the team’s new method was able to thwart every state-of-the-art facial recognition system it encountered.

 

RELATED
Smartphone app uses selfies to detect Pancreatic Cancer

 

As an added bonus, the system doesn’t need to be retrained every time it sees a new video in order to be effective, and it works by mapping a slightly distorted image onto a person’s face to make it difficult for facial recognition technology to identify them.
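To make that “no retraining per video” point a little more concrete, here is a rough sketch of what such a feed-forward de-identification pass over a video could look like in practice. The deidentifier model and the OpenCV face detector below are stand-ins of my own, not Facebook’s actual pipeline; the point is simply that the same pre-trained model is applied frame by frame, with nothing learned per video.

```python
# Minimal sketch of a feed-forward de-identification pass over a video.
# `deidentifier` and the Haar cascade detector are hypothetical stand-ins,
# not Facebook's code: the same pre-trained model is simply run on every
# frame, with no per-video retraining.

import cv2  # OpenCV for video I/O and face detection

def anonymise_video(in_path: str, out_path: str, deidentifier) -> None:
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.3, 5):
            crop = frame[y:y + fh, x:x + fw]
            # The pre-trained model subtly distorts the crop so that face
            # recognisers no longer match it, while it still looks natural.
            # `deidentifier` must return an image of the same shape as `crop`.
            frame[y:y + fh, x:x + fw] = deidentifier(crop)
        writer.write(frame)

    cap.release()
    writer.release()
```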

“Face recognition can lead to loss of privacy and ‘face replacement technology’ [such as DeepFakes] may be misused to create misleading videos,” a paper explaining the approach reads. “Recent world events concerning the advances in, and abuse of facial recognition technology invoke the need to understand methods that successfully deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents quality that far surpasses the literature methods.”

Facebook’s system pairs an adversarial AI autoencoder with a classifier network, and as part of its training the researchers tried to fool a mixture of facial recognition networks, said Facebook AI Research engineer and Tel Aviv University professor Lior Wolf.
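Facebook hasn’t released code for this, but the adversarial idea Wolf describes can be sketched roughly as follows: the autoencoder is trained so that a bank of frozen, pre-trained face recognition networks can no longer match its output to the original identity, while a reconstruction term keeps the output looking like the input. The function below is purely illustrative, and the loss weights and model handles are my own assumptions rather than the paper’s.

```python
# Rough sketch (PyTorch) of an adversarial de-identification objective:
# push the output's identity embeddings away from the source identity for a
# mixture of pre-trained face recognisers, while staying visually close to
# the input. Loss weights and model names are illustrative assumptions.

import torch
import torch.nn.functional as F

def deid_loss(autoencoder, recognisers, faces, w_id=1.0, w_rec=10.0):
    """faces: batch of aligned face crops, shape (B, 3, H, W)."""
    outputs = autoencoder(faces)                      # de-identified faces
    rec_loss = F.l1_loss(outputs, faces)              # stay visually similar

    id_loss = 0.0
    for net in recognisers:                           # frozen, pre-trained nets
        with torch.no_grad():
            target = net(faces)                       # source identity embedding
        pred = net(outputs)
        # Minimise cosine similarity so the recognisers stop matching the identity.
        id_loss = id_loss + F.cosine_similarity(pred, target, dim=1).mean()

    return w_rec * rec_loss + w_id * id_loss / len(recognisers)
```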

 

RELATED
Blackhat hackers discover a way to turn satellites into "microwave weapons"

 

“So the autoencoder tries to make life harder for the facial recognition networks, and it’s actually a general technique that can also be used if you want to create a system that masks any other type of biometric information, for example, someone’s voice or online behaviour, or any other type of identifiable information that you want to remove,” he added.

Like face-swapping deepfake software, the AI uses an encoder-decoder architecture to generate both a mask and an image. During training, the person’s face is distorted and then fed into the network, which generates distorted and undistorted versions of the face and produces an output that can be overlaid onto the video.
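As an illustration of that mask-plus-image idea, a decoder in this kind of design might emit both a blending mask and a synthesised face, which are then composited back onto the input crop. The layer sizes and structure below are arbitrary and assumed; this is a sketch of the general pattern, not the paper’s actual architecture.

```python
# Minimal sketch of an encoder-decoder that emits both a blending mask and a
# generated face, then composites them onto the input crop. Layer sizes are
# arbitrary; this illustrates the mask-plus-image idea only.

import torch
import torch.nn as nn

class MaskedFaceDecoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 4, 4, stride=2, padding=1),  # 3 image + 1 mask channels
        )

    def forward(self, x):
        # x: face crop in [-1, 1], shape (B, 3, H, W)
        out = self.decoder(self.encoder(x))
        generated = torch.tanh(out[:, :3])      # synthesised (distorted) face
        mask = torch.sigmoid(out[:, 3:4])       # where to replace the original
        # Composite: replaced regions come from the generator, the rest from the input.
        return mask * generated + (1 - mask) * x
```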

At the moment though, as grand as all this might sound, Facebook has no plans to roll the technology out, said a company spokesperson, but such methods could enable public speech that remains recognisable to people while at the same time helping those people remain “anonymous.”

 

RELATED
World's first GPS Spoofing attack puts 20 ships 32km inland at an airport

 

Anonymised faces in videos could also be used for the privacy-conscious training of AI systems. In May, for example, Google used Mannequin Challenge videos to train AI systems to improve video depth perception, and elsewhere UC Berkeley researchers have been training AI agents to dance like people or do backflips using YouTube videos as a training data set.

Facebook’s desire to be a leader in this area, though, might also stem from controversy about its platforms being used to spread misinformation, as well as from its own applications of facial recognition technology. But whatever its motivation, it’s an exciting experiment that goes to show that when it comes to losing our privacy in the future, things might not all be one-sided.

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award winning futurist, and author of “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
