Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
In the future, the one thing none of us will have, it seems, is privacy – online or offline – but researchers are fighting back in new ways.
What Artificial Intelligence (AI) takes away with one hand – namely privacy, even if you’re wearing a mask, with China’s dystopian-feeling Social Credit Scoring (SCS) system a prime example – it gives back with the other. And ironically, what it gives back is also privacy. Confused? Well, one of the many ways that companies strip away our privacy is by using facial recognition on images and video, which they then use to track us, monitor us, and profile us all. Now, however, the same AI technology that’s behind DeepFakes could soon be used to help anonymise us online and confuse these facial recognition systems.
The algorithm works by extracting information about the person’s facial expression and pose by finding the positions of the eyes, ears, shoulders, and nose. It then uses a GAN (Generative Adversarial Network), trained on a database of 1.5 million face images, to create an entirely new face with the same expression, and blends it into the original photo, retaining the same background – and voila, that’s enough to make you disappear online. Poof! Awesome. Take that Facebook, who, by the way, just released their own version of the tech.
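To give a feel for the final blending step described above, here is a heavily simplified sketch (pure NumPy, illustrative only – the researchers’ actual pipeline is far more sophisticated, and the function and variable names here are our own assumptions): a generated face is composited back into the original photo using a mask, so everything outside the face region keeps the original background.

```python
import numpy as np

def blend_generated_face(original, generated_face, mask):
    """Composite a GAN-generated face back into the original photo.

    original:       H x W x 3 image array (the source photo)
    generated_face: H x W x 3 image array (new face, same pose/expression)
    mask:           H x W array in [0, 1], 1 inside the face region

    Outside the mask the original background is kept untouched,
    so only the identifying face region is replaced.
    """
    mask = mask[..., np.newaxis]  # broadcast the mask over the colour channels
    blended = mask * generated_face + (1.0 - mask) * original
    return blended.astype(original.dtype)

# Toy example: a 4x4 "photo" where the centre 2x2 region is the face.
original = np.full((4, 4, 3), 100, dtype=np.uint8)   # background pixels
generated = np.full((4, 4, 3), 200, dtype=np.uint8)  # new synthetic face
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0  # the region to replace

result = blend_generated_face(original, generated, mask)
```

Because the subject’s original pixels inside the mask never appear in the output, there is nothing of the real face left to recognise.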
Developed by researchers at the Norwegian University of Science and Technology, the technique is still highly experimental, and while it works on many types of photos and faces it still trips up when the face is partially occluded or turned at particular angles. The technique is also very glitchy for video.
This isn’t the first AI-based face anonymisation technique, though it is close. A paper published in February by researchers at the University at Albany used deep learning to transplant key elements of a subject’s facial expressions onto someone else. That method, however, required a consenting donor to offer his or her face as the new canvas for the expressions.
And as for why any of this matters: anonymisation is used to protect the identity of someone, such as a whistleblower, in photos and footage. But traditional techniques, such as blurring and pixelation, run the risk either of being incomplete, so that the person’s identity can be recovered anyway, or of completely stripping away the person’s personality, for example by removing their facial expressions. Because GANs don’t use the subject’s original face at all, they eliminate the former risk. They can also re-create facial expressions in high resolution, thus offering a solution to the latter.
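To see why pixelation can be incomplete, here is a minimal sketch of the traditional technique (again pure NumPy, and purely illustrative): each tile of pixels is replaced by its average. Fine detail disappears, but a coarse low-resolution “fingerprint” of the face survives in the tile averages – which is exactly the residue that recognition systems can sometimes exploit.

```python
import numpy as np

def pixelate(image, block=8):
    """Traditional anonymisation: replace each block x block tile
    with its mean value. Fine detail is destroyed, but the tile
    averages still encode a coarse version of the original face."""
    h, w = image.shape[:2]
    out = image.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten the tile to its average
    return out.astype(image.dtype)

# A 16x16 grayscale "face" with a bright diagonal stripe.
face = np.zeros((16, 16), dtype=np.uint8)
np.fill_diagonal(face, 255)

pixelated = pixelate(face, block=8)
```

Note that the diagonal stripe still shows up as brighter tiles along the diagonal of the pixelated result – the information is degraded, not erased, which is the weakness GAN-based anonymisation avoids entirely.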
The technique also demonstrates a new value proposition for GANs, which have developed a bad reputation for lowering the barrier to producing persuasive misinformation. And while this study was limited to visual media, the team behind the new technique think it can, by extension, also be used to help anonymise voices – and in a world where it seems every company wants to strip away our privacy, this new development gives us all a way to fight back. If we want to…