Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Can you believe what you see online? Increasingly, no, and this latest high-definition “DeepFake” tech is going to make it even harder.
How would you like to swap faces with people? Perhaps, like this guy, you’d like to be Elon Musk in your next company Zoom call? Until now, though, face-swapping technologies, also known as DeepFakes, have been fairly easy to spot because the finished results are often low resolution or just plain look funky. But a new paper published by Disney Research in partnership with ETH Zurich describes a fully automated, neural network-based method for swapping faces in photos and videos at high resolution.
Not only could that make it suited for use in film and TV, which is Disney’s main reason for playing around with the tech and where high-resolution results are key to ensuring the final product reliably convinces viewers, but it could also finally open the door to DeepFakes convincing enough to fool anyone and everyone, which would up the ante on what you feel you can trust online. Imagine, for example, the implications of having a Zoom call with a criminal who looks and sounds like your boss and tells you to transfer wads of cash into a Balkan bank account.
See the tech in action
In this case, though, the researchers specifically intend the tech for replacing an existing actor’s performance with a substitute actor’s face, for instance when de-aging someone or increasing their age, as happened to Will Smith in Gemini Man, or potentially when portraying an actor who has passed away, as happened with Princess Leia in the last Star Wars movie.
They also suggest it could be used for replacing the faces of stunt doubles, who ironically themselves are being replaced with new technology, in cases where the conditions of a scene call for them to be used.
The new method differs from other approaches in a number of ways. One is that any face in the set can be swapped with any recorded performance, making it relatively easy to re-image actors on demand. Another is that it adjusts contrast and lighting in a compositing step to ensure the swapped actor looks like they were actually present under the same conditions as the scene.
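The paper doesn’t spell out its compositing step here, but the general idea of matching a swapped face to the lighting and contrast of a scene can be illustrated with a simple per-channel colour-statistics transfer. This is a minimal sketch of that general technique, not Disney’s actual method; the function name and the [0, 1] float-RGB convention are assumptions for the example.

```python
import numpy as np

def match_color_stats(swapped_face: np.ndarray, scene_region: np.ndarray) -> np.ndarray:
    """Shift the swapped face's per-channel mean and standard deviation to
    match those of the surrounding scene region.

    A crude stand-in for the lighting/contrast compositing step described in
    the paper -- NOT Disney's actual algorithm. Both inputs are float RGB
    arrays with values in [0, 1].
    """
    out = swapped_face.astype(np.float64).copy()
    for c in range(out.shape[-1]):  # loop over colour channels
        src_mean, src_std = out[..., c].mean(), out[..., c].std()
        tgt_mean, tgt_std = scene_region[..., c].mean(), scene_region[..., c].std()
        if src_std > 1e-8:  # avoid dividing by zero on flat regions
            # Re-centre and re-scale the channel to the scene's statistics.
            out[..., c] = (out[..., c] - src_mean) * (tgt_std / src_std) + tgt_mean
    return np.clip(out, 0.0, 1.0)  # keep values in a valid display range
```

After this step, a brightly lit face pasted into a dim scene inherits the scene’s darker tone and lower contrast, which is the intuition behind making a swapped face look like it was filmed under the same conditions as the rest of the shot.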
You can check out the results for yourself in the video, and as the researchers point out, the effect is actually much better in moving video than in still images. There’s still a hint of the “uncanny valley” effect going on here, but the researchers acknowledge that too, calling their method “a major step toward photo-realistic face swapping that can successfully bridge the uncanny valley” in their paper.
Basically it’s a lot less nightmare fuel than other attempts I’ve seen, especially when you’ve seen the side-by-side comparisons with other techniques in the sample video. And, most notably, it works at much higher resolution, which is key for actual entertainment industry use.
The examples presented are a very small sample, so it remains to be seen how broadly the technique can be applied. The subjects used appear to be primarily white, for instance. There’s also the perennial question of the ethical implications of any use of face-swapping technology, especially in video, since it could be used to fabricate credible video or photographic “evidence” of something that didn’t actually happen, which is a whole new can of worms.
Given, however, that the technology is now in development in multiple quarters, it’s essentially long past time for debate about the ethics of its development and exploration. Instead, it’s welcome that organizations like Disney Research are following the academic path and sharing the results of their work, so that others concerned about its potential malicious use can determine ways to flag, identify, and protect us all against any bad actors when they too get their hands on this tech, which won’t be too far away.