WHY THIS MATTERS IN BRIEF
Deepfakes are fun, but what happens when you can no longer believe or trust the content you see online? Nothing good …
In a crop of viral videos featuring Tom Cruise, it’s not the actor’s magic trick nor his joke-telling that’s deceptive — but the fact that it’s not actually Tom Cruise at all. The videos, uploaded to TikTok in recent weeks by the account @deeptomcruise, have raised new fears over the proliferation of believable deepfakes — the nickname for media generated by Artificial Intelligence (AI) technology showing phony events that often seem realistic enough to dupe an audience. And Tom joins a growing list of celebrities, from Elon Musk and Donald Trump to Barack Obama and Arnold Schwarzenegger, who are literally not the stars of their own shows … That said, a whole industry is now springing up to help detect deepfakery …
Hany Farid, a professor at the University of California, Berkeley, commented that the Cruise videos demonstrate a step up in the technology’s evolving sophistication.
“This is clearly a new category of deepfake that we have not seen before,” said Farid, who researches digital forensics and misinformation.
Deepfakes have been around for years, but, Farid says, the technology has been steadily advancing.
“Every three to four months a video hits TikTok, YouTube, whatever, and it’s just — wow, this is much, much better than before,” he said.
To the trained eye, Farid says, a distortion of Cruise’s pupils in the videos was a red flag for spotting the fakery. The clues were much easier to spot in a glitchier 2018 deepfake video of an uncanny Barack Obama, later unmasked as comedian Jordan Peele impersonating the former president. The following year, a video of Nancy Pelosi, doctored to make the House Speaker sound intoxicated by slowing down her speech, made for a more believable fake — even though, technically, it was a simple edit rather than a true deepfake.
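The pupil tell Farid mentions is something researchers have tried to quantify: real pupils are close to circular, while synthesised faces often have irregular ones. As a rough, hypothetical sketch of the idea (not any tool Farid uses), one can score a segmented pupil mask by how well it overlaps an equal-area circle centred on its centroid — a score near 1.0 means round, lower means irregular:

```python
import numpy as np

def pupil_roundness(mask: np.ndarray) -> float:
    """Score a boolean pupil mask against the equal-area circle
    centred on its centroid. 1.0 ~ perfectly round."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                 # centroid of the mask
    r = np.sqrt(mask.sum() / np.pi)               # radius of an equal-area circle
    yy, xx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    circle = ((xx - cx) ** 2 + (yy - cy) ** 2) <= r ** 2
    # Intersection-over-union between the mask and the ideal circle.
    return (mask & circle).sum() / (mask | circle).sum()

# Two synthetic "pupils" on a 101x101 grid: one round, one with jagged slits.
yy, xx = np.mgrid[:101, :101]
disc = ((xx - 50) ** 2 + (yy - 50) ** 2) <= 30 ** 2
jagged = disc & ~(((xx - 50) % 7 == 0) & (yy > 50))

print(pupil_roundness(disc), pupil_roundness(jagged))
```

The round mask scores near 1.0, the jagged one lower — which is the kind of statistical anomaly a forensic pipeline can flag, although real detectors are considerably more sophisticated.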
In 2020, the warnings that deepfakes would be leveraged as a dominant disinformation tool during the presidential election cycle went largely unrealized. But cybersecurity experts say that was only because less sophisticated tactics, like lies, crude video edits and memes, have been working just fine as a source of deception. Plus, deepfakes are time-consuming and require some technical prowess – although, as I’ve seen with the Zao app, they’ll soon enough be free and anyone with thumbs will be able to create them.
Chris Ume, a visual effects artist who created the Cruise deepfakes, told The Verge that each video clip was the product of weeks of work. He also relied on the talents of actor Miles Fisher, a Cruise lookalike, to impersonate the movie star before giving Fisher a digital face transplant.
Using open-source deepfake software, existing editing tools and his own visual effects expertise, Ume said, “I make sure you don’t see any of the glitches.”
Still, he told the website, it took a couple of months to train a machine learning algorithm by feeding a trove of Hollywood footage of Cruise through high-end graphics processors.
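The open-source tools Ume refers to are typically built around a shared-encoder, two-decoder autoencoder: one encoder learns a common facial representation from footage of both people, each identity gets its own decoder, and the “face transplant” is just encoding the lookalike’s frame and decoding it with the target’s decoder. A minimal sketch of that architecture — untrained random weights, toy dimensions, and all names purely illustrative rather than taken from any specific tool:

```python
import numpy as np

rng = np.random.default_rng(0)
FACE, LATENT = 64 * 64, 128   # toy sizes: 64x64 grayscale face crops, 128-dim code

# One shared encoder, one decoder per identity (random, i.e. untrained, weights).
W_enc = rng.normal(0, 0.01, (FACE, LATENT))
decoders = {"fisher": rng.normal(0, 0.01, (LATENT, FACE)),
            "cruise": rng.normal(0, 0.01, (LATENT, FACE))}

def encode(face: np.ndarray) -> np.ndarray:
    """Flattened face crop -> shared latent code."""
    return np.tanh(face @ W_enc)

def decode(code: np.ndarray, identity: str) -> np.ndarray:
    """Latent code -> reconstructed face pixels for one identity."""
    return code @ decoders[identity]

def face_swap(face: np.ndarray, target: str) -> np.ndarray:
    """Encode a frame of the source actor, decode with the target's decoder."""
    return decode(encode(face), target)

frame = rng.random(FACE)              # stand-in for one aligned face crop
swapped = face_swap(frame, "cruise")
print(swapped.shape)                  # (4096,)
```

In a real pipeline the months of GPU time Ume describes go into training: each decoder learns to reconstruct its own person from the shared code, so that feeding it the other person’s code produces the swap.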
Ume credits Fisher for nailing Cruise’s likeness, from the intense eye contact to his signature laugh. “He’s a really talented actor,” Ume told The Verge. “I just do the visual stuff.”
The artist, who told CNET that his videos are strictly a creative pursuit, also wanted to bring awareness to the advancement of deepfakes.
Ume said he is less convinced we’ve arrived at an ominous point in which the technology can be readily abused.
“It’s not like you’re sitting at home and you can just click on a button and you can create the same thing we did,” he told the tech publication.
While Ume’s videos have been made with tongue very much in cheek, there are more nefarious cases in which deepfakes have been used, including non-consensual deepfake pornography. A 2019 report from Sensity, a company that tracks visual threats, found that non-consensual deepfake pornography accounted for more than 90% of all deepfake material online.
UC Berkeley’s Farid cautioned that it’s not just the content that poses risks, but how quickly and easily misinformation can travel online through social media. Deepfakes, he said, are “now throwing jet fuel onto that already burning fire.”
He offered a hypothetical example: Say he created a deepfake video to show Amazon executive Jeff Bezos saying that the company’s profits have taken a hit. If that video goes viral, he posited, “How long does it take me to move the market to the tune of billions of dollars?”
What’s more, Farid said, AI tools that were once in the hands of academics are now widely available through apps and open-source code — as demonstrated with Ume’s videos.
“Now you have the perfect storm,” he said. “I can create this content easily, inexpensively and quickly, I can deliver it en masse to the world, and I have a very willing and eager public that will amplify that for me.” Which is exactly what I’ve been saying since 2015 in my keynotes …