
WHY THIS MATTERS IN BRIEF

Some deepfakes are made for fun; others are designed to spread lies and disinformation and to weaponise society against itself, so new tools are needed to detect them.

 


Deepfakes, which can make people appear to say things on video that they never did, or perhaps never would, have so far been used to make ads, spread misinformation, discredit people, and even to try to change the course of a war as a modern-day form of PsyOps.

 

While there have been several advances in the development of new tools that detect deepfakes, tech firms including Adobe, Arm, the BBC, Intel, Microsoft, Twitter, Sony, and Nikon have now formed an alliance to create an open standard to fight them.

 

The Future of Deepfakes and Synthetic Media, by keynote speaker Matthew Griffin

 

About a month ago, at the height of the conflict between Russia and Ukraine, a heavily manipulated video depicting Ukrainian President Volodymyr Zelenskyy circulated on social media and was even uploaded to a Ukrainian news website by hackers before being debunked and removed. The video is among a variety of deepfakes that are proliferating online at a rapid clip while a handful of technology firms try to fight the trend using blockchain technology.

 

Deepfakes are nothing new; the deceptive technology has been around for a number of years now. But especially in this post-pandemic era, we are at a collective inflection point, and there are now a number of free deepfake apps just a Google search away. Even Ukraine's military intelligence agency foresaw such incidents when it released a video last month about how state-sponsored deepfakes could be used to sow panic and confusion.

Aware of the potentially grave consequences, an alliance spanning software, chip, camera, and social media giants aims to create standards that ensure the authenticity of images and videos shared online. Known as the Coalition for Content Provenance and Authenticity (C2PA), the group's ultimate aim is to fight deepfakes using blockchain technology, with Japanese camera makers Sony and Nikon joining to develop an open standard, intended to work with any software, that shows evidence of tampering, as per Nikkei.
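To make the general idea concrete, here is a minimal sketch of how a provenance claim might be cryptographically bound to an image so that later edits become detectable. It is an illustration only, not the actual C2PA manifest format or API: real C2PA manifests are structured containers signed with certificates, whereas this example uses only Python's standard library, an HMAC with a made-up demo key standing in for the signature, and hypothetical field names.

```python
# Illustrative sketch only -- not the real C2PA manifest format or API.
# An HMAC with a demo key stands in for certificate-based signing so the
# example runs on the Python standard library alone.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # stand-in for a creator's private signing key

def create_manifest(asset_bytes, creator, actions):
    """Bind a provenance claim (who made it, what was done) to the asset's hash."""
    claim = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "actions": actions,  # e.g. ["captured", "cropped"]
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

# A camera or editing tool would embed something like this alongside the file.
image_bytes = b"...raw image data..."
manifest = create_manifest(image_bytes, creator="Example Newsroom", actions=["captured"])
print(json.dumps(manifest, indent=2))
```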

 

Andy Parsons, senior director of Adobe's Content Authenticity Initiative, told Nikkei that we'll "see many of these [features] emerging in the market this year. And I think in the next two years, we will see many sorts of end-to-end [deepfake detection] ecosystems."

C2PA unifies the efforts of the Adobe-led Content Authenticity Initiative (CAI), which focuses on systems that provide context and history for digital media, and Project Origin, a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem. From here, the coalition also plans to reach out to more social media platforms, such as the video-sharing site YouTube, to bring more of them on board with the standard.

In a statement from January this year, C2PA said the coalition empowers content creators and editors worldwide to create tamper-evident media by enabling them to selectively disclose information about who created or changed digital content and how it was altered.
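Continuing the illustrative sketch above, and with the same caveats (hypothetical field names, an HMAC standing in for real certificate-based signing), tamper evidence on the verification side amounts to recomputing the asset's hash, checking the signed claim, and flagging any mismatch.

```python
# Verification side of the illustrative sketch above; the same caveats
# apply -- this is a conceptual example, not the C2PA specification.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # in a real system: the signer's certificate chain

def verify(asset_bytes, manifest):
    """Return a human-readable verdict on the asset's provenance claim."""
    claim = manifest["claim"]
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return "Claim altered: signature does not match."
    if hashlib.sha256(asset_bytes).hexdigest() != claim["asset_sha256"]:
        return "Content changed after the claim was signed (tamper evident)."
    return "Intact: created by %s, recorded edits: %s" % (claim["creator"], claim["actions"])
```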

 

“The C2PA’s work is the result of industry-wide collaborations focused on digital media transparency that will accelerate progress toward global adoption of content provenance,” it said.

Parsons reckons that broad adoption across all of these platforms is key to the success of digital provenance, "so that users can be assured that when media is uploaded with content authenticity, that it is maintained throughout the entire chain of sharing [and] publishing creation, back and forth."

“We’ve only been at this for a couple of years so it’s relatively early in the life cycle. And we have a long way further to go to make sure that all platforms can adopt this,” Parsons concluded.

About the author

Matthew Griffin

Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 and 2070, and is an award-winning futurist and author of the "Codex of the Future" series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew has recently mentored Lunar XPrize teams, re-envisioned global education and training with the G20, and helped the world's largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
