Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and the author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Being able to believe what you see, hear, and read online will soon become almost impossible.
For years now I’ve been talking about the upsides and downsides of Artificial Intelligence (AI) platforms that can create all kinds of synthetic and fake digital content, from art, books, games, music, and videos, to DeepFakes and “fake” virtual influencers who are raking in millions of dollars a year. Inevitably, when a new technology emerges it ends up both being used for good, to help people, and being weaponised, to con people or for other malicious purposes.
A little while ago I discussed the emergence of several new tools that create high resolution images of everything from fake celebrities and faces to fake cats and dogs. While that technology was initially used to create an image bank of fake faces that people could use royalty free for marketing purposes, it has now been weaponised: a network of fictional journalists, analysts, and political consultants has been used to place opinion pieces favorable to certain Gulf states in a range of media outlets, an investigation from The Daily Beast has revealed. At least 19 fake personas were used to author op-eds published in dozens of mainly conservative publications, with AI-generated headshots of the would-be authors used to trick editors into believing the writers were real people.
Spot the fake. Hint: They’re all fake. Courtesy: Cyabra.
It’s not the first time AI has been used like this, though it’s still unusual to see machine learning deployed for online misinformation in the wild; then again, this is just the opening of yet another front in the so-called Information War. Last year, a report from The Associated Press found a fake profile on LinkedIn, part of a network of likely spies trying to make connections with professional targets, that also used an AI-generated headshot.
AI-generated profile pictures created by sites like ThisPersonDoesNotExist.com have some unique advantages when it comes to building fake online personas. The most important is that each image is uniquely generated, meaning it can’t be traced back to a source picture using a reverse image search.
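To see why a uniquely generated image defeats reverse image search, here is a minimal Python sketch of the idea using a simple "average hash": downscale an image, threshold each pixel against the mean, and compare the resulting bit strings by Hamming distance. The pixel data below is made up for illustration, and real search engines use far more robust perceptual features, but the principle is the same: a recompressed copy of an indexed photo hashes close to the original, while a freshly generated face matches nothing in the index because it has no source photo.

```python
def average_hash(pixels):
    """Hash a flattened greyscale image (values 0-255) into a bit tuple:
    each pixel becomes 1 if it is above the image's mean, else 0."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Count differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 8x8 "images" (hypothetical data, not real photos).
source = [10, 200, 30, 220] * 16      # indexed photo of a real person
near_copy = [12, 198, 33, 215] * 16   # recompressed copy of that photo
generated = list(range(64))           # unrelated, freshly generated face

index = {"source_photo": average_hash(source)}

THRESHOLD = 5  # small Hamming distance => likely the same picture
for name, img in [("near_copy", near_copy), ("generated", generated)]:
    d = hamming(average_hash(img), index["source_photo"])
    print(name, "distance:", d, "-> match" if d <= THRESHOLD else "-> no match")
```

The near-copy lands within the match threshold despite its altered pixel values, which is why stolen avatars can be traced, while the generated image sits far from every indexed hash.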
However, the current generation of AI headshots isn’t flawless. These images share a number of common tells, including odd-looking teeth, asymmetrical features, hair that blurs into nothing, strangely melted earlobes, and indistinct background imagery. Over time, though, those kinks will be ironed out.
Some of these features can be seen in a number of headshots used by fake writers uncovered by The Daily Beast’s investigation. Others, though, just use stolen avatars. The personas share a number of attributes, which suggest they’re part of a single, coordinated campaign:
“The personas identified by The Daily Beast were generally contributors to two linked sites, The Arab Eye and Persia Now; had Twitter accounts created in March or April 2020; presented themselves as political consultants and freelance journalists mostly based in European capitals; lied about their academic or professional credentials in phony LinkedIn accounts; used fake or stolen avatars manipulated to defeat reverse image searches; and linked to or amplified each others’ work.”
Although it’s not clear who created the network, op-eds published by the fake writers do share certain editorial values. They argue for more sanctions against Iran, praise certain Gulf states like the United Arab Emirates, and criticise Qatar, which is currently the subject of a diplomatic and economic embargo from the UAE and other states in the Middle East because of the country’s alleged support for terrorism.
The network was used to create op-eds published in US outlets like the Washington Examiner and the American Thinker, as well as Middle Eastern papers like The Jerusalem Post and Al Arabiya, and even in the English-language Hong Kong-based publication the South China Morning Post. As a result of The Daily Beast’s investigation, Twitter has suspended 15 accounts belonging to the fake writers.