Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
As AIs get better at creating fake, or synthetic, content, attacks like this one will be the start of a worrying new trend.
After Google’s Duplex demonstration last year it’s clear that Artificial Intelligence’s ability to re-create realistic human voices, including those of celebrities like Bill Gates, has now firmly blown past the uncanny valley, the point at which people can no longer tell synthetic content, which includes everything from synthetic images and video through to synthetic audio, from the real deal. And now, in what experts think could be a world first, criminals have used AI-based software to impersonate a CEO’s voice and demand the fraudulent transfer of €220,000 ($243,000) from a UK energy firm – and the transfer went ahead.
In the attack, the CEO of the UK-based firm thought he was speaking to his boss, the CEO of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. The caller said the request was urgent, instructing him to pay it within an hour, according to the company’s insurer, Euler Hermes Group, which declined to name the victim companies.
Criminals are increasingly finding new ways to weaponise AI for malicious purposes, and whoever was behind this attack appears to have used AI-based software to successfully mimic the German executive’s voice over the phone.
Apparently the UK CEO “recognized his boss’ slight German accent and the melody of his voice on the phone,” said Rüdiger Kirsch, a fraud expert at Euler Hermes, a subsidiary of Allianz.
Several officials involved in the case said the voice-spoofing attack is the first cybercrime they’ve heard of that used AI in this way, and Euler Hermes, which covered the entire claim, added that it hasn’t dealt with any other cybercrimes involving AI – yet. But as so-called creative machines get better at generating synthetic content of all forms, it’s only a matter of time before this type of attack becomes commonplace and a real problem.
“Scams using AI are a new challenge for companies,” said Kirsch. “Traditional cybersecurity tools designed to keep hackers off corporate networks can’t yet spot spoofed voices, even though several cybersecurity companies have developed products to detect so-called deepfakes.”
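The detection products Kirsch alludes to are proprietary, but the general idea behind many of them is to look for statistical artefacts that speech-synthesis pipelines leave in the audio signal. The sketch below is a deliberately simplified, hypothetical heuristic, not any vendor’s actual method: it measures the spectral flatness of each short frame and flags audio whose frames vary suspiciously little, since some vocoders produce more uniform spectra than live speech does. The 0.5 threshold is illustrative and not tuned on real data.

```python
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Values near 1.0 indicate noise-like spectra; near 0.0, tonal ones."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # avoid log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def looks_synthetic(audio: np.ndarray, sr: int = 16000,
                    frame_ms: int = 32, threshold: float = 0.5) -> bool:
    """Flag audio whose frames are unusually uniform in spectral flatness.
    Purely illustrative: real detectors use trained models, not one statistic."""
    n = int(sr * frame_ms / 1000)
    frames = [audio[i:i + n] for i in range(0, len(audio) - n, n)]
    flatness = np.array([spectral_flatness(f) for f in frames])
    return float(np.std(flatness)) < threshold * float(np.mean(flatness))
```

In practice a lone spectral statistic is easy to fool, which is why commercial detectors train classifiers on many such features – but the sketch shows the shape of the problem: the defender is hunting for signal-level fingerprints the synthesis process leaves behind.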
“At the moment it’s unclear whether this is the first attack using AI in this way, or whether there have been other incidents that have gone unreported,” said Philipp Amann, head of strategy at Europol, the European police agency. “In time it’s likely that hackers will use the technology if it makes their attacks more successful or profitable.”
“The attackers responsible for defrauding the British energy company called three times,” said Kirsch. “After the transfer of the $243,000 went through, the hackers called to say the parent company had transferred money to reimburse the UK firm. They then made a third call later that day, again impersonating the CEO, and asked for a second payment. Because the transfer reimbursing the funds hadn’t yet arrived and the third call was from an Austrian phone number, the executive became suspicious, and he didn’t make the second payment.”
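What finally stopped the third payment was not the sound of the voice but the channel it arrived on: an unfamiliar Austrian number and a reimbursement that never landed. That lesson can be captured in a simple out-of-band verification rule. The function below is a hypothetical illustration of such a policy – the whitelist, callback requirement, and threshold are my assumptions, not the victim firm’s actual process.

```python
# Hypothetical payment-approval check. The number whitelist and the
# €50,000 callback threshold are illustrative assumptions only.
KNOWN_NUMBERS = {"+49 30 5550 100"}  # e.g. the parent company's switchboard

def approve_transfer(caller_id: str, amount_eur: float,
                     confirmed_by_callback: bool,
                     limit_eur: float = 50_000) -> bool:
    """Approve a phoned-in transfer request only when the caller ID is
    known AND any large amount has been confirmed by calling back on a
    trusted, independently sourced number."""
    if caller_id not in KNOWN_NUMBERS:
        return False  # e.g. the Austrian number used on the third call
    if amount_eur >= limit_eur and not confirmed_by_callback:
        return False  # urgent large transfers still require a callback
    return True
```

Under a rule like this the very first call – a convincing voice asking for €220,000 with no callback – would already have failed, which is precisely the point: the check keys on channels and amounts, not on how human the voice sounds.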
“The money that was transferred to the Hungarian bank account was subsequently moved to Mexico and distributed to other locations. Investigators haven’t identified any suspects,” added Kirsch.
“It’s also unclear whether the attackers used bots to react to the victim’s questions or used text-to-voice based systems,” said Amann.
“A few software companies offer services that can quickly impersonate voices, and all it takes is a minute’s worth of audio to copy a voice,” said Bobby Filar, director of data science at Endgame, a cybersecurity company. “You don’t need a PhD in mathematics to use it.”
“You can’t go around and be silent the entire time. You’re going to run into situations like this where you expose information that you never thought could be used against you,” said Filar.
“Applying machine learning technology to spoof voices makes cybercrime easier,” said Irakli Beridze, head of the Centre on AI and Robotics at the United Nations Interregional Crime and Justice Research Institute.
The UN centre is researching technologies to detect fake videos, which Beridze said could be an even more useful tool for hackers. In the case of the UK energy firm, it was an unfamiliar phone number that finally aroused suspicion.
“Imagine a video call with [a CEO’s] voice, the facial expressions you’re familiar with. Then you wouldn’t have any doubts at all,” he said. And that technology too is arriving quickly, in the form of so-called life-like digital humans, several of which I’ve discussed and shown off recently, all of them increasingly impressive. The net result is that this attack will be just the beginning of an entirely new type of spoofing attack, and you can only expect things to get crazier from here.