Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
The power of ChatGPT cuts both ways, and criminals want to gain access to it to create malware, scams, and all manner of dastardly things.
After the recent multi-trillion-dollar lawsuits and the furore that erupted when companies like Getty Images discovered that their copyrighted commercial data had been scraped by Artificial Intelligence (AI) companies such as Stability AI to train their AIs, it’s inevitable that companies like OpenAI, the company behind ChatGPT, GPT-3, and GPT-4, will be much more careful about where they get their data from in future. And, unless they buy their training data, they’ll likely rely much more heavily on public open source datasets so as not to be sued.
However, while that might all sound fine, over the past couple of years we’ve seen an increasing number of threat actors using data poisoning techniques to compromise those very same datasets, which means that any AIs trained on them will be compromised too – whether the result is that they don’t work properly, code malware into the programs they’re writing, wrongly classify whatever they see, or exhibit a myriad of other strange, non-programmed behaviours.
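To make the idea concrete, one of the simplest data poisoning techniques is label flipping: an attacker who can tamper with a public dataset quietly relabels a fraction of its examples, so any model trained on it learns a corrupted mapping. The sketch below is purely illustrative – the function name, labels, and toy dataset are all hypothetical, not any real attack or dataset mentioned in this article.

```python
def poison_labels(dataset, target_label, flip_to, fraction=0.1):
    """Simulate a label-flipping poisoning attack: relabel a fixed
    fraction of the examples that carry `target_label` as `flip_to`."""
    n_targets = sum(1 for _, label in dataset if label == target_label)
    n_to_flip = int(fraction * n_targets)
    poisoned, flipped = [], 0
    for text, label in dataset:
        if label == target_label and flipped < n_to_flip:
            poisoned.append((text, flip_to))  # silently corrupted example
            flipped += 1
        else:
            poisoned.append((text, label))
    return poisoned

# Toy stand-in for a scraped public dataset of (sample, label) pairs.
data = [(f"sample {i}", "benign") for i in range(100)]
poisoned = poison_labels(data, "benign", "malicious", fraction=0.2)
print(sum(1 for _, label in poisoned if label == "malicious"))  # → 20
```

The danger, as the paragraph above notes, is that the corrupted examples look identical to clean ones at a glance, so a model trained downstream inherits the attacker’s behaviour without anyone noticing.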
The Future of In-Security, by keynote speaker Matthew Griffin
For small, non-critical AIs this might be fine, but imagine what would happen if these exploits compromised the accuracy and integrity of ChatGPT’s outputs – outputs which are being used by hundreds of millions of people to do millions of things. It’d potentially be what some are calling Armageddon, or an “Oppenheimer” moment.
Unsurprisingly, then, threat actors of all kinds are trying to compromise OpenAI’s ChatGPT in all manner of ways, according to cybersecurity company Check Point Software Technologies.
“At Check Point Research, we can see the Russians trying to break through the geo-regional restrictions put in place around ChatGPT,” said Pete Nicoletti, field chief information security officer for Check Point, in a small gathering of reporters Thursday during the company’s customer and partner event in New York City.
Nicoletti was referring to restrictions on access to the ChatGPT Application Programming Interface (API) that would block inbound requests from Russia.
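Neither Nicoletti nor OpenAI describes how those restrictions are actually implemented, but a geo-regional block of this kind typically boils down to resolving each inbound request’s IP address to a country code and refusing blocked regions. The sketch below is a minimal, hypothetical illustration: the hard-coded prefix table stands in for a real GeoIP database, and none of the names come from OpenAI’s systems.

```python
BLOCKED_COUNTRIES = {"RU"}  # ISO 3166-1 alpha-2 codes the service refuses

# Hypothetical lookup table standing in for a real GeoIP database;
# real services resolve full IP ranges, not string prefixes.
IP_PREFIX_TO_COUNTRY = {
    "5.18.": "RU",
    "8.8.": "US",
}

def country_of(ip: str) -> str:
    """Map an IP address to a country code via the toy prefix table."""
    for prefix, country in IP_PREFIX_TO_COUNTRY.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def allow_request(ip: str) -> bool:
    """Reject requests that resolve to a blocked region."""
    return country_of(ip) not in BLOCKED_COUNTRIES

print(allow_request("8.8.8.8"))   # → True  (resolves to US)
print(allow_request("5.18.0.1"))  # → False (resolves to RU, blocked)
```

IP-based checks like this are easy to evade with VPNs and proxies, which is part of why, as Check Point reports, blocked actors keep probing for ways through.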
Check Point Research is a threat detection facility that keeps watch for new forms of malware. Nicoletti did not go into detail about the means by which Check Point Research monitors systems to detect the attempted access.
Nicoletti said Russian hackers’ probing of the code reflects just one of an expanding number of parties trying to gain access.
“It’s going to be used by good actors and bad,” said Nicoletti of ChatGPT. And we’ve already seen it being used by bad actors to code malware and ransomware …
ChatGPT’s situation is not quite like the one that led to the 2017 EternalBlue threat that was the result of code leaked from a lab. Instead, said Nicoletti, use of ChatGPT is “the use of a generally open platform by many different parties,” similar to lots of AI model architecture use cases in the research community.
Nicoletti noted that on Reddit, an attempted exploit has popped up called DAN, short for “Do Anything Now,” which would use the chat prompt to manipulate ChatGPT into producing text that escapes the guardrails put in place by OpenAI to prevent it from producing some text such as hate speech.
Nicoletti said it is not certain if any zero-day exploits have yet appeared for ChatGPT, but the most likely vector of attack, when an exploit finally appears, will be an enhanced form of phishing attack, he said.
“It will be highly targeted to you, because they already have multiple sources of personal information they have aggregated about you from multiple hacked sites, as well as all of your social media postings, and now they can craft much more specific language to make it relevant to you, and to each victim,” said Nicoletti, comparing the approach to a “Narrow Casting” kind of data service that makes possible mass distribution of highly targeted phishing attacks.
During Check Point’s keynote presentation the same day, founder and CEO Gil Shwed declared that 2023 will be the “Year of AI” and that the company’s products will make increasing use of AI throughout the lineup.