In bizarre twist scammers are scamming scammers with scam GPTs

WHY THIS MATTERS IN BRIEF

You aren’t the only one to get scammed by scammers – and now scammers are scamming one another with fake AI GPTs based on ChatGPT.

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

By now you should know that Artificial Intelligence (AI) systems like ChatGPT and DarkBertGPT are being used by criminals to scam people. But, apparently, in a growing number of cases would-be victims of cyber crime are having the last laugh as scammers increasingly scam one another with these tools. As it turns out, illicit clones of ChatGPT such as FraudGPT, which are purposely built for malicious applications, aren’t very useful to cybercriminals, according to Sophos researchers. While threat actors have found some uses for them, many of these GPTs aren’t up to the task of creating malware or finding new vulnerabilities.

After exploring cybercrime forums, Sophos’ X-Ops advanced threat response team discovered widespread skepticism surrounding the new AI tools for black hats. Threat actors seem to be “wrestling with the same issues and problems as the rest of us,” they say.

WormGPT and FraudGPT, for example, received a lot of media coverage as the tools became popular on underground forums. It was assumed that they’d enable cyber crooks to scale their operations exponentially by delivering new strains of malware and automating scammers’ tedious work.

The Future of Cybercrime and Cyber Security, by Futurist Matthew Griffin

The bandwagon kept on rolling, with new models popping up regularly. Soon after came XXXGPT, Evil-GPT, WolfGPT, BlackHatGPT, DarkGPT, HackBot, PentesterGPT, and PrivateGPT. GPT stands for Generative Pre-trained Transformer, a form of AI Large Language Model (LLM) trained on massive datasets. Jailbroken GPTs have no restrictions on the content they generate, and they can be trained on the kind of information cybercriminals typically use.

After a while, WormGPT’s developers shut down the project, with media attention a contributing factor.

“We found multiple GPT derivatives claiming to offer capabilities similar to WormGPT and FraudGPT,” Sophos’ report reads. “However, we also noted skepticism about some of these, including allegations that they’re scams (not unheard of on criminal forums) – hackers are unimpressed.”

Underground forum dwellers expressed the same skepticism that ChatGPT itself often receives, describing the dark GPT variants as “overrated, overhyped, redundant, and unsuitable for generating malware.”

On top of that, threat actors have concerns about the security of the final product, such as whether AI-generated code would bypass antivirus and endpoint detection and response (EDR) tools. As a result, real-world applications remain “aspirational.”

“We found only a few examples of threat actors using LLMs to generate malware and attack tools, and that was only in a proof-of-concept context,” researchers wrote. “However, others are using it effectively for other work, such as mundane coding tasks.”

The few use cases where malicious GPTs are genuinely useful revolve around social engineering, which hackers have been quick to automate. Some threat actors use LLMs for chatbots and auto-responses, with varying levels of success.

The researchers found many discussions focused on jailbreak tactics for legitimate AI models, as well as compromised ChatGPT accounts for sale.

“Unsurprisingly, unskilled ‘script kiddies’ are interested in using GPTs to generate malware, but are – again unsurprisingly – often unable to bypass prompt restrictions, or to understand errors in the resulting code,” the report said.

Overall, AI is not that hot a topic on any of the forums the researchers visited. For comparison, they found fewer than 100 posts on the topic across two of the forums, but almost 1,000 posts about cryptocurrencies over the same period. And many of the LLM-related posts were simply compromised ChatGPT accounts for sale.

“The numbers suggest that there hasn’t been an explosion in LLM-related discussions in the forums – at least not to the extent that there has been on, say, LinkedIn. That could be because many cybercriminals see generative AI as still being in its infancy,” researchers said, also noting that, unlike some LinkedIn users, hackers have little to gain from speculation. In general, the researchers observed a lot of skepticism. Hackers were worried about operational security, and some even had ethical concerns about using AI.

“We found little evidence of threat actors admitting to using AI in real-world attacks, which is not to say that that’s not happening. But most of the activity we observed on the forums was limited to sharing ideas, proof-of-concepts, and thoughts.”

So, what happened to WormGPT and FraudGPT? Released in June 2023, WormGPT was a private chatbot service, purportedly based on the GPT-J 6B LLM, offered as a commercial service on several criminal forums. For a short-lived period it was priced at 550 euros for yearly access and 100 euros for one month.
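For context, GPT-J 6B is EleutherAI’s openly released six-billion-parameter model, which is part of why a private chatbot service could plausibly be built on top of it. Below is a minimal sketch of loading and prompting that base model with the Hugging Face transformers library; the model ID and generation settings are assumptions drawn from EleutherAI’s public release, not anything Sophos reported about how WormGPT was actually set up.

```python
# Minimal sketch: loading EleutherAI's openly released GPT-J 6B base model
# with Hugging Face transformers. The model ID and settings below come from
# the public release; this is not a reconstruction of WormGPT's service.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6b"  # public Hugging Face model repository

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # ~24 GB of weights in fp32

prompt = "Generative Pre-trained Transformers are"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation from the base model.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```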

Like many other cybercrime tools, WormGPT launched with a campaign of posters, examples, and other promotional material.

It was shut down abruptly in August 2023, with its contribution to any real-world attacks “unknown.” WormGPT’s developers said, “with great sadness,” that they had never expected to gain so much visibility, that they had not wanted to create “something of this magnitude,” and that they had just offered an “unrestricted ChatGPT.”

“We encountered numerous individuals from the media whose sole objective was to portray us as a dreadful, highly illegal, and above all, black-hat tool,” WormGPT creators wrote. “The five of us, who are responsible for WormGPT, have come together and decided to put an end to the project, letting go of everything we believed in and distancing ourselves a bit from a society that opposes freedom of expression and code. Thank you to all who believed in the project.”

This post was not taken lightly. While some expressed regret over the closure, other forum users were irritated. One Hackforums user noted that their license had stopped working, and users on both Hackforums and XSS alleged that the whole thing had been a scam, according to Sophos.

FraudGPT faced the same accusations over misleading claims, such as being able to generate “a range of malware that antivirus software cannot detect.”

Nor did other GPT derivatives receive much love.

“None of the AI-generated malware – virtually all of it in Python, for reasons that aren’t clear – we observed on Breach Forums or Hackforums appears to be novel or sophisticated. That’s not to say that it isn’t possible to create sophisticated malware, but we saw no evidence of it on the posts we examined,” Sophos X-Ops researchers concluded.
