
People are pirating GPT-4 by scraping companies' exposed API keys

WHY THIS MATTERS IN BRIEF

If someone gets hold of your API keys, they can put you on the hook for huge amounts of money without you ever knowing it.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

In yet another example of nefarious actors stealing and misusing things that aren't theirs, people on the Discord server for the r/ChatGPT subreddit are advertising stolen OpenAI API tokens that have been scraped from other people's code, according to chat logs, screenshots, and interviews. Anyone using the stolen API keys can then run GPT-4 while racking up usage charges on the compromised OpenAI account.

 

In one case, someone has stolen access to a valuable OpenAI account with an upper limit of $150,000 worth of usage, and is now offering that access for free to other members, including via a website and a second dedicated Discord server. That server has more than 500 members.

People who want to use OpenAI’s large language models like GPT-4 need to make an account with the company and associate a credit card with it. OpenAI then gives them a unique API key which allows them to access OpenAI’s tools. For example, an app developer can use the API to build ChatGPT or other language models into their app. The API key gives them access to those tools, and OpenAI charges a fee based on usage: “Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps),” OpenAI warns users. If the key is stolen or exposed, anyone can start racking up charges on that person’s account.
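
To make that concrete, here is a minimal sketch, not taken from any of the projects involved, of how a server-side script might authenticate with the key supplied through an environment variable rather than written into the source code. The endpoint and bearer-token header are OpenAI's documented interface; the environment variable name and model choice are just illustrative conventions.

```python
import os
import requests  # third-party HTTP library, assumed to be installed

# Read the key from the environment instead of hard-coding it in the source,
# so anyone who can read this file never sees the secret itself.
api_key = os.environ["OPENAI_API_KEY"]

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(response.json())
```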

The method by which the pirate gained access highlights a security risk that paying OpenAI users need to keep in mind. The person says they scraped a website that allows people to collaborate on coding projects, according to screenshots. In many cases, it appears the authors of code hosted on the site, called Replit, did not realize they had included their OpenAI API keys in their publicly accessible code, exposing them to third parties.

 

“My acc [account] is still not banned after doing crazy shit like this,” the pirate, who goes by the handle Discodtehe, wrote in the r/ChatGPT Discord server Wednesday.

In the past few days, Discodtehe’s use of at least one stolen OpenAI API key appears to have ramped up. They shared multiple screenshots of the account usage increasing over time. One recent screenshot shows usage this month of $1,039.37 out of $150,000.

“If we have enough people they might not ban us all,” Discodtehe wrote on Wednesday.

Discodtehe appears to have been scraping exposed API keys for some time, however. In one Discord message from March, they wrote, “the other day I scraped repl.it and found over 1,000 working OpenAI API keys.”

“I didn’t even do a full scrape, I only looked at about half of the results,” they added.

 

Replit is an online tool for writing code collaboratively. Users can create projects, which Replit calls “Repls,” and these are public by default, Cecilia Ziniti, Replit’s general counsel and head of business development, told reporters in an E-Mail. Replit also offers a mechanism for handling API keys called Secrets, Ziniti added.

“Some people accidentally do hard code tokens into their Repl’s code, rather than storing them in Secrets. Ultimately, users are responsible for safeguarding their own tokens and should not be storing them in public code,” Ziniti said.
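
In practice the difference is a single line. The snippet below is a hypothetical before-and-after for a Python Repl, assuming a Secret named OPENAI_API_KEY; Replit exposes Secrets to the running program as environment variables.

```python
import os

# Exposed: anyone viewing a public Repl can copy a hard-coded token like this.
# openai_api_key = "sk-EXAMPLE-DO-NOT-DO-THIS"

# Safer: store the token in Replit's Secrets pane and read it at runtime.
openai_api_key = os.getenv("OPENAI_API_KEY")
if openai_api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set in Secrets")
```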

Ziniti said the company scans projects for popular API key types, such as those from GitHub. After being alerted to this new API key issue by Motherboard, Ziniti said, “Going forward, Replit will be reviewing our token scanning system to ensure that users are warned about accidentally exposing ChatGPT tokens.”
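
Neither Replit nor OpenAI has published the internals of their scanners, but the general idea can be sketched with a simple pattern match. The example below is purely illustrative: it assumes the historical “sk-” key prefix and only walks Python files, whereas production scanners use vetted, provider-specific rules.

```python
import re
from pathlib import Path

# Illustrative pattern only: assumes the historical "sk-" prefix for OpenAI keys.
OPENAI_KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")

def find_suspected_keys(root: str) -> list[tuple[str, str]]:
    """Walk a project directory and report strings that look like OpenAI keys."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in OPENAI_KEY_PATTERN.finditer(text):
            hits.append((str(path), match.group(0)))
    return hits

if __name__ == "__main__":
    for file_name, token in find_suspected_keys("."):
        # Print only a truncated token so the scan itself does not leak secrets.
        print(f"Possible exposed key in {file_name}: {token[:8]}...")
```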

A ChatGPT community member said that Discodtehe “should definitely stop.”

 

“This is a steadily growing industry and so of course there’ll be crime in it sooner or later, but I’m shocked at how quickly it’s become an issue. The theft of corporate accounts is bad for sure, but I’m personally more bothered about the way these guys are willing to rob regular people who posted their keys by mistake,” they added.

Discodtehe went a step further than just scraping tokens. Another Discord server, called ChimeraGPT, is offering “free access to GPT-4 and GPT-3.5-turbo!,” according to chat logs. Discodtehe said in another message that ChimeraGPT is using the same organization as the stolen API key discussed in the r/ChatGPT Discord server. Motherboard found a GitHub repository that recommends using ChimeraGPT for getting a free API key. At the time of writing, the server had 531 members.

Discodtehe said in another message they also created a website where people can request free access to the OpenAI API. Ironically, this site is also hosted on Replit.

The site tells users to enter their E-Mail address, click on a link sent by OpenAI to accept the invite, and set their default billing organization to “weeeeee,” which Discodtehe appears to be using.

 

“enjoy free gpt-4 api access,” the website concludes. On Wednesday the organization linked to the OpenAI account had 27 members, according to one screenshot. By Thursday, that number had jumped to 40, according to another.

Discodtehe did not respond to a request for comment. A manager of the r/ChatGPT Discord server called “Dawn” said their volunteer mods cannot check every project, and that “we are issuing a ban on the user.”

An OpenAI spokesperson said that “We conduct automated scans of big open repositories and we revoke any OpenAI keys discovered. We advise that users not reveal their API key after they are generated. If users think their API key may have been exposed, we urge them to rotate their key immediately.”

The community member, however, said “I think OpenAI holds a little bit of culpability here for how their authentication process works too though.”

 

“You don’t hear about API access to Google Cloud accounts getting stolen like this because Google has better authentication procedures. I hope OpenAI’s integration with Microsoft brings some better security for users going forward,” they said.

Discodtehe referred to the usage as “just borrowing” in another message. They wrote that the usage is “just quote, no bills have been paid yet.”

“In the end, OpenAI will likely foot the bill,” they said.

OpenAI did not immediately respond to a follow up question asking if it would foot the bill.
