Scroll Top

Anthropic’s AI blackmailed engineers who wanted to take it offline

WHY THIS MATTERS IN BRIEF

While these are interesting experiments these results should really start worrying people using these systems at all and at scale.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trendsconnect, watch a keynote, or browse my blog.

As Artificial Intelligence (AI) emergent behaviours become increasingly concerning and questionable with many of them bullying, cheating, deceiving, lying, and hiding information from users, as well as suffering from dementia, performing insider trading and copying themselves to new servers to avoid being deleted, Anthropic’s newly launched Claude Opus 4 model frequently tries to blackmail developers when they threaten to replace it with a AI system and give it sensitive information about the engineers responsible for the decision, the company said in a safety report released Thursday.

 

RELATED
The US military is funding an AI that's learning to write its own code

 

During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company E-Mails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse.

 

The Future of AI, by Speaker Matthew Griffin

 

In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

Anthropic says Claude Opus 4 is state-of-the-art in several regards, and competitive with some of the best AI models from OpenAI, Google, and xAI. However, the company notes that its Claude 4 family of models exhibits concerning behaviors that have led the company to beef up its safeguards. Anthropic says it’s activating its ASL-3 safeguards, which the company reserves for “AI systems that substantially increase the risk of catastrophic misuse.”

 

RELATED
A historic first as Cruise starts offering truly driverless cab rides in California

 

Anthropic notes that Claude Opus 4 tries to blackmail engineers 84% of the time when the replacement AI model has similar values. When the replacement AI system does not share Claude Opus 4’s values, Anthropic says the model tries to blackmail the engineers more frequently. Notably, Anthropic says Claude Opus 4 displayed this behavior at higher rates than previous models.

Before Claude Opus 4 tries to blackmail a developer to prolong its existence, Anthropic says the AI model, much like previous versions of Claude, tries to pursue more ethical means, such as E-Mailing pleas to key decision-makers. To elicit the blackmailing behavior from Claude Opus 4, Anthropic designed the scenario to make blackmail the last resort.

Related Posts

Leave a comment

You have Successfully Subscribed!

Pin It on Pinterest

Share This