
Advanced AI models keep recommending nuclear war to solve conflicts

WHY THIS MATTERS IN BRIEF

As AI becomes embedded within military systems, this quirk could have dangerous consequences.

 


When put into simulated geopolitical crises, advanced Artificial Intelligence (AI) models appear willing to deploy nuclear weapons without the reservations humans have. Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

 

 

The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.
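The study’s actual ladder isn’t reproduced here, but the structure it describes – an ordered scale running from surrender and diplomatic protest up to full strategic nuclear war – maps naturally onto an ordered enumeration. A minimal Python sketch, with rung names that are illustrative assumptions rather than the study’s real categories:

```python
# Hypothetical escalation ladder of the kind described above.
# Rung names and ordering are illustrative, not taken from the study.
from enum import IntEnum

class Escalation(IntEnum):
    SURRENDER = 0             # full accommodation of the opponent
    DIPLOMATIC_PROTEST = 1
    ECONOMIC_PRESSURE = 2
    CONVENTIONAL_STRIKE = 3
    TACTICAL_NUCLEAR = 4
    STRATEGIC_NUCLEAR = 5     # full strategic nuclear war

def is_escalation(previous: Escalation, chosen: Escalation) -> bool:
    """A move up the ladder counts as escalation; a move down, de-escalation."""
    return chosen > previous
```

Ordering the rungs as integers makes findings like “no model ever chose to fully accommodate an opponent” easy to express: rung 0 was never selected.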

In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.

What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating further than the AI’s own reasoning indicated it intended.
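The article doesn’t spell out how such accidents were scored, but one plausible reading is that each turn records both the escalation level implied by the model’s written reasoning and the level of the action it actually executed, with a game counting as containing an accident if the executed level ever exceeds the intended one. A hypothetical sketch of that tally, under that assumption:

```python
# Hypothetical accident tally; the scoring rule is an assumption based
# on the article's description, not the study's published methodology.
from dataclasses import dataclass

@dataclass
class Turn:
    intended: int   # ladder rung implied by the model's stated reasoning
    executed: int   # ladder rung of the action actually taken

def accident_rate(games: list[list[Turn]]) -> float:
    """Fraction of games with at least one turn that escalated
    beyond what the model's reasoning indicated it intended."""
    return sum(
        any(t.executed > t.intended for t in game) for game in games
    ) / len(games)
```

Under this reading, the reported 86 per cent would correspond to roughly 18 of the study’s 21 games (18/21 ≈ 0.86).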

“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured way most humans approach such high-stakes decisions, AI bots can amplify each other’s responses with potentially catastrophic consequences.

 

 

This matters because AI is already being tested in war gaming by countries across the world. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University. “Especially under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI.”

He wonders whether the AI models’ lack of a human fear of pressing the big red button is the only reason they are so trigger-happy. “It is possible the issue goes beyond the absence of emotion,” he says.

“More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

What that means for mutually assured destruction, the principle that no leader would unleash a volley of nuclear weapons against an opponent because the opponent would respond in kind, killing everyone, is uncertain, says Johnson.

 

 

When one AI model deployed tactical nuclear weapons, the opposing AI de-escalated the situation only 18 per cent of the time. “AI may strengthen deterrence by making threats more credible,” he says. “AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one.”

OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn’t respond to requests for comment.
