
US military report recommends giving AI autonomous authority to launch nuclear weapons


WHY THIS MATTERS IN BRIEF

As more countries start fielding hypersonic weapons, the gap between launch and strike narrows significantly, making the option of striking back automatically ever more appealing.

 

Interested in the Exponential Future? Connect, download a free E-Book, watch a keynote, or browse my blog.

Science fiction films are riddled with “AIs and robots gone bad” that trigger nuclear Armageddon, so it was with some despair that I reported the apparent re-activation of Russia’s Artificial Intelligence (AI) powered Dead Hand system, which, in the event the Russian high command were knocked out, would launch all-out nuclear war, and at the time I called it one of the worst use cases of AI imaginable. Now, though, a pair of researchers associated with the US Air Force have announced that they want to give the nuclear codes to an AI. All this comes after the US Pentagon announced it had strapped a semi-autonomous AI robo-hacker called Mayhem into its mission-critical systems to defend them, the development of a single AI that will be able to monitor and track global risks, the development of the world’s first self-evolving robots, and a warning that killer robots are essentially already here.

 

Air Force Institute of Technology associate dean Curtis McGiffin and Louisiana Tech Research Institute researcher Adam Lowther, also affiliated with the Air Force, co-wrote an article — with the ominous title “America Needs a ‘Dead Hand’” — arguing that the US needs to develop “an automated strategic response system based on artificial intelligence.”

In other words, they want to give an AI the nuclear codes. And yes, as the authors admit, it sure sounds a lot like the “Doomsday Machine” from Stanley Kubrick’s 1964 satire “Dr. Strangelove.”

The “Dead Hand” of the title refers to the Soviet Union’s automated system that would have launched nuclear weapons if certain conditions were met, including the death of the country’s leader.

 

This time, though, the AI-powered system suggested by Lowther and McGiffin wouldn’t even wait for a first strike against the US to occur — it would know what to do ahead of time.

“[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position,” they wrote.

Attack-time compression is the phenomenon whereby modern technologies, including highly sensitive radar and near-instantaneous communications, have drastically cut the time between detecting an attack and having to decide how to respond. The challenge is that newer weapons, particularly hypersonic cruise missiles and hypersonic aircraft, shrink that window even further.
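
To get a sense of how narrow that window is, here is a minimal back-of-the-envelope sketch in Python. All of the ranges, speeds, and detection overheads below are assumed ballpark figures chosen purely for illustration, not operational data.

```python
# Illustrative only: rough arithmetic behind "attack-time compression".
# Every range, speed, and overhead here is an assumed ballpark figure.

MACH_1_KM_S = 0.343  # approximate speed of sound, km/s


def flight_minutes(range_km: float, speed_km_s: float) -> float:
    """Minutes for a weapon to cover range_km at an average speed_km_s."""
    return range_km / speed_km_s / 60.0


# (assumed range in km, assumed average speed in km/s)
scenarios = {
    "ICBM over ~10,000 km": (10_000, 7.0),
    "SLBM launched ~2,000 km offshore": (2_000, 6.0),
    "Hypersonic glider, ~2,000 km at ~Mach 10": (2_000, 10 * MACH_1_KM_S),
}

ASSESSMENT_MINUTES = 7.0  # assumed time to detect and confirm the launch

for name, (rng, speed) in scenarios.items():
    total = flight_minutes(rng, speed)
    window = max(total - ASSESSMENT_MINUTES, 0.0)
    print(f"{name}: ~{total:.0f} min flight, ~{window:.0f} min left to decide and direct")
```

Even on these rough numbers, the shorter-range scenarios leave only a few minutes at best between a confirmed detection and a directed response, and that shrinking margin is exactly the gap the authors want automation to fill.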

 

“These new technologies are shrinking America’s senior-leader decision time to such a narrow window that it may soon be impossible to effectively detect, decide, and direct nuclear force in time,” Lowther and McGiffin argue.

The idea is to use an AI-powered system to negate any surprise capabilities or first-strike advantages an enemy might have. It would replace what Lowther and McGiffin describe as a “system of systems, processes and people” that “must inevitably be capable of detecting launches anywhere in the world and have the ability to launch a nuclear strike against an adversary.”
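
Stripped to its essentials, the kind of pipeline the authors describe is a detect-decide-direct loop driven by a response table fixed in advance. The Python sketch below is purely conceptual, with invented event types, thresholds, and placeholder responses, and reflects no real system, sensor, or policy; it is only meant to show what “predetermined response decisions” means in software terms.

```python
# Conceptual sketch only: invented placeholders, not any real system or policy.

from dataclasses import dataclass
from enum import Enum, auto


class Assessment(Enum):
    NO_THREAT = auto()
    AMBIGUOUS = auto()
    CONFIRMED_ATTACK = auto()


@dataclass
class SensorEvent:
    source: str        # e.g. "early-warning radar", "infrared satellite"
    confidence: float  # 0.0 to 1.0


# The "predetermined" part: humans fix this mapping in advance,
# and the machine executes it at machine speed.
PREDETERMINED_RESPONSES = {
    Assessment.NO_THREAT: "continue monitoring",
    Assessment.AMBIGUOUS: "alert leadership and raise readiness",
    Assessment.CONFIRMED_ATTACK: "execute pre-authorized response plan",
}


def detect(events) -> Assessment:
    """'Detect': fuse sensor events into a single assessment (toy two-sensor rule)."""
    confident = [e for e in events if e.confidence > 0.9]
    if len(confident) >= 2:
        return Assessment.CONFIRMED_ATTACK
    if confident:
        return Assessment.AMBIGUOUS
    return Assessment.NO_THREAT


def decide_and_direct(assessment: Assessment) -> str:
    """'Decide and direct': look up the pre-decided action, with no human in the loop."""
    return PREDETERMINED_RESPONSES[assessment]


if __name__ == "__main__":
    events = [SensorEvent("early-warning radar", 0.95),
              SensorEvent("infrared satellite", 0.97)]
    print(decide_and_direct(detect(events)))
```

Laid out this way, the only human contribution left is writing the table in advance; everything downstream runs at machine speed, which is exactly the trade-off that worries critics.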

Not surprisingly, as Bulletin of the Atomic Scientists editor Matt Field points out, handing the nuclear codes over to an AI could have plenty of negative side effects. One of them is automation bias: people tend to blindly trust what machines tell them, even favoring automated decision-making over human decision-making.

 

And then there’s the simple fact that such an AI wouldn’t have much real-world data to learn from, Field argues, meaning most of the data fed to it would have to be simulated. And if “Dr. Strangelove” is anything to go by, the system only works as a deterrent if every major world power knows it exists; kept secret, it becomes pointless and merely risks total annihilation. Or as Dr. Strangelove himself puts it: “of course, the whole point of the doomsday machine is lost if you keep it a secret!”

Source: Bulletin of the Atomic Scientists
