Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
As more countries start fielding hypersonic weapons, the gap between launch and strike narrows significantly, and automated retaliation becomes increasingly appealing.
Science fiction films are riddled with “AIs and robots gone bad” that trigger nuclear Armageddon, so it was with some despair that I reported the apparent re-activation of a Russian Artificial Intelligence (AI) “Dead Hand” system that, in the event the Russian high command were knocked out, would launch all-out nuclear war. At the time I called it one of the worst use cases of AI imaginable. Now, though, a pair of researchers associated with the US Air Force have announced that they want to give the nuclear codes to an AI. All this comes after the US Pentagon announced that it had strapped a semi-autonomous AI robo-hacker called Mayhem into the Pentagon’s mission-critical systems to defend them, and after the development of a single AI that will be able to monitor and track global risks, the development of the world’s first self-evolving robots, and a warning that killer robots are essentially here.
Air Force Institute of Technology associate dean Curtis McGiffin and Louisiana Tech Research Institute researcher Adam Lowther, also affiliated with the Air Force, co-wrote an article — with the ominous title “America Needs a ‘Dead Hand’” — arguing that the US needs to develop “an automated strategic response system based on artificial intelligence.”
In other words, they want to give an AI the nuclear codes. And yes, as the authors admit, it sure sounds a lot like the “Doomsday Machine” from Stanley Kubrick’s 1964 satire “Dr. Strangelove.”
The “Dead Hand” of the title refers to the Soviet Union’s automated system, which would have launched nuclear weapons if certain conditions were met, including the death of the Soviet leader.
This time, though, the AI-powered system suggested by Lowther and McGiffin wouldn’t even wait for a first strike against the US to occur — it would know what to do ahead of time.
“[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position,” they wrote.
Attack-time compression is the phenomenon whereby modern technologies, including highly sensitive radar and near-instantaneous communication, have drastically narrowed the window between detecting an attack and deciding how to respond. The challenge: modern weapon technologies, particularly hypersonic cruise missiles and aircraft, cut that window even further.
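The squeeze is easy to see with a back-of-envelope calculation: the warning window is roughly the detection range divided by the weapon's speed. The figures below are purely illustrative assumptions, not values from the article, but they show why a low-flying hypersonic weapon, detected much closer to its target, compresses the decision window so sharply.

```python
# Illustrative comparison of warning times (detection range / speed).
# All figures are hypothetical, chosen only to show the effect.

def warning_minutes(detection_range_km: float, speed_km_s: float) -> float:
    """Minutes between first detection and impact."""
    return detection_range_km / speed_km_s / 60

# ICBM: spotted near launch by early-warning satellites/radar,
# so almost the whole ~10,000 km flight at ~7 km/s is warning time.
icbm = warning_minutes(10_000, 7)

# Hypersonic glide vehicle: flies low, so ground radar might only
# pick it up ~2,000 km out, travelling at roughly Mach 10 (~3.4 km/s).
hgv = warning_minutes(2_000, 3.4)

print(f"ICBM warning window:       {icbm:.0f} minutes")
print(f"Hypersonic warning window: {hgv:.0f} minutes")
```

Even with generous assumptions, the hypersonic case leaves well under half the decision time of the ballistic one, which is the "impossible position" Lowther and McGiffin describe.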
“These new technologies are shrinking America’s senior-leader decision time to such a narrow window that it may soon be impossible to effectively detect, decide, and direct nuclear force in time,” Lowther and McGiffin argue.
The idea is to use an AI-powered system to negate any surprise advantage an adversary’s strike might confer. It would replace what Lowther and McGiffin describe as a “system of systems, processes and people” that “must inevitably be capable of detecting launches anywhere in the world and have the ability to launch a nuclear strike against an adversary.”
Not surprisingly, handing the nuclear codes to an AI could have plenty of negative side effects, as Bulletin of the Atomic Scientists editor Matt Field points out in his piece. One of them is automation bias: people tend to blindly trust what machines tell them, even favouring automated decision-making over human judgment.
And then there’s the simple fact that such an AI wouldn’t have much real-world data to learn from, Field argues, meaning most of the data fed to it would have to be simulated. And if “Dr. Strangelove” is anything to go by, the automated system could only deter attacks on the US if all major world powers were made aware of it; kept secret, it would be pointless, and would risk total annihilation. Or as Dr. Strangelove himself puts it: “Of course, the whole point of the doomsday machine is lost if you keep it a secret!”
Source: Bulletin of the Atomic Scientists