
Gates, Hawking, Musk and thousands more warn against the dangers of AI

WHY THIS MATTERS IN BRIEF

The research, development and weaponisation of AI have made experts uneasy about its impact on humanity’s future. By voicing their concerns, they hope to prompt a global debate on the future development, proliferation and regulation of lethal autonomous weapons systems.

 

Over a thousand high profile artificial intelligence (AI) experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

The letter, which can be seen below, was presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, and was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and professor Stephen Hawking along with at least another thousand AI and robotics researchers.

 


 

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would trigger an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific, hard-to-obtain materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious – autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

Toby Walsh, professor of AI at the University of New South Wales, said: “We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

 


 

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”. But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like a “family pet and taken care of all the time”.

At a UN conference in Geneva earlier this month that discussed the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.

Here’s a copy of the letter, which can be found and signed on the Futureoflife.org website:

 

“Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

