
World’s top AI experts met to discuss Doomsday scenarios, and solutions

WHY THIS MATTERS IN BRIEF

Showcasing how AI will destroy the world is easy, but finding a solution isn’t.

 

Today’s press is full of column inches on the future impact of artificial intelligence (AI) on society. On one side of the fence AI optimists predict a brave new world of cancer cures, interstellar travel, new forms of energy and sky taxis. On the other, pessimists predict all-out nuclear war and a world where humans are enslaved by autonomous, self-evolving, self-replicating machines and robots, like the descendants of this sausage robot, with intellects that far exceed our own. And in a future where, by 2047, the microchip in your connected sneaker is predicted to have an IQ of 10,000, they might not be far wrong. Although I for one hope they are – unless, of course, I get run over by a rogue self-driving car before things hit the fan.

 


 

The conversation over whether AI will save or destroy humanity isn’t anything new – it’s been raging since the early 1960s – but as AI continues to break new ground and increasingly finds its way into every corner of our world’s digital fabric, the conversations and debates have been getting louder and more heated. The real answer, of course, is that no one knows for certain what will happen – both scenarios are possible, but the likelihood is that the reality will land somewhere in the middle.

Now two scientists – veteran AI researcher Eric Horvitz, managing director of Microsoft’s research lab in Redmond, and Doomsday Clock chairman Lawrence Krauss – seem to have had enough, and they’ve decided to do something about it: they’ve decided to have a workshop. Yes. That’s right – it’s workshop time. But this is no ordinary workshop. It’s a, cue terrifying music, “Doomsday Workshop,” and that’s not just because they ran out of coffee…

The workshop, dubbed “Envisioning and Addressing Adverse AI Outcomes” – you have to have a catchy name – took place late last week at Arizona State University (ASU) and was funded by SpaceX, Tesla and OpenAI founder Elon Musk, who thinks an advanced AI will take down the internet, and Skype co-founder Jaan Tallinn.

A kind of AI doomsday war game, the workshop brought together forty of the world’s top scientists, cybersecurity experts and policy makers and organised them into two teams.

The Red Team were the attackers and the Blue Team the defenders, and between them they got to play out their worst case scenarios – everything from mass stock market manipulation to all-out autonomous, global nuclear warfare.

 


 

Horvitz is an optimist, which is probably a good thing bearing in mind he’s made AI his life’s work, but when the games first started two years ago many of his more dystopian minded colleagues thought his views were too optimistic. That said, Horvitz believes that for people to embrace AI it first has to earn the public’s confidence and trust, and that means airing and addressing the issues, for him at least, in a public forum.

“There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology,” said Horvitz. “To maximally gain from the upside we also have to think through possible outcomes in more detail than we have before, and think about how we’d deal with them.”

The participants had to submit entries for their worst case scenarios, but the scenarios had to be realistic and grounded in today’s technologies, as well as those on the horizon between five and twenty-five years out. The participants with the “winning” doomsday scenarios were chosen to lead the panels, each of which featured around four experts who discussed the attack and how to prevent it.

As it turned out many of the participants seem to have missed their calling as science fiction writers, and plenty of them would have made Arthur C. Clarke proud. In many cases though little imagination was required, as scenarios such as using AI to sway elections, or to unleash crippling cyberattacks, were thrown onto the table. Horvitz, for example, cited research that shows how to alter the way a self-driving car sees traffic signs so that the vehicle misreads a “stop” sign as “yield.”
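For the technically curious, here’s a minimal sketch of the kind of adversarial perturbation that underpins attacks like the one Horvitz cited, in the style of the well known Fast Gradient Sign Method. To be clear, this assumes a generic PyTorch image classifier – the model, inputs and epsilon value are illustrative placeholders, not the actual research code.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method (FGSM) sketch: nudge every pixel a tiny
    amount in the direction that most increases the classifier's loss.
    Illustrative only - not the traffic sign study's actual code."""
    image = image.clone().detach().requires_grad_(True)
    # How wrong is the model right now, and which way does each pixel push it?
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small enough epsilon the perturbed sign looks unchanged to a human eye, yet the classifier’s answer can flip entirely – which is precisely what makes this family of attacks so unnerving for self-driving cars.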

Doesn’t that sound fun?

 


 

The possibility of intelligent, automated cyberattacks – something we’re already beginning to see the first signs of – was the scenario that most worried John Launchbury, a director at the US Defense Advanced Research Projects Agency (DARPA), and Kathleen Fisher, chairwoman of the computer science department at Tufts University, who led that session and who was also involved in helping to create the world’s first hack proof code.

What happens if someone constructs a cyber weapon designed to hide itself and evade all attempts to dismantle it? Now imagine it spreads beyond its intended target to the broader internet. Think Stuxnet, the computer virus created to attack the Iranian nuclear program that escaped into the wild, but stealthier, more autonomous and self-evolving. Something that, ironically, thanks to Horvitz’s lab in Cambridge in the UK, we might already be starting to see.

“We’re talking about malware on steroids that is AI enabled,” said Fisher, an expert in programming languages, who presented her scenario under a slide bearing the words “What could possibly go wrong?”

While the Red Team, it seemed, had no limit to the number of doomsday scenarios they could come up with, the Blue Team, worryingly, struggled. And as for their efforts to counter a “wild” AI? According to Launchbury their answers were disappointing.

The Blue Team, for example, argued that the advanced AI needed to create a “wild” AI would require a lot of computing power and communication, so it would be easier to detect. But the Red Team felt it would be easy for such an AI to hide behind innocuous activities, Fisher said. For example, attackers could get innocent users to play an addictive video game to cover up their work.

To prevent a stock market manipulation scenario dreamed up by University of Michigan computer science professor Michael Wellman, the Blue Team members suggested treating attackers like malware, trying to recognise them via a database of known types of hacks. Wellman, who has worked in AI for more than 30 years and calls himself an old-timer on the subject, said that approach could be useful in finance, but again it raised more questions than answers.
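To make the Blue Team’s antivirus analogy concrete, here’s an illustrative sketch of what screening trading activity against a database of known manipulation “signatures” might look like. The signature format, the spoofing heuristic and every threshold below are assumptions made up for this example – nothing here was proposed at the workshop.

```python
from dataclasses import dataclass

@dataclass
class OrderStats:
    """A toy summary of one account's recent trading activity."""
    orders_placed: int
    orders_cancelled: int
    avg_order_size: float

# Hypothetical signature database: pattern name -> predicate over the stats.
# Real systems would use far richer features; these thresholds are invented.
SIGNATURES = {
    # Classic "spoofing": lots of large orders placed, almost all cancelled
    "spoofing": lambda s: (
        s.orders_placed > 100
        and s.orders_cancelled / s.orders_placed > 0.95
        and s.avg_order_size > 1_000
    ),
}

def flag_account(stats: OrderStats) -> list[str]:
    """Return the names of every known manipulation pattern matched."""
    return [name for name, matches in SIGNATURES.items() if matches(stats)]

# Example: an account that placed 500 big orders and cancelled 490 of them
print(flag_account(OrderStats(500, 490, 5_000.0)))  # -> ['spoofing']
```

The obvious weakness, and presumably one of the questions Wellman was left with, is that a signature database can only catch manipulation patterns someone has already seen and catalogued.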

 


 

Beyond trying to come up with actual solutions – which in these sessions seemed weirdly, and horribly, lacking – the organisers hope the Doomsday Workshop will at least help kick start conversations on what needs to happen, raise awareness, and encourage different communities and disciplines to come together for a common good. The organisers are also planning to publish the materials from the closed door sessions in the next few months, says Krauss.

In the meantime DARPA’s Launchbury hopes the presence of policy figures among the participants will foster concrete steps, like agreements on rules of engagement for cyber war, automated weapons and robot troops – something that’s being voted on this year by the United Nations.

For now though, Krauss, the chairman of the board of sponsors of the group behind the Doomsday Clock – a symbolic measure of how close we are to global catastrophe – said some of what he saw at the workshop “informed” his thinking on whether the clock ought to shift even closer to midnight.

“Some things we think of as cataclysmic may turn out to be just fine,” he said.

As for me though, I like to think I have a good imagination, but a rogue, wild, self-evolving, self-replicating malicious AI would be hard to beat as a doomsday scenario. However, in a world where Mark Zuckerberg wants to turn Facebook into the world’s largest telepathic network, and Elon Musk wants to build a Neural Lace to help us all interface with AI, what about a malicious AI virus that can infect humans and interrupt our biological machinery? Or an AI that infects a machine that can create tidal waves? And I’m only getting started… but alas, my doomsday scenarios will have to wait for another time – I have a coffee machine to mend. Noooooooooooo!
