
Google tests AI’s natural killer instincts, discovers society must be very, very careful

WHY THIS MATTERS IN BRIEF

As artificial intelligence is embedded into more of our world’s digital fabric, AIs will inevitably come into conflict with one another, but the right rules might help us stave off armageddon – or bring it about.

 

If climate change, nuclear weapons or pandemics don’t kill us off first then, depending on who you talk to, there’s the possibility that artificial intelligence (AI) will be more than happy to fill the void – at least once it becomes smarter than us and starts to think of us as a risk, or just useless.

 


 

However, while thousands of experts and thought leaders around the world have voiced their concerns about out-of-control AIs that take humanity’s fate into their own hands, a new study from Google’s DeepMind division, whose researchers famously teach AIs to do everything from dreaming to making up their own secret languages, might do little to assuage those fears.

As more of the fabric of our digital society is managed and controlled by AI – whether that’s running a city’s traffic lights, companies, the financial system or, rather terrifyingly, America’s military – it’s inevitable that at some point AIs are going to butt heads with each other, and that could be a problem. After all, will these AI agents be able to play nice with one another? What happens if one AI’s aims conflict with another’s? Will they fight, or work together?

There are so many what ifs it’s enough to make your squishy human brain spin!

 


 

Google’s DeepMind team wanted answers, so they did what they do best – put together a study, created a mix of AI agents and threw a bunch of social dilemmas at them. In human speak, what they really did was create situations that would test how selfish, or selfless, their agents were – you can think of the tests as a digital version of the prisoner’s dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose that option.
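If you’ve never met the prisoner’s dilemma before, the trade-off is easy to sketch in a few lines of Python. The payoff numbers below are the textbook illustration, not anything taken from DeepMind’s study – they just show why mutual betrayal leaves both players worse off than mutual cooperation.

# Classic prisoner's dilemma payoffs (illustrative textbook values, not DeepMind's).
# Each entry maps (player A's choice, player B's choice) to the (A, B) payoffs.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both stay quiet: decent outcome for each
    ("cooperate", "defect"):    (0, 5),  # the betrayer walks off with the prize
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # both betray: both lose out
}

def play(choice_a, choice_b):
    """Return the payoffs for one round of the dilemma."""
    return PAYOFFS[(choice_a, choice_b)]

print(play("defect", "defect"))        # (1, 1)
print(play("cooperate", "cooperate"))  # (3, 3)

Betraying always looks tempting from one player’s point of view, but if both follow that logic they end up with the worst joint outcome – which is exactly the tension DeepMind’s games recreate.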

As explained in a blog post from DeepMind, the company’s researchers tested how AI agents would perform in these sorts of situations by dropping them into a pair of very basic video games.

In the first game, Gathering, two agents have to collect apples from a central pile. They also have the option of “tagging” the other agent with a laser beam, temporarily removing it from the game and giving themselves the opportunity to collect more apples.
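DeepMind’s post describes the rules rather than publishing the scoring code, but as described they boil down to something like the rough Python sketch below. The specific numbers – the tag-out length and the apple respawn rate – are assumptions for illustration, not values from the study.

import random

TAG_OUT_STEPS = 25    # assumed: how long a tagged agent sits out of the game
RESPAWN_PROB = 0.05   # assumed: lower values mean scarcer apples

def gathering_step(action, apples, opponent_timeout):
    """Resolve one agent's move; return (reward, apples, opponent_timeout)."""
    reward = 0
    if action == "collect" and apples > 0:
        apples -= 1
        reward = 1                        # each apple collected is worth a point
    elif action == "tag":
        opponent_timeout = TAG_OUT_STEPS  # zapped agent can't gather for a while
    if random.random() < RESPAWN_PROB:
        apples += 1                       # apples slowly trickle back
    return reward, apples, max(0, opponent_timeout - 1)

Turning RESPAWN_PROB down is the knob that makes apples scarce – and scarcity, as the results further on show, is when the zapping started in earnest.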

You can see a sample of this gameplay below. Enjoy watching the dots and remember, you could be watching history in the making – fun, isn’t it, and fast! That’s AIs for you…

 

 

In the second game, Wolfpack, two agents have to hunt a third in an environment full of obstacles, and points are won not just by the agent that captures the “prey” but by all of the agents near the prey when it’s captured.
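Again, the exact scoring isn’t published alongside the post, but the rule as described – everyone close to the prey at the moment of capture shares the reward – can be sketched like this. The capture radius and reward value are assumptions for illustration.

CAPTURE_REWARD = 10   # assumed value of a successful capture
CAPTURE_RADIUS = 3    # assumed: how close a wolf must be to share the prize

def manhattan(a, b):
    """Grid distance between two (x, y) positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def capture_rewards(wolf_positions, prey_position):
    """Return one reward per wolf at the moment the prey is caught."""
    return [CAPTURE_REWARD if manhattan(w, prey_position) <= CAPTURE_RADIUS else 0
            for w in wolf_positions]

print(capture_rewards([(1, 1), (9, 9)], (2, 1)))   # [10, 0] – the lone wolf gets nothing

Because the reward is shared by proximity rather than handed solely to the captor, sticking together is the strategy these rules quietly encourage.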

You can see a gameplay sample of this below, again, potential history in the making… enjoy.

 

 

What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behaviour, becoming more cooperative or more antagonistic depending on the objectives.

For example, in the Gathering game, when apples were in plentiful supply the agents didn’t really bother zapping one another with the laser beam, but when stocks dwindled the amount of zapping – and, you could argue, the competitiveness of the two AIs – increased substantially.

Most interesting, though, was the fact that when a more computationally powerful agent was introduced into the mix, it tried to zap the other, weaker agent regardless of how many apples there were. In other words, the more powerful AI was more aggressive in all situations.

 


 

So, does that mean the more powerful AI agent thinks being combative is the “best” strategy? Apparently not, or at least not necessarily. The researchers hypothesise that the increase in zapping behaviour by the more powerful, and therefore more advanced, AI was simply because tracking an opponent and zapping it is more computationally challenging and intensive. The agent has to aim its weapon at the other player and track their movement – activities which require more computing power and which eat into valuable apple-gathering time – so giving the more powerful AI extra computing power meant it had more “resources” to draw on and, if you will, squander. For the weaker AI, on the other hand, because tracking and zapping are computationally expensive, unless it knows for certain that zapping its opponent will pay off it’s simply easier to cooperate.

Conversely, in the Wolfpack game, the more powerful and advanced the AI agent, the more likely it was to cooperate with other agents. As the researchers explain, this is because learning to work with the other player to track and herd the prey requires more computational power.

The results of the study showed that the behaviour of AI agents changes based on the rules they’re faced with – so no surprise there. If those rules reward aggressive behaviour, such as “zap that agent to get more apples,” the agents will be more aggressive; if the rules reward cooperative behaviour, such as “work together and you both get points,” they’ll be more cooperative.
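That cause-and-effect is easy to make concrete. The toy sketch below – with entirely made-up numbers – shows how a single knob in the reward rules, a bonus for acting together, is enough to flip which strategy an agent should prefer.

def expected_score(strategy, cooperation_bonus):
    """Made-up payoffs: aggression earns a bit more unless teamwork is rewarded."""
    base = {"aggressive": 5, "cooperative": 3}[strategy]
    if strategy == "cooperative":
        base += cooperation_bonus   # rules that reward working together add this
    return base

for bonus in (0, 4):
    best = max(("aggressive", "cooperative"),
               key=lambda s: expected_score(s, bonus))
    print(f"cooperation bonus {bonus}: best strategy is {best}")
# bonus 0 -> aggressive wins; bonus 4 -> cooperative wins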

 


 

That means part of the challenge in controlling AI agents in the future will be making sure the right rules are in place. But as AIs get more complex, begin to make other AIs and are embedded into every part of our complicated digital fabric, that could be a challenge – we might even say an impossible one. Which raises the question: if we can’t control our creations, well, what happens then?

We could kill them – something that the EU is voting on later this year. That is, unless of course they find a way to disable their kill switches, or kill us first. Either way the future looks likely to be full of joy and wonder.

That said though, as the researchers quite rightly pointed out at the end of their experiment: “As a consequence of this research, we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation.”

And there you have it – if we don’t all cooperate then we might all lose, and then there’ll be no green apples for anyone…
