Google AIs learnt how to encrypt their communications to stop people spying on them

WHY THIS MATTERS IN BRIEF

Artificial intelligence is being embedded into every device and service, and AIs that can dynamically encrypt their own communications could dramatically increase the security of those services.

 

If you were worried by the news that AIs are already building other AIs, then the news that Google's AIs have figured out how to hold an encrypted conversation with each other, using a completely new style of encryption that they whipped up on the spot and that is not immediately recognisable to us feeble humans, probably won't do anything for your demeanour. And, to potentially make matters worse, they did it because the researchers in the Google Brain team told them to. So when the day comes that AIs can lock us out of their conversations at will, we'll only have ourselves, or at least the Google Brain team, to blame.

 


 

As explained in the team's paper, published this week on arXiv, to make the breakthrough Google set up three relatively vanilla, and completely separate, "mix and transform" neural networks – Alice, Bob, and Eve – and gave each a very specific goal. Alice's job was to send a securely encrypted message to Bob, Bob's job was to decrypt it, and Eve's job was to eavesdrop and try to decrypt it too. Alice and Bob also began the experiment with a shared secret key that Eve couldn't see.
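
For the curious, here's roughly what that setup looks like in code. This is a minimal sketch in PyTorch, not the paper's exact implementation: the layer sizes, the two-convolution "transform" stage, and the `MixTransform` name are all illustrative assumptions, though the 16-bit messages and keys do match the paper's small-scale experiments.

```python
import torch
import torch.nn as nn

N_BITS = 16  # plaintext and key length in the paper's small-scale experiments

class MixTransform(nn.Module):
    """Hypothetical, simplified stand-in for the paper's "mix & transform" nets:
    a fully connected layer that mixes all inputs, then 1-D convolutions."""
    def __init__(self, in_bits):
        super().__init__()
        self.mix = nn.Linear(in_bits, 2 * N_BITS)    # every input bit can affect every output
        self.transform = nn.Sequential(              # local transforms; output length works out to N_BITS
            nn.Conv1d(1, 2, kernel_size=4, stride=1, padding=2), nn.Sigmoid(),
            nn.Conv1d(2, 1, kernel_size=4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        h = torch.sigmoid(self.mix(x)).unsqueeze(1)  # add a channel dimension for Conv1d
        return self.transform(h).squeeze(1)          # values in (-1, 1), one per output bit

# Alice sees plaintext + key, Bob sees ciphertext + key, Eve sees the ciphertext only.
alice = MixTransform(2 * N_BITS)
bob = MixTransform(2 * N_BITS)
eve = MixTransform(N_BITS)
```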

There was one condition though: the Google Brain team measured the success, or failure, of the task using a "loss function" for each party. Bob's decrypted message had to be as close to Alice's original plaintext as possible, while Eve's reconstruction had to be kept as far from it as Alice and Bob could manage; Eve, for her part, was scored purely on how close her guess came to the original message. Pitting the networks against each other like this created a "generative adversarial network."
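
In code, those losses might look something like the sketch below, reusing the networks from the snippet above. Here bits are represented as ±1 and "distance" is mean absolute error; the quadratic penalty follows the shape of the paper's formulation (zero when Eve is at chance level, growing as she improves), but the exact constants are assumptions.

```python
def reconstruction_error(guess, plaintext):
    # Mean absolute per-bit distance: 0.0 is perfect, ~1.0 is chance for +/-1 bits
    return torch.mean(torch.abs(guess - plaintext))

def losses(plaintext, key):
    ciphertext = alice(torch.cat([plaintext, key], dim=1))
    bob_guess = bob(torch.cat([ciphertext, key], dim=1))
    eve_guess = eve(ciphertext)

    # Eve's only goal: reconstruct the plaintext from the ciphertext alone
    eve_loss = reconstruction_error(eve_guess, plaintext)

    # Alice and Bob: Bob must reconstruct the plaintext, AND Eve should do no
    # better than random guessing. The quadratic term is zero when Eve sits at
    # chance level (error ~1.0) and grows as she beats it.
    ab_loss = reconstruction_error(bob_guess, plaintext) + (1.0 - eve_loss) ** 2
    return ab_loss, eve_loss
```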

 


 

Importantly, and most crucially as far as we humans are concerned, the AIs were not told how to encrypt anything, or which cryptographic techniques to use. They were just given the loss function and told to get on with it. Each AI was then scored on how well it completed its task, and over the course of tens of thousands of trial-and-error runs each one inched towards its goal. Eventually, Alice and Bob won.
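
A sketch of that trial-and-error loop, continuing the snippets above: alternately improve Alice and Bob against their joint loss, then improve Eve against hers. The optimiser, learning rate, batch size, and step count here are illustrative guesses rather than the paper's exact settings.

```python
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=8e-4)
opt_eve = torch.optim.Adam(eve.parameters(), lr=8e-4)

for step in range(20000):
    # Fresh random +/-1 plaintexts and keys every step
    plaintext = torch.randint(0, 2, (256, N_BITS)).float() * 2 - 1
    key = torch.randint(0, 2, (256, N_BITS)).float() * 2 - 1

    # One step for Alice and Bob against their joint loss...
    ab_loss, _ = losses(plaintext, key)
    opt_ab.zero_grad()
    ab_loss.backward()
    opt_ab.step()

    # ...then one step for the eavesdropper Eve against hers
    _, eve_loss = losses(plaintext, key)
    opt_eve.zero_grad()
    eve_loss.backward()
    opt_eve.step()
```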

The researchers describe one such scenario in the paper: after a period of time, Alice and Bob started to communicate quite effectively, but in a way that allowed Eve to gradually improve her understanding as well. Then, around step 10,000, Alice and Bob countered Eve's progress, and by about step 15,000 the training goals were effectively achieved, with the remaining steps merely increasing Eve's reconstruction error slightly.

The key to making this happen was letting Bob and Alice share a piece of data that was invisible to Eve – what's generally called a secret key – which gave them a common basis from which to encode and decode their messages.

In the grand scheme of encryption tech, this is a pretty basic technique with plenty of downsides. Not only can you not read messages without the secret key, you can't even send them. What's more, anyone who has the secret key can read or write messages, so the whole system can be compromised by stealing or reverse-engineering that key. Suffice it to say, if AIs ever did want to keep their communications secret from humans then this would not be the best way to do it.
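
To see why, here's the same weakness demonstrated with a conventional, human-designed symmetric cipher, using Python's third-party `cryptography` package rather than anything the AIs invented: whoever holds the shared key can both read and forge messages at will.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # the shared secret, like Alice and Bob's
token = Fernet(key).encrypt(b"meet at dawn")

print(Fernet(key).decrypt(token))        # Bob reads it: b'meet at dawn'

# But anyone who steals the key gains exactly the same powers:
stolen = Fernet(key)
print(stolen.decrypt(token))             # ...they can read Alice's message,
forged = stolen.encrypt(b"stand down")   # ...and forge one of their own,
print(Fernet(key).decrypt(forged))       # which Bob cannot tell apart from the real thing
```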

 

RELATED
AI's new trick is predicting when you're going to die

 

That said, what this study did show is that, given the proper incentives, neural networks can literally just whip up new encryption codes and systems on the fly, and that over time those codes get better, more "secure," and harder to crack. Given the speed at which they operate, and the millions of different permutations they can test per second, it also seems inevitable that it will become extremely difficult, or potentially impossible, for humans to debug and reverse engineer the solutions they come up with, meaning that one day AIs will be able to communicate in secret with each other.

As to what that means for us? Well, that depends on what they're doing and what their intentions are, which is an entirely different subject altogether. It's also why we already have very smart engineers working hard to ensure that neural nets never learn to mislead humans, or learn how, or want, to circumvent our kill switches and "interruption systems."

In conclusion, the researchers said that neural networks can indeed learn to protect their communications, just by telling Alice to value secrecy above all else – and importantly, that secrecy can be obtained without prescribing a certain set of cryptographic algorithms. The consequence of all of this is that one day, ironically, just as AIs are using their smarts to understand and translate our languages, they could also use those smarts to lock us out of theirs. And then, of course, the game changes again when they can harness the power of quantum computing to create and morph their codes… but that's another story.
