WHY THIS MATTERS IN BRIEF
Hiding secret messages in AI chats could help people evade surveillance, but it could also become a new cyber security threat vector.
Secret messages can be hidden inside fake conversations generated by Artificial Intelligence (AI) chatbots. The technique could give people a way to communicate online without arousing the suspicion of oppressive governments.
When messages are encrypted for secure transmission, the resulting cipher text – an unusual string of garbled characters – stands out like a sore thumb. That is fine if you are keeping secrets in a country where secrets are allowed, but under brutal dictatorships, this could land a citizen in hot water. Even the UK government recently demanded that Apple install a backdoor in its encrypted system that would allow security services to access private data.
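To see why conventional ciphertext is so conspicuous, consider a minimal Python sketch (our illustration, not the researchers' code) using the widely available cryptography library: the encrypted output is a high-entropy token that looks nothing like ordinary conversation.

```python
# Minimal illustration of why conventional ciphertext stands out.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # symmetric key shared with the recipient
cipher = Fernet(key)

token = cipher.encrypt(b"Meet at the bridge at noon")
print(token.decode())
# Prints something like:
#   gAAAAABl9X2k3v...Qz8=
# A censor scanning traffic doesn't need the key to notice this is not
# ordinary text -- its mere presence signals that secrets are being kept.
```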
To give citizens a safe way to share secrets, Mayank Raikwar at the University of Oslo, Norway, and his colleagues created a system that inserts encrypted messages into plausible-sounding chats created by a Large Language Model (LLM) such as ChatGPT. These passages of text could then be sent over everyday channels such as Facebook, WhatsApp, or email, where they would seem innocuous.
Raikwar says his approach is similar to how spies embedded abroad during the Cold War broadcast encrypted messages openly via shortwave radio on so-called numbers stations, but the new method is sneakier because governments wouldn’t realise the message existed.
“An eavesdropper would not be able to understand whether there’s a hidden message or not,” says Raikwar, who has tested the idea in laboratory conditions, but hasn’t yet released the code for use in the real world. “There might be problems if you start doing it – it’s just a theory. But it would be nice to see if it actually works.”
LLMs generate text one word at a time, using statistical analysis to choose the word most likely to come next. Raikwar and his colleagues modified this selection step, instructing the model to embed the next character of an encrypted message at regular intervals in the generated text.
In this way, the AI picks each word based not only on whether it makes sense in context, but also on whether it places the next encrypted character from the secret message in the correct spot in the sentence. If the AI gets to a point where it can't place the next character without creating suspicious or contrived sentences, it backtracks a few characters and tries again.
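A toy sketch of that embedding loop might look like the following. This is our reconstruction of the general idea, not Raikwar's unreleased code: a stand-in function plays the role of the LLM's probability-ranked word suggestions, the assumed convention is that every third word must start with the next ciphertext character, and a recursive search backtracks whenever no candidate word fits.

```python
# Toy sketch of character embedding with backtracking -- an illustration
# of the general technique, not the researchers' implementation.
# Assumed convention: every STRIDE-th word must START with the next
# character of the ciphertext; a real system would instead steer an
# LLM's probability-ranked candidates to keep the text fluent.

STRIDE = 3  # one hidden character every third word (arbitrary choice)

def ranked_candidates(context):
    """Stand-in for an LLM: plausible next words, best first.
    A real model would rank these by probability given the context."""
    vocab = ["we", "usually", "go", "out", "really", "early", "and",
             "nobody", "under", "rates", "a", "nice", "unhurried",
             "run", "near", "useful", "routes"]
    k = len(context) % len(vocab)
    return vocab[k:] + vocab[:k]     # rotate so the filler words vary

def embed(ciphertext, words=()):
    """Grow a word sequence carrying `ciphertext`; None on a dead end."""
    words = list(words)
    done = len(words) // STRIDE              # characters embedded so far
    if done == len(ciphertext):
        return words                         # whole message is hidden
    is_slot = len(words) % STRIDE == STRIDE - 1
    for w in ranked_candidates(words):
        if is_slot and not w.startswith(ciphertext[done]):
            continue                         # can't carry the character
    # try this word and keep going; undo it if the branch dead-ends
        result = embed(ciphertext, words + [w])
        if result is not None:
            return result                    # this branch worked
    return None                              # backtrack: caller tries its
                                             # next-best earlier word

def extract(words):
    """Recipient side: read the first letter of every STRIDE-th word."""
    return "".join(w[0] for i, w in enumerate(words)
                   if i % STRIDE == STRIDE - 1)

secret = "run"                     # stands in for real ciphertext
cover = embed(secret)
print(" ".join(cover))             # cover text (fluent with a real LLM)
print(extract(cover))              # -> "run"
```

With a real LLM doing the ranking, the same backtracking search would trade a little fluency for the guarantee that each hidden character lands in its slot, which is what keeps the cover text readable while the message rides along inside it.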
Bill Buchanan at Edinburgh Napier University, UK, says the approach would work, but would be a cumbersome way to communicate.
“While covert channels have been around for centuries, they are hardly used in practice as there are normally better ways to pass secret information,” he says. “One would assume that someone who might want to use this type of method would be under surveillance.”