AI is the electricity that powers the world’s digital fabric, and stealing another company’s AI is easier than you might think.



In the field of computer science known as machine learning, engineers often refer to the artificial intelligences (AI) they create as “black box” systems: once a machine learning engine has been trained on a collection of example data to perform anything from facial recognition to malware detection, it can take in queries such as “Whose face is that?” or “Is this app safe?” and spit out answers without anyone, not even its creators, fully understanding the mechanics of the decision making inside that box.




But researchers are increasingly proving that even when the inner workings of those machine learning engines are inscrutable, they aren’t exactly secret. That is why, for example, IBM recently showed off a new technique that lets you watermark your AI to protect it from being stolen or sabotaged.

In fact, they’ve found that the guts of those black boxes can be reverse-engineered and even fully reproduced – stolen, as one group of researchers puts it – with the very same methods used to create them.

In a paper titled “Stealing Machine Learning Models via Prediction APIs,” a team of computer scientists at Cornell Tech, the Swiss institute EPFL in Lausanne, and the University of North Carolina detail how they were able to reverse engineer machine learning-trained AIs based only on sending them queries and analysing the responses. By training their own AI with the target AI’s output, they found they could produce software that was able to predict with near-100 percent accuracy the responses of the AI they’d cloned, sometimes after a few thousand or even just hundreds of queries.
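The query-and-clone loop the researchers describe can be sketched in a few lines. In this toy illustration (the “secret” model, the endpoint, and all names are hypothetical stand-ins, not the paper’s actual targets), a simple linear rule sits behind a label-only prediction API; an attacker sends random queries, trains a substitute on the responses, and measures how often the two models agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a proprietary model behind a prediction API:
# a fixed linear rule the attacker cannot inspect, only query.
SECRET_W = np.array([1.5, -2.0, 0.5])

def query_api(x):
    """Black-box endpoint: returns only a 0/1 label for input x."""
    return int(x @ SECRET_W > 0)

# Step 1: the attacker sends a few thousand random queries.
X = rng.normal(size=(2000, 3))
y = np.array([query_api(x) for x in X])

# Step 2: train a substitute model on the query/response pairs
# (plain logistic regression, fit by gradient descent).
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.1 * (X.T @ (p - y)) / len(X)

# Step 3: measure agreement with the target on fresh inputs.
X_test = rng.normal(size=(1000, 3))
agreement = np.mean([(x @ w > 0) == query_api(x) for x in X_test])
print(f"clone agrees with target on {agreement:.1%} of inputs")
```

On a model this simple, a couple of thousand queries are enough for the clone to match the target on nearly every input, which mirrors the query counts the paper reports for its simpler targets.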




“You’re taking this black box and through this very narrow interface, you can reconstruct its internals, reverse engineering the box,” says Ari Juels, a Cornell Tech professor who worked on the project. “In some cases, you can actually do a perfect reconstruction.”

The trick, they point out, could be used against services offered by companies like Amazon, Google, Microsoft, and BigML that allow users to upload data into machine learning engines and publish or share the resulting model online, in some cases with a pay-by-the-query business model. The researchers’ method, which they call an extraction attack, could duplicate AI engines meant to be proprietary, or in some cases even recreate the sensitive private data an AI has been trained with.

“Once you’ve recovered the model for yourself, you don’t have to pay for it, and you can also get serious privacy breaches,” says Florian Tramer, the EPFL researcher who worked on the AI-stealing project before taking a position at Stanford.




In other cases, the technique might allow hackers to reverse engineer, and then defeat, machine learning based cyber security systems meant to filter spam and malware, as I’ve discussed before, Tramer adds.

“After a few hours’ work…you’d end up with an extracted model you could then evade if it were used on a production system.”

The researchers’ technique works by essentially using machine learning itself to reverse engineer machine learning software. To take a simple example, a machine learning-trained spam filter might put out a simple spam or not-spam judgment of a given email, along with a “confidence value” that reveals how likely it is to be correct in its decision.

That answer can be interpreted as a point on either side of a boundary that represents the AI’s decision threshold, and the confidence value shows its distance from that boundary. Repeatedly trying test emails against that filter reveals the precise line that defines that boundary. The technique can be scaled up to far more complex, multidimensional models that give precise answers rather than mere yes-or-no responses – the trick even works when the target machine learning engine doesn’t provide those confidence values, the researchers say, but requires tens or hundreds of times more queries.
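For the simplest class of model, that boundary can be recovered not just approximately but exactly. Here is a minimal sketch, assuming a hypothetical logistic-regression spam filter whose API returns a confidence score with each verdict: because the log-odds of that score is a linear function of the input, a handful of well-chosen queries pins the model down by solving equations rather than by retraining (this equation-solving trick is one of the attacks the paper describes for logistic regression):

```python
import numpy as np

# Hypothetical target: a logistic-regression spam filter whose API returns
# a confidence score, P(spam | x), alongside each verdict.
SECRET_W = np.array([2.0, -1.0, 0.5])
SECRET_B = -0.3

def query_confidence(x):
    """Black-box endpoint: returns the model's confidence P(spam | x)."""
    return 1.0 / (1.0 + np.exp(-(x @ SECRET_W + SECRET_B)))

# The log-odds of the returned confidence is linear in the input:
#   log(p / (1 - p)) = w . x + b
# so for a d-dimensional input, d + 1 queries recover w and b exactly.
d = 3
X = np.vstack([np.zeros(d), np.eye(d)])          # the origin plus unit vectors
probs = np.array([query_confidence(x) for x in X])
logits = np.log(probs / (1 - probs))

b_hat = logits[0]        # the logit at the origin is the bias
w_hat = logits[1:] - b_hat  # each unit-vector logit minus the bias is one weight

print("recovered weights:", w_hat, "bias:", b_hat)
```

Four queries suffice here because the model is linear in three features; the “tens or hundreds of times more queries” figure applies when the confidence values are withheld and the attacker must probe the boundary with labels alone.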




The researchers tested their attack against two services – Amazon’s machine learning platform and the online machine learning service BigML. They tried reverse engineering AI models built on those platforms from a series of common data sets. On Amazon’s platform, for instance, they tried “stealing” an algorithm that predicts a person’s salary based on demographic factors like their employment, marital status, and credit score, and another that tries to recognise numbers from images of handwritten digits. In the demographics case they found that they could reproduce the model without any discernible difference after 1,485 queries, and after just 650 queries in the digit-recognition case.

On the BigML service, they tried their extraction technique on one algorithm that predicts German citizens’ credit scores based on their demographics and on another that predicts how people like their steak cooked – rare, medium, or well done – based on their answers to other lifestyle questions. Replicating the credit score engine took just 1,150 queries, and copying the steak preference predictor took just over 4,000.

Not every machine learning algorithm is so easily reconstructed though, says Nicolas Papernot, a researcher at Penn State University who worked on another machine learning reverse engineering project earlier this year. The examples in the latest AI-stealing paper reconstruct relatively simple machine learning engines. More complex ones might take far more computation to attack, he says, especially if machine learning interfaces learn to hide their confidence values.




“If machine learning platforms decide to use larger models or hide the confidence values, then it becomes much harder for the attacker,” Papernot says. “But this paper is interesting because they show that the current models of machine learning services are shallow enough that they can be extracted.”

In a statement, BigML’s vice president of predictive applications Atakan Cetinsoy downplayed the research, writing that “it does not expose or represent any security or privacy threat to BigML’s platform at all.” He argued that while BigML does allow users to share black-box AI engines on a pay-per-query basis, none of the service’s users are currently charging for their shared AI engines. He also echoed Papernot’s point that many of the machine learning models hosted on BigML would be too complex to reverse engineer, and pointed out that any theft of the service’s models would also be illegal.

Amazon, meanwhile, declined to comment publicly on the researchers’ work, but when the researchers contacted the companies, they say Amazon responded that the risk of the AI-stealing attacks was reduced by the fact that Amazon doesn’t make its machine learning engines public, instead only allowing users to share access among collaborators. In other words, the company warned: take care who you share your AI with.




Aside from merely stealing AI, the researchers warn that their attack also makes it easier to reconstruct the often sensitive data an AI has been trained on. They point to another paper that showed it’s possible to reverse engineer a facial recognition AI that responds to images with guesses of the person’s name.

That method sent the target AI repeated test pictures, tweaking the images until it homed in on the pictures the machine learning engine had been trained on, reproducing the actual face images without the researchers’ computer ever having seen them. By first performing their AI-stealing attack and then running the face-reconstruction technique on their own stolen copy of the AI, on a computer they controlled, they showed they could reassemble the face images far faster: 40 distinct faces in just 10 hours, compared with 16 hours when they performed the facial reconstruction on the original AI engine.

The notion of reverse engineering machine learning engines, in fact, has been advancing in the AI research community for months. In February another group of researchers showed they could reproduce a machine learning system with about 80 percent accuracy, compared with the near-100 percent success of the Cornell and EPFL researchers. Even then, they found that by testing inputs on their reconstructed model, they could often learn how to trick the original. When they applied that technique to AI engines designed to recognize numbers or street signs, for instance, they found they could cause the engine to make incorrect judgments in between 84 percent and 96 percent of cases.
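That transferability trick can be sketched with a toy model (everything here is a hypothetical stand-in; real attacks on image classifiers use gradient-based perturbations against far more complex models): craft an input that evades your stolen copy, and because the copy closely mirrors the original, the input tends to evade the original too.

```python
import numpy as np

# Hypothetical original detector the attacker wants to evade (weights unknown).
TARGET_W = np.array([1.0, -0.8, 0.6])

def target_flags(x):
    """Black-box detector: True means the input is flagged (e.g. as malware)."""
    return x @ TARGET_W > 0

# Suppose an extraction attack has already produced a close copy of the
# model (simulated here as the true weights plus a little noise).
stolen_w = TARGET_W + 0.02 * np.random.default_rng(1).normal(size=3)

# Craft an evasion: start from a flagged input and push it against the
# stolen model's weight vector until the *stolen* copy confidently
# stops flagging it (overshooting the boundary by a safety margin).
x = np.array([1.0, 0.0, 1.0])
assert target_flags(x)                     # initially flagged
step = 0.1 * stolen_w / np.linalg.norm(stolen_w)
while x @ stolen_w > -0.5:
    x = x - step

# Because the stolen model closely mirrors the original, the perturbed
# input evades the original detector as well (transferability).
print("original detector still flags it:", target_flags(x))
```

The attacker never needed the original’s weights: every gradient step was computed against the stolen copy, offline and for free.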




The latest research into reconstructing machine learning engines could make that deception even easier. And if that machine learning is applied to security- or safety-critical tasks like self-driving cars or filtering malware, the ability to steal and analyse them could have troubling implications. Black box or not, it may be wise to consider keeping your AI out of sight.

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation, and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
