
WHY THIS MATTERS IN BRIEF

As more of the world comes to rely on AI, adversarial attacks pose a particular challenge to the companies and people in critical industries who are responsible for the safety of people and systems.

 


Adversarial attacks, where hackers use the way that Artificial Intelligence (AI) sees the world against it in order to trick it into doing things it shouldn’t, are getting more widespread, whether that’s tricking Teslas into speeding into oncoming traffic or tricking hospital systems into giving patients the all clear when in fact they have terminal cancer.

 


 

Now, in response to this increasing threat, a team of researchers from the University of Illinois (UI) has devised a new way to train AIs in an attempt to protect them from these attacks, and as we rely on AI more and more in our daily lives it’s no exaggeration to say that their work could help save lives.

Today, most adversarial research focuses on image recognition systems, but deep learning based image reconstruction systems have also shown themselves to be vulnerable to these attacks. This is particularly troubling in healthcare, where such systems are often used to reconstruct medical images like CT or MRI scans from X-ray data, and where a targeted adversarial attack could cause a system to reconstruct a tumor in a scan where there isn’t one, or, as mentioned above, vice versa, giving doctors and patients false information that could prove fatal.
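To make the threat concrete, here is a minimal sketch, in PyTorch, of how such a targeted attack might work: a small perturbation of the input data is optimised so that a reconstruction network produces a chosen, doctored image instead of the true one. The `recon_net` model, the measurements `y`, and the doctored `target` image are illustrative assumptions, not the researchers’ actual setup.

```python
import torch

def targeted_attack(recon_net, y, target, eps=0.01, steps=40, lr=1e-3):
    """Optimise a small perturbation of the measurements `y` so that the
    network reconstructs `target` (e.g. an image containing a fake tumor)."""
    delta = torch.zeros_like(y, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        recon = recon_net(y + delta)                        # reconstruct from perturbed data
        loss = torch.nn.functional.mse_loss(recon, target)  # pull the output towards the fake target
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                        # keep the perturbation imperceptibly small
    return (y + delta).detach()                             # the poisoned measurements
```

Because the perturbation is bounded by `eps`, the tampered measurements can look essentially identical to the originals while still steering the reconstruction.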

 


 

As part of their research Bo Li and her colleagues at UI have proposed a new method for training these deep learning systems so they’re more robust, and therefore more trustworthy, in what they call “safety critical scenarios.”

During their research they pitted the neural network responsible for image reconstruction against another neural network responsible for generating adversarial examples, in a style similar to Generative Adversarial Network (GAN) algorithms. Over iterative rounds the adversarial network attempts to fool the reconstruction network into producing things that aren’t part of the original data, or “ground truth,” while the reconstruction network continuously tweaks itself to avoid being fooled, which in turn makes it safer to use in the real world.
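For readers who want a feel for how this kind of training plays out in code, here is a minimal sketch, in PyTorch, of one adversarial training round. It assumes `recon_net` is the reconstruction network, `attack_net` generates perturbations, and the loader yields (measurement, ground truth) pairs; it illustrates the general idea rather than the team’s actual implementation.

```python
import torch
import torch.nn.functional as F

def adversarial_training_round(recon_net, attack_net, loader, r_opt, a_opt, eps=0.01):
    """One round of the GAN-style game: the attacker learns to fool the
    reconstructor, then the reconstructor learns to resist the attack."""
    for y, ground_truth in loader:
        # 1) Attacker step: craft a bounded perturbation that pushes the
        #    reconstruction away from the ground truth.
        delta = eps * torch.tanh(attack_net(y))
        attack_loss = -F.mse_loss(recon_net(y + delta), ground_truth)
        a_opt.zero_grad()
        attack_loss.backward()
        a_opt.step()

        # 2) Reconstructor step: tweak the network so it still recovers the
        #    ground truth from the perturbed measurements.
        delta = eps * torch.tanh(attack_net(y)).detach()
        recon_loss = F.mse_loss(recon_net(y + delta), ground_truth)
        r_opt.zero_grad()
        recon_loss.backward()
        r_opt.step()
```

The two networks pull in opposite directions on the same reconstruction error, which is what gives the training its GAN-like, game-playing character.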

And as for the results? When the researchers tested their adversarially trained neural network on two popular image datasets, it was able to reconstruct the ground truth better than other neural networks that had been “fail proofed” using different methods.
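A comparison like that can be scored with something as simple as the mean reconstruction error each network incurs on the same adversarially perturbed inputs; the sketch below assumes hypothetical `recon_net`, `attacked_measurements`, and `ground_truths` objects and is not the researchers’ evaluation code.

```python
import torch
import torch.nn.functional as F

def robustness_score(recon_net, attacked_measurements, ground_truths):
    """Mean reconstruction error (MSE) over adversarially perturbed inputs;
    a lower score means the network stays closer to the ground truth."""
    with torch.no_grad():
        errors = [
            F.mse_loss(recon_net(y_adv), gt).item()
            for y_adv, gt in zip(attacked_measurements, ground_truths)
        ]
    return sum(errors) / len(errors)

# Compare, e.g., robustness_score(adv_trained_net, ...) against robustness_score(baseline_net, ...)
```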

 


 

The results still aren’t perfect, and this is obviously a work in progress, but it’s a start. The work will be presented next week at the International Conference on Machine Learning (ICML).

 

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award winning futurist, and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
