Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures think tank whose research spans 2020 to 2070, an award-winning futurist, and the author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, BlackRock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
Humans can no longer keep up with the tsunami of cyber threats, but robo-hackers can.
Could you invent an autonomous hacking system that finds and fixes vulnerabilities in computer systems before criminals can exploit them – without any human involvement?
That’s the challenge faced by seven teams competing in DARPA’s Cyber Grand Challenge this August. It was always inevitable that human hackers would one day give way to fully autonomous AI cyber hackers in the ether, and today marks the first step on what might prove to be a slippery slope into oblivion.
Each of the teams has already won $750,000 for qualifying and must now pit its hacking system against six others in a game of “capture the flag.” The software must attack the other teams’ vulnerabilities while finding and fixing weaknesses in its own software – all without sacrificing its performance and functionality. The winning team will walk away with $2 million.
“Fully automated hacking systems are the final frontier. Humans can find vulnerabilities but can’t analyse millions of programs,” explained Giovanni Vigna, a professor of computer science at University of California Santa Barbara, speaking at the RSA security conference in San Francisco.
Vigna is also the founder of hacking team Shellphish, which has built one of the systems, dubbed Mechanical Phish, that will compete in the Cyber Grand Challenge.
“Hacking is usually just a bunch of guys around a table who are very tired, just typing on a laptop,” Vigna said, adding that it’s “not as sexy” as hacking portrayed in the movies. “We do this because we either want to attack somebody, hack defensively to find bugs before they are deployed, or because it’s fun.”
Robo-hackers could be incredibly useful for organizations defending their networks, quickly identifying and patching problems before anyone exploits them to steal data or disrupt online services – all without having a team of highly skilled human “uber-hackers” in house.
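To make the find-before-they-exploit idea concrete, here is a deliberately minimal sketch of automated vulnerability hunting. It is not from the Cyber Grand Challenge – the target function, its planted bug, and all names are invented for illustration – but it shows the core loop a robo-hacker automates: generate inputs, throw them at a target, and record any that crash it.

```python
import random
import string

def parse_header(data: str) -> str:
    """Hypothetical target with a planted bug: it crashes on inputs containing '%%'."""
    if "%%" in data:
        raise ValueError("malformed escape sequence")  # the 'vulnerability'
    return data.upper()

def fuzz(target, trials=20000, seed=42):
    """Throw randomly generated strings at `target`; collect any inputs that crash it."""
    rng = random.Random(seed)
    # '%' is over-represented so the planted bug is reachable in a short demo run
    alphabet = string.ascii_letters + "%" * 6
    crashes = []
    for _ in range(trials):
        candidate = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 8)))
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)  # a human (or a patcher) triages these later
    return crashes

crashes = fuzz(parse_header)
```

Real systems such as those in the Grand Challenge go much further – they analyse the crashing inputs, locate the underlying flaw, and generate a patch automatically – but the discover-by-trying loop above is the starting point.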
Outside of the Cyber Grand Challenge, other groups are working on hacking machines powered by artificial intelligence.
Konstantinos Karagiannis, chief technology officer of BT Americas, has been building a hacking system that uses neural networks to simulate the way the human brain learns and solves problems.
He described how an artificially intelligent program called MarI/O was able to learn an entire level of Super Mario World in just 34 tries – with no prior knowledge. The software wasn’t taught anything about how to play the game; it simply had a few parameters set. MarI/O tried different things it “thought” would work, and when they did, it “learned”.
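The trial-and-error loop Karagiannis describes can be sketched in a few lines. The toy below is not MarI/O (which evolves neural networks); it is a hypothetical, stripped-down illustration of the same idea: an agent that knows nothing about its task except a score, mutates its parameters at random, and keeps whatever change scores better – when a try works, it “learns”.

```python
import random

def score(params):
    """Stand-in for 'how far through the level did this attempt get'.
    The hidden target is unknown to the agent; only the score is visible."""
    target = [3.0, -1.5, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def learn(attempts=34, seed=0):
    """Trial and error: mutate the current best guess; keep mutations that score better."""
    rng = random.Random(seed)
    best = [0.0, 0.0, 0.0]          # no prior knowledge
    best_score = score(best)
    for _ in range(attempts):
        candidate = [p + rng.gauss(0, 1.0) for p in best]  # try something different
        s = score(candidate)
        if s > best_score:           # it worked, so it 'learned'
            best, best_score = candidate, s
    return best, best_score

best, best_score = learn()
```

Swap the score function for “does this input expose a flaw in the target program” and the same loop becomes the creative security scanner Karagiannis describes next.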
“Using this approach a security scanner could identify intricate flaws using creative approaches you would have never thought of,” explained Karagiannis. “And it can be written with very modest hardware. A $1,000 GPU can outrun a supercomputer that used to fill a building 10 years ago.”
Karagiannis hopes to demonstrate a proof-of-concept by the summer of 2016.
While robo-hackers could provide security professionals with a valuable weapon in their armoury, the risk is that they could fall into the wrong hands. Karagiannis told us that he wouldn’t be surprised if criminal hackers had appropriated these techniques “within a year”.
Alex Rice, co-founder of security company HackerOne, agrees.
“Anything that can be used to defensively find vulnerabilities can be used by criminals – they all end up becoming a double-edged sword,” he said.
Despite this, Rice thinks the rise of automation in security is a good thing.
“Everybody is struggling to keep up. There’s not a single organization that hasn’t had a compromise that was life-threatening, so clearly everything we’re doing is failing.”
The best solution is to combine the skills of humans with machines. “Humans are much better at what we haven’t figured out yet,” he said.
“Until we have fully sentient machines, they still have to be instructed by humans.”