Matthew Griffin, award-winning futurist working between the dates of 2020 and 2070, is described as “The Adviser behind the Advisers” and a “Young Kurzweil.” Regularly featured in the global press, including the BBC, CNBC, Discovery and RT, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew sits on several boards, and his recent work includes mentoring Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest manufacturers ideate the next 20 years of intelligent devices and machines. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, BlackRock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PwC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
With cybersecurity at the top of most people’s minds, how do you know how good the cybersecurity product you just bought to protect your company or home devices really is?
Cybersecurity is obviously front of mind for a lot of people, which is one of the reasons why the US DOD recently installed the world’s first “autonomous capable” defensive AI “Robo-Hacker” at the heart of its critical systems, and why it was only last year that the US Government finally got round to creating a “Cyber DEFCON” scale. But when the founders of a new non-profit, the Cyber Independent Testing Lab (Cyber ITL), started trying to assess the effectiveness of cybersecurity software, in order to develop a scoring system that would rate programs according to which security features they used, they encountered a “mind-blowing” problem: no one had ever measured how well such features actually worked.
“There haven’t been a lot of studies that look at how effective the safety measures that we use and trust actually are,” said Sarah Zatko, co-founder of the Cyber ITL, at a recent DEF CON session, and that gap, she said, “helped create space for the relatively high proportion of ‘snake oil’ products in the cybersecurity market.”
“In most other industries that sort of data, about how well different security measures worked relative to each other, would be pretty fundamental — something you could take for granted,” said Zatko, whose husband is co-founder Peiter Zatko. But when the Zatkos began trying to assemble a single numeric score from the various security factors they were measuring they realized they had no basis to weigh the significance of the different measures against one another — because no one ever seemed to have rated their effectiveness.
The 100-plus technical measures Cyber ITL was examining included Address Space Layout Randomisation (ASLR), a memory protection process for computer operating systems that guards against buffer overflow attacks by randomising the locations where executables are loaded into memory; Data Execution Prevention (DEP), which marks certain memory regions as non-executable so that injected data cannot be run as code; and stack hardening, which helps protect against certain kinds of stack-smashing attacks.
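To make those measures concrete, here is a minimal sketch of the kind of static check Cyber ITL’s analysis performs: inspecting a compiled binary for evidence that ASLR and DEP were enabled when it was built. This is not Cyber ITL’s actual tooling, and it assumes a 64-bit little-endian Linux ELF binary; real analysers cover far more formats and features.

```python
import struct

# ELF constants (from the System V ABI / ELF-64 specification)
PT_GNU_STACK = 0x6474E551  # program header describing stack permissions
PF_X = 0x1                 # "executable" segment flag
ET_DYN = 3                 # position-independent executable or shared object

def check_hardening(data: bytes) -> dict:
    """Report two build-time hardening features for a 64-bit LE ELF image.

    - aslr_pie: the binary is position-independent (ET_DYN), so the OS
      can load it at a randomised address, making ASLR effective.
    - dep_nx:   the PT_GNU_STACK header exists and lacks the execute
      flag, so the stack is non-executable (DEP/NX).
    """
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_type lives at byte offset 16 of the ELF header.
    e_type = struct.unpack_from("<H", data, 16)[0]
    pie = e_type == ET_DYN
    # e_phoff (offset 32), e_phentsize (54) and e_phnum (56) locate the
    # program header table in a 64-bit ELF header.
    e_phoff, = struct.unpack_from("<Q", data, 32)
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 54)
    nx = False
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type, p_flags = struct.unpack_from("<II", data, off)
        if p_type == PT_GNU_STACK:
            nx = not (p_flags & PF_X)  # stack marked non-executable?
    return {"aslr_pie": pie, "dep_nx": nx}
```

On a Linux system you could run, for example, `check_hardening(open("/bin/ls", "rb").read())` to see whether a stock system binary was built with these protections.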
“I had never realized what a weird blind spot this was until I needed that data for this effort,” Zatko said, “it’s just sort of mind-blowing,” adding that filling this gap was now the top priority for the non-profit.
“The end goal is of course to make the software industry safer and easier for people to navigate,” she explained.
Using their own data from both static and dynamic software analysis, the husband and wife team will conduct “studies about how impactful the different security elements that we’re looking at are and how much should they affect the final score.”
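The weighting problem the Zatkos describe can be sketched in a few lines: collapsing many per-feature measurements into one score requires a weight for each feature, and without effectiveness studies those weights are guesses. The feature names and weights below are purely hypothetical, chosen only to illustrate the arithmetic, not Cyber ITL’s actual methodology.

```python
def security_score(features, weights):
    """Weighted percentage of hardening features a product enables.

    features: dict mapping feature name -> bool (present or not)
    weights:  dict mapping feature name -> relative importance;
              these are exactly the numbers that require the kind of
              effectiveness studies the article says are missing.
    """
    total = sum(weights.values())
    earned = sum(w for name, w in weights.items() if features.get(name))
    return 100.0 * earned / total

# Hypothetical weights: is ASLR really worth 3x stack hardening?
# Without measured effectiveness data, nobody can defend these numbers.
weights = {"aslr": 3.0, "dep": 2.0, "stack_hardening": 1.0}
score = security_score(
    {"aslr": True, "dep": True, "stack_hardening": False}, weights
)
```

The point of the sketch is that changing the weights changes which products “win,” which is why the Zatkos want the weights grounded in studies before publishing any scores.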
That way, she said, when they do get around to publishing their security scores they will be “numbers we’re ready to get into a fight over … things that are really solid and that we really stand behind.”
“It will be a big deal,” she promised, explaining that there were many in the security field who shared their concern about the lack of quantitative metrics, which makes it hard to show the value of a particular security control, technique or program.
“We’re not the only ones who are frustrated about the lack of quantification of impact for any of the security measures we hope to make industry standards,” she said, and for a very good reason.
“I believe that this gap in the body of existing research is at least partially responsible for the success of snake oil salesmen in the cybersecurity industry, because for the stuff that really has value and substance and works, and for the snake oil, the same argument is being made – ‘I’m an expert and this works, trust me.’ Without authoritative metrics, non-experts don’t have the data they need for a rational decision making process and so whichever one is better marketed and prettier is the one they go for.”
Zatko also said that measuring the effectiveness of security was hard, and wouldn’t earn you any friends.
“It’s not sexy, it’s not exciting, and there are enough other problems to work on, so people don’t do it.”