Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures think tank working between 2020 and 2070, and is an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
As criminals continue to weaponise artificial intelligence they are inevitably going to find new ways to exploit weaknesses in existing security systems.
Just like any lock can be picked if you have the right key, any biometric scanner can be fooled. For years now researchers have shown that some of the most popular biometric sensors, such as the ones found in Samsung’s popular Galaxy range of smartphones, can be tricked using fingerprints lifted from photographs, by simply printing out a photo of someone’s eyes, or by morphing two different faces together. But a new discovery from computer scientists at New York University raises the stakes significantly: the group used Artificial Intelligence (AI) to create fake fingerprints, called DeepMasterPrints, that not only dupe smartphone sensors but can successfully masquerade as prints from numerous different people. In short, they used AI to create the fingerprint equivalent of a master key, and needless to say it has some people worried about AI’s potential to crack a range of biometric security systems, just as another team elsewhere recently created an AI hacking tool called PassGAN that can guess millions of passwords per minute.
The work builds on research into the concept of a “Master print” that combines the traits commonly found in most fingerprints. Last year the team began exploring the possibility of creating master prints by manually identifying the individual fingerprint traits that could be combined to make a single master fingerprint that authenticates multiple people. And this year they’ve vastly increased the speed of their algorithm and the number of different combinations it can create, churning out master prints like a dairy farmer churns out butter in the summertime.
“Even if a biometric system has a very low false acceptance rate for real fingerprints, they now have to be fine-tuned to take into account synthetic fingerprints, too,” says Philip Bontrager, a PhD candidate at NYU who led the research. “Most systems haven’t been hardened against a synthetic or artificial fingerprint attack, so it’s something new on the algorithmic side of things that people designing sensors have to be aware of now.”
The research works by capitalising on the shortcuts that mobile devices take when scanning a user’s fingerprint. The sensors are small enough that they can only “see” part of your finger at any given time. As such, they make some assumptions based on a snippet, which also means that fake fingerprints likely need to satisfy fewer variables in order to be able to trick them.
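To make that concrete, here is a toy sketch of why accepting a match against any stored snippet widens the attack surface: the matching function, tolerance, and feature vectors below are invented for illustration and bear no relation to any real sensor’s internals.

```python
# Toy model of partial-print matching: at enrollment the phone stores
# several small snippets of the finger, and at unlock it accepts if the
# presented snippet matches ANY stored one. All values are illustrative.
def matches(snippet_a, snippet_b, tolerance=0.1):
    # Two snippets "match" if every feature differs only slightly.
    return all(abs(x - y) <= tolerance for x, y in zip(snippet_a, snippet_b))

def sensor_accepts(presented, enrolled_snippets):
    # Accepting on ANY stored snippet means each extra snippet gives an
    # attacker one more target to satisfy with a fake print.
    return any(matches(presented, s) for s in enrolled_snippets)

enrolled = [[0.2, 0.4, 0.6], [0.7, 0.1, 0.3], [0.5, 0.5, 0.9]]
print(sensor_accepts([0.21, 0.39, 0.62], enrolled))  # → True
print(sensor_accepts([0.9, 0.9, 0.9], enrolled))     # → False
```

Because a fake only needs to satisfy one small snippet rather than the whole finger, each additional stored snippet makes the sensor easier, not harder, to fool.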
The researchers trained neural networks on thousands of images of real fingerprints so the system could begin to output a variety of realistic snippets. Then they used a technique called “Evolutionary optimisation” to assess what would pass as a master print – with every characteristic as familiar and convincing as possible – to guide the output of the neural networks.
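The overall search loop can be sketched in miniature: evolve the latent input of a trained generator so that its output matches as many enrolled templates as possible. The sketch below substitutes a trivial mixing function for the trained neural network and a crude similarity score for a real matcher; only the shape of the evolutionary search reflects the technique described above.

```python
import random

# Stand-in for a trained generator: maps a latent vector to a
# "fingerprint" feature vector. In the real work this is a neural
# network trained on fingerprint images; here it just mixes the inputs.
def generator(latent):
    return [0.5 * z + 0.5 * latent[(i + 1) % len(latent)]
            for i, z in enumerate(latent)]

def similarity(a, b):
    # Crude similarity score in [0, 1]: 1 minus mean absolute difference.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def fitness(latent, enrolled, threshold=0.8):
    # Fitness = how many enrolled templates the synthetic print matches.
    synthetic = generator(latent)
    return sum(similarity(synthetic, t) >= threshold for t in enrolled)

def evolve_master_print(enrolled, dims=8, generations=200, seed=0):
    # Simple hill-climbing stand-in for evolutionary optimisation:
    # mutate the latent vector, keep the child if it matches at least
    # as many enrolled templates as the current best.
    rng = random.Random(seed)
    best = [rng.random() for _ in range(dims)]
    best_fit = fitness(best, enrolled)
    for _ in range(generations):
        child = [min(1.0, max(0.0, z + rng.gauss(0, 0.05))) for z in best]
        child_fit = fitness(child, enrolled)
        if child_fit >= best_fit:
            best, best_fit = child, child_fit
    return best, best_fit

rng = random.Random(1)
enrolled = [[rng.random() for _ in range(8)] for _ in range(50)]
latent, matched = evolve_master_print(enrolled)
print(matched)  # number of enrolled templates the evolved print matches
```

The key idea is that the evolutionary step never edits the fingerprint image directly, only the latent vector, so every candidate the search produces still looks like a plausible print to the generator.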
The researchers then tested their synthetic fingerprints against the popular VeriFinger matcher which is used in a number of consumer and government fingerprint authentication schemes worldwide, and two other commercial matching platforms, to see how many identities their synthetic prints matched with.
Fingerprint matchers can be set with different levels of security in mind. A top secret weapons facility would want the lowest possible chance of a false positive, while a regular consumer smartphone fingerprint sensor would want to keep obvious frauds out, but not be so sensitive that it frequently rejects the actual owner.
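That trade-off can be illustrated by sweeping a match threshold over simulated score distributions. The numbers below are purely illustrative stand-ins for a real matcher’s genuine and impostor scores, chosen only to show how tightening the threshold trades false accepts for false rejects.

```python
import random

rng = random.Random(42)

# Illustrative similarity scores: "impostor" scores come from comparing
# prints of different people, "genuine" scores from the same person.
impostor_scores = [rng.gauss(0.3, 0.1) for _ in range(10000)]
genuine_scores = [rng.gauss(0.8, 0.1) for _ in range(10000)]

def rates(threshold):
    # False acceptance rate: impostors scoring above the threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # False rejection rate: genuine users scoring below the threshold.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

for t in (0.5, 0.6, 0.7):
    far, frr = rates(t)
    print(f"threshold={t}: FAR={far:.4f}, FRR={frr:.4f}")
```

Raising the threshold always pushes the false acceptance rate down and the false rejection rate up, which is exactly the dial that separates a weapons facility from a consumer smartphone.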
Against a strict setting, such as the ones used by the NSA or the US Government, the research team’s master prints matched with anywhere from 2 to 3 percent of the records, and on the different commercial platforms they matched up to about 20 percent, depending on which prints they tested.
Overall, though, the master prints got 30 times more matches than the average real fingerprint, even at the highest security settings, where the master prints didn’t perform particularly well. Think of a master print attack, then, like a password dictionary attack, in which hackers don’t need to get it right in one shot, but instead systematically try common combinations to break into an account.
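The dictionary-attack analogy is easy to quantify: if each master print independently matches some fraction of users, trying several of them in sequence compounds the odds. The formula and figures below are a back-of-the-envelope illustration using the low-end match rates mentioned above, not results from the paper.

```python
# If one master print matches a fraction p of enrolled users, presenting
# k independent prints succeeds against a given user with probability
# 1 - (1 - p) ** k. Figures below are illustrative, not from the paper.
def success_probability(p, k):
    return 1 - (1 - p) ** k

# e.g. five attempts (a common smartphone lockout limit) with prints
# that each match 3 percent of users:
print(round(success_probability(0.03, 5), 3))  # → 0.141
```

Even a modest 3 percent per-print match rate compounds to roughly a 14 percent chance of unlocking a given device within the usual five-attempt limit, which is why the dictionary-attack framing worries researchers.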
The researchers note that they did not make capacitive printouts or other replicas of their master prints, which means they didn’t attempt to unlock real smartphones, something Anil Jain, a biometrics researcher at Michigan State University who did not participate in the project, sees as a real shortcoming. Despite that, he says the strength of the work is in the machine learning techniques it developed.
“The proposed method works much better than the earlier work and it points to new and interesting ways that biometric systems might one day be compromised,” Jain says.
The NYU researchers plan to continue refining their methods. They hope to raise awareness in the biometrics industry about the importance of defending against synthetic biometric attacks, and they suggest that developers should start testing their devices against synthetic prints as well as real ones to make sure their proprietary systems can spot the phonies. The group notes that it has only begun to scratch the surface of understanding exactly how master prints succeed in tricking scanners; it’s possible that sensors could increase their fidelity or depth of analysis in order to defeat them.
“Even as these synthetic measures get better and better, if you’re paying attention to it you should be able to design systems with higher and higher resolution that aren’t easily attacked,” Bontrager says. “But it will affect those systems’ cost and design.”