Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working between 2020 and 2070, and an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
The greatest danger we face is underestimating the power of today’s, let alone tomorrow’s, technology. As we enter an era of persistent online and offline surveillance it is becoming easier to see how one day police forces around the world could create a Minority Report-style “Pre-crime” unit – or how governments could use technology to “enforce” a status quo.
You know when you look at someone in the street, and, even though you know nothing about them, something about their look makes you cross the road to avoid them? Or when you look at police mug shots and subconsciously think “Yeah, they look like the criminal type”?
Well, now a pair of Chinese researchers have dived straight into this controversial area by trying to see whether artificial intelligence (AI) can determine who’s a criminal and who isn’t – using nothing more than a photograph.
So, can an AI tell if you are a criminal just by analysing your face? And then, the more dangerous question – if it can, could it go one step further and determine an innocent person’s predisposition to criminality, and how likely they are to, eventually, commit a crime?
The answer, even though it comes with a lot of questions and horrible moral implications, seems to be yes, and if it’s true then that could have a huge impact on society, security and policing in the future.
In their paper, entitled “Automated Inference on Criminality using Face Images,” Xiaolin Wu and Xi Zhang from China’s Shanghai Jiao Tong University investigated whether an AI could tell whether a person was a convicted criminal just by analysing his or her facial features from a photograph, and the results came back positive. The pair even say that they have discovered a new “law” governing “the normality for faces of non-criminals.”
They described the idea of using algorithms that can match and exceed a human’s capability to recognise facial patterns to infer criminality as an “irresistible challenge.”
The images the pair used in the research were standard ID photographs of Chinese males between the ages of 18 and 55, with no facial hair, scars, or other markings. Wu and Zhang stressed that the ID photos used weren’t police mugshots, and that of the sample of 1,856 photos, 730 of the individuals were convicted criminals and 235 of them had committed violent crimes “including murder, rape, assault, kidnap, and robbery.” The pair also removed photos that contained “any subtle human factors” from the assessment process to try to eliminate human bias.
“In fact, we got our first batch of results a year ago. We went through very rigorous checking of our data sets, and also ran many tests searching for counterexamples but failed to find any,” said Wu.
Here’s how it worked.
Wu and Zhang fed the images into a machine learning algorithm and then watched to see if any of their four classifiers, which each used a different method of analysing facial features, could infer, or detect, criminality.
They found that all four of their different classifiers were mostly successful and that the faces of criminals and those not convicted of crimes differed in key ways that were perceptible to a computer program. Moreover, “the variation among criminal faces is significantly greater than that of the non-criminal faces,” they said.
“All four classifiers performed consistently well and produced evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic,” they said, “also, we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so called ‘nose-mouth angle’.”
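Geometric features like the ones the pair name can be computed from facial landmark coordinates. Here is a minimal sketch in Python of two of them – eye inner corner distance and the “nose-mouth angle” – using made-up landmark positions; the coordinate values and landmark names are illustrative assumptions, not data or code from the paper:

```python
import math

# Hypothetical 2D facial landmarks (pixel coordinates) -- illustrative
# values only, not taken from the researchers' dataset.
landmarks = {
    "left_eye_inner":  (120.0, 140.0),
    "right_eye_inner": (180.0, 140.0),
    "nose_tip":        (150.0, 180.0),
    "mouth_left":      (130.0, 220.0),
    "mouth_right":     (170.0, 220.0),
}

def distance(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_inner_corner_distance(lm):
    """Distance between the inner corners of the two eyes."""
    return distance(lm["left_eye_inner"], lm["right_eye_inner"])

def nose_mouth_angle(lm):
    """Angle at the nose tip subtended by the two mouth corners, in degrees
    (one plausible reading of the paper's 'nose-mouth angle')."""
    n, ml, mr = lm["nose_tip"], lm["mouth_left"], lm["mouth_right"]
    v1 = (ml[0] - n[0], ml[1] - n[1])
    v2 = (mr[0] - n[0], mr[1] - n[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# For these made-up landmarks: eye distance 60.0 px, angle ~53.13 degrees.
features = [eye_inner_corner_distance(landmarks), nose_mouth_angle(landmarks)]
print(features)
```

A feature vector like this could then be fed to any of the classifier types the pair compared; the paper’s actual landmark detection and classifier internals are not reproduced here.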
The best classifier, a convolutional neural network (CNN), achieved 89.51 percent accuracy in the tests.
“By extensive experiments and vigorous cross validations we have demonstrated that via supervised machine learning, data-driven face classifiers are able to make reliable inference on criminality,” they wrote.
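Cross-validation of this kind means repeatedly holding out part of the data for testing and averaging the resulting accuracies. A minimal sketch of k-fold accuracy estimation in Python – the `train_fn` and `predict_fn` hooks are hypothetical placeholders, not the paper’s classifiers:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validated_accuracy(X, y, train_fn, predict_fn, k=10):
    """Average test accuracy over k folds; train_fn and predict_fn are
    user-supplied hooks standing in for any classifier."""
    folds = k_fold_indices(len(X), k)
    accuracies = []
    for i in range(k):
        test = set(folds[i])
        train = [j for j in range(len(X)) if j not in test]
        model = train_fn([X[j] for j in train], [y[j] for j in train])
        correct = sum(predict_fn(model, X[j]) == y[j] for j in test)
        accuracies.append(correct / len(folds[i]))
    return sum(accuracies) / k

# Toy demo: a majority-class baseline on dummy labels (illustrative only).
def train_majority(Xs, ys):
    return max(set(ys), key=ys.count)

def predict_majority(model, x):
    return model

labels = [1] * 40 + [0] * 20
print(cross_validated_accuracy(labels, labels, train_majority, predict_majority, k=5))
```

Whether an averaged accuracy figure like this validates the underlying claim is, of course, exactly what the paper’s critics dispute.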
While the pair admit in their paper that they are not qualified to discuss or debate societal stereotypes, they also admit that ethically and morally they’re on shaky ground.
“We have been accused on internet of being irresponsible socially,” said Wu, and in the paper they even go on to quote the philosopher Aristotle: “It is possible to infer character from features.”
One major concern going forward – the obvious elephant in the room – is that in an era where machine vision and persistent surveillance are both becoming increasingly pervasive, this work, and work like it, might one day be taken seriously and adopted by governments to begin cataloguing people. And if you think this “handy” algorithm – even if it could be proved not to yield false positives – would only ever be used by authoritarian governments to try to establish criminal intent, along the lines of Minority Report’s “Pre-crime” unit, then you might want to think again, because governments, including the UK government, have been testing the theory that your genes make you a criminal since 1995.