Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working between 2020 and 2070, and an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products, services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
Bots can be used in many ways: they can spread fake news or, as in this case, be used to catch paedophiles and keep children safe from online sexual predators.
Everyone knows that it’s getting harder to tell bots from real people on the internet; in fact, in a recent US survey, over 60 percent of adults said they could no longer tell the difference. And oddly, as far as this latest piece of news goes, that’s a good thing. A very good thing. Because the same kinds of bots that are out there spreading fake news, malware, and other nefarious wares are now being used to catch paedophiles.
As many people know, it’s a sad fact, a very sad fact, that paedophiles often hang out in online chatrooms looking to strike up conversations with unsuspecting children, and in the worst cases they arrange face-to-face meetings, resulting in sexual assault. A new Artificial Intelligence (AI) algorithm, however, has been designed to help keep that from happening.
Known as the Chat Analysis Triage Tool (CATT), the algorithm was created by a team from Indiana’s Purdue University led by assistant professor Kathryn Seigfried-Spellar. It was developed by analyzing 4,353 messages across 107 chat sessions involving sex offenders who were subsequently arrested. More specifically, the researchers used a process known as statistical discourse analysis to identify trends in word usage and conversation patterns.
Among other things, CATT is able to detect a tactic commonly used by offenders seeking a meeting, in which they first attempt to gain trust by disclosing something about themselves. This usually takes the form of a negative personal story, such as having been the victim of parental abuse.
Additionally, before suggesting a meeting, offenders often chat with the child for a period of weeks or even months, essentially “grooming” them. By contrast, paedophiles who are only interested in chatting typically quickly move on from child to child.
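To make the idea concrete, here is a deliberately simplified sketch of how behavioral cues like these could feed a risk score. This is not the actual CATT model: the feature names, weights, and logistic form are all hypothetical, invented only to mirror the cues the article describes (trust-building self-disclosures, long grooming periods, talk of meeting up).

```python
import math

# Toy illustration only -- NOT the real CATT algorithm. The features and
# weights below are hypothetical stand-ins for the cues described above.
def risk_score(self_disclosures: int, days_active: int, meeting_mentions: int) -> float:
    """Return a 0-1 pseudo-probability that a chat is building toward a meeting."""
    # Hypothetical logistic model: each cue nudges the score upward.
    z = 0.8 * self_disclosures + 0.05 * days_active + 1.2 * meeting_mentions - 4.0
    return 1.0 / (1.0 + math.exp(-z))

# A months-long chat with trust-building disclosures and talk of meeting
# scores far higher than a brief, superficial one, so police could
# prioritise the former for investigation.
high = risk_score(self_disclosures=3, days_active=60, meeting_mentions=2)
low = risk_score(self_disclosures=0, days_active=2, meeting_mentions=0)
```

The real tool presumably derives its features and weights from the statistical discourse analysis of the 4,353 annotated messages; the point of the sketch is only that distinct linguistic and temporal cues can be combined into a single, rankable risk estimate.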
By detecting these factors and others, it is now hoped that CATT could be used to sort through the plethora of suspicious conversations in chatrooms, alerting police to ones that might be leading up to a real-world encounter. To that end, the university now plans to turn the tool over to several law enforcement departments for a test run – it could be in use by the end of the year.
“If we can identify language differences, then the tool can identify these differences in the chats in order to give a risk assessment and a probability that this person is going to attempt face-to-face contact with the victim,” says Seigfried-Spellar. “That way, officers can begin to prioritize which cases they want to put resources toward to investigate more quickly.”
A paper on the research is being published in the journal Child Abuse & Neglect.