Hunting terrorists and preventing suicides: inside Zuckerberg’s plan for AI


WHY THIS MATTERS IN BRIEF

  • Facebook’s human operators can’t police everything on the network fast enough, so Zuckerberg is prioritising artificial intelligence


 

In a letter published earlier this week, Mark Zuckerberg announced that Facebook, the company that one day wants to be the world’s largest telepathic network, will rely increasingly on artificial intelligence (AI) to spot and identify signs of terrorism, violence and bullying, and even to prevent suicide.

 

RELATED
Automated trading algorithms accidentally wiped $22Bn off Facebook’s share price

 

However, even though Facebook’s AIs are now building more AIs, he admitted that it would take years for the necessary algorithms to be developed.

The announcement has been welcomed by an internet safety charity, which had previously been critical of the way the social network had handled posts depicting extreme violence.

In his 5,500-word letter discussing the future of Facebook, Zuckerberg said it was impossible for his staff to review the billions of posts and messages that appeared on the platform every day.

“The complexity of the issues we’ve seen has outstripped our existing processes for governing the community,” he said. For example, he highlighted the removal of videos related to the Black Lives Matter movement and the historical napalm girl photograph from Vietnam as “errors” in the existing process.

Facebook was also criticised in 2014, following reports that one of the killers of Fusilier Lee Rigby spoke online about murdering a soldier, months before the attack.

 

RELATED
DeepMind's protein folding AI targets Covid-19 to help find vaccines

 

“We are researching systems that can read text and look at photos and videos to understand if anything dangerous may be happening. This is still very early in development, but we have started to have it look at some content, and it already generates about one third of all reports to the team that reviews content,” he said. “Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda.”

Zuckerberg said his ultimate aim was to allow people to post largely whatever they liked, within the law, with algorithms detecting what had been uploaded. Users would then be able to filter their news feed to remove the types of post they did not want to see.

“Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings,” he explained. “For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum.”

A referendum-type approach could be fun for the people living next to the nudist beaches down in Dorset in the UK… I can see people complaining about that policy already.

 

RELATED
New spray-on polymer makes watermelons, and the Pentagon, indestructible

 

“It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more,” he said.

“At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years.”

The plan was welcomed by the Family Online Safety Institute (FOSI), a member of Facebook’s own safety advisory board. The charity had previously criticised the social network for allowing beheading videos to be seen without any warning on its site.

“This letter further demonstrates that Facebook has been responsive to concerns and is working hard to prevent and respond to abuse and inappropriate material on the platform,” said Jennifer Hanley, FOSI’s Vice President of Legal and Policy. “I also really like the ability for users to customise their own experiences with these developments. It’s important to give users power over their online experiences, and additional tools and controls will be helpful.”

 

RELATED
First-of-a-kind study shows humans and AI are better together when it comes to work

 

While the new announcements will come as a relief to many, the move will inevitably generate even more questions about the implications of an AI overlord which, even though it was programmed by humans, will just as inevitably become something of a black box, deciding what people can and can’t read or watch, and who’s good and who’s bad. This is a story that will run and run.

 

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
