
WHY THIS MATTERS IN BRIEF

Being able to detect and weed out fake news is quickly rising up the political agenda, and companies are in an arms race to try to find solutions fast.

 

Any fact-checker who works in the media has a straightforward but challenging job – make sure all the claims in an article are true. Are simple facts, like the distance between two cities, accurate? Are the quotes correct? Are broader statements true? It’s an important task, and in an era of outright fake news, especially considering the 2016 US election and the recent midterms, it’s becoming even more crucial.

 

To tackle this larger issue, researchers from MIT, together with institutions in Qatar and Bulgaria, have been working on ways to use Artificial Intelligence (AI) to help humans make sense of the complicated media landscape. And they realised that before developing an AI that can fact-check individual claims, they first needed to analyse how reliable different news websites are in the first place.

So they set out to build an AI that could evaluate both how factual different sites are and how politically biased.

To train their AI system, they used data on 1,066 websites listed in Media Bias/Fact Check. The AI then analysed information about each news site, drawing on sources such as the articles on the site itself, its Wikipedia page, its Twitter account, and even its URL. Using features like these, the AI predicted how factual a website was with about 65 percent accuracy, and detected its political bias with about 70 percent accuracy. One of the best resources for the AI turned out to be one that humans rely on, too.
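The article doesn't spell out the researchers' actual model, but the pipeline it describes can be sketched: represent each site as a vector of features drawn from several sources, then fit a classifier to the Media Bias/Fact Check labels. Below is a minimal pure-Python sketch using a nearest-centroid classifier; every site, feature, and label in it is made up for illustration and stands in for the study's much richer feature set.

```python
# Toy sketch of the pipeline described above: each news site becomes a
# feature vector (here: has a Wikipedia page, rate of emotional words,
# whether the URL uses HTTPS), and factuality is predicted by finding
# the closest class centroid. All data is hypothetical.
from math import dist

# Hypothetical training rows: (feature vector, factuality label).
TRAIN = [
    ([1.0, 0.02, 1.0], "high"),
    ([1.0, 0.03, 1.0], "high"),
    ([0.0, 0.15, 0.0], "low"),
    ([0.0, 0.12, 1.0], "low"),
]

def centroids(rows):
    """Average the feature vectors for each label (nearest-centroid model)."""
    sums, counts = {}, {}
    for vec, label in rows:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, value in enumerate(vec):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(model, vec):
    """Assign the label whose centroid lies closest to the site's features."""
    return min(model, key=lambda lbl: dist(model[lbl], vec))

model = centroids(TRAIN)
# A new site with no Wikipedia page and heavily emotional language:
print(predict(model, [0.0, 0.14, 1.0]))  # "low"
```

The design choice mirrors the article's point: no single feature decides the call; the classifier weighs signals from several sources at once, which is why a missing Wikipedia page can shift a prediction without settling it alone.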

 

“It turns out that Wikipedia is very important,” says Ramy Baly, a postdoc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the paper’s first author. That’s because the information you need to know about a news source might be right there – the Wikipedia page for The Onion, for example, labels it as satirical right up top, while The Drudge Report’s page labels it as conservative. Wikipedia proved important for another reason, too.

“Not having the Wikipedia page is associated with a website not being very reliable,” Baly adds.

Keeping in mind the overall trustworthiness of the website itself – and checking its Wikipedia page, if it has one – is a good step for regular people, too. For example, in August, Facebook and a cybersecurity firm announced they’d uncovered “inauthentic” news coming out of Iran. One of the websites involved, the Liberty Front Press, called itself “independent” but appeared to actually be pro-Iran. And tellingly, the site does not appear to have a Wikipedia page.

Of course, the MIT research group aren’t the only ones using AI to analyse language like this – Jigsaw, a unit of Google, has built an AI system that automatically scores the toxicity of reader comments, and Facebook has turned to AI to help augment its efforts to keep hate speech at bay in Myanmar.

 

Another source was even more important than Wikipedia for the MIT researchers’ AI system, though: the articles on the websites themselves. The AI analysed between 50 and 150 articles on each news site and examined the language in them.

“Extremely biased websites try to appeal to the emotions of the readers,” Baly says. “They use a different kind of language” compared to a mainstream, down-the-middle site.
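That observation – emotionally loaded language as a signal of bias – can be operationalised in a crude way by measuring what fraction of an article's words come from a lexicon of charged terms. The tiny lexicon below is a made-up stand-in, not the researchers' actual linguistic features:

```python
import re

# Tiny illustrative lexicon of emotionally charged words; a real system
# would use a much larger lexicon plus many other linguistic features.
EMOTIONAL = {"shocking", "outrageous", "disaster", "unbelievable", "evil"}

def emotional_word_rate(text: str) -> float:
    """Fraction of tokens that appear in the emotional-word lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in EMOTIONAL for t in tokens) / len(tokens)

calm = "The committee reviewed the budget and approved the proposal."
charged = "Shocking disaster! The outrageous plan is unbelievable and evil."
print(emotional_word_rate(calm), emotional_word_rate(charged))
```

A down-the-middle report scores near zero here, while the charged sentence scores high – exactly the kind of contrast Baly describes, reduced to a single number a classifier can use.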

Baly says they’d still like to make their system more sophisticated. Their goal at this stage was to “initiate a new way of thinking of how to tackle this problem.”

So will we one day see an end to fake news and bias? Absolutely not, but maybe we’ll see less of it…

About author

Matthew Griffin

Matthew Griffin, award winning Futurist and Founder of the 311 Institute, a global futures think tank working between the dates of 2020 and 2070, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society, is unparalleled. Recognised for the past five years as one of the world's foremost futurists, innovation and strategy experts Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, Europe’s largest utility company, and his recent work includes mentoring XPrize teams, building the first generation of biocomputers and re-inventing global education, and helping the world’s largest manufacturers envision, design and build the next 20 years of devices, smartphones and intelligent machines. Matthew's clients are the who’s who of industry and include Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, the USAF and many others.
