Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew has recently mentored Lunar XPrize teams, re-envisioned global education and training with the G20, and helped the world’s largest organisations envision and ideate the future of their products, services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Being able to detect and weed out fake news is quickly rising up the political agenda, and companies are in an arms race to try to find solutions fast.
Any fact-checker who works in the media has a straightforward but challenging job – make sure all the claims in an article are true. Are simple facts, like the distance between two cities, accurate? Are the quotes correct? Are broader statements true? It’s an important task, and in an era of outright fake news, especially considering the 2016 US election and the recent midterms, it’s becoming even more crucial.
To tackle this larger issue, researchers from MIT as well as institutions in Qatar and Bulgaria have been working on a way to use Artificial Intelligence (AI) to help humans make sense of the complicated media landscape. And they realised that an important step before developing an AI that can fact-check individual claims was to analyse how reliable different news websites themselves are in the first place.
So they set out to make an AI that could evaluate how factually strong different sites are, and their political bias.
To train their AI system, they first used data on 1,066 websites listed in a source called Media Bias/Fact Check. The AI then analysed information about each news website, drawing on sources like the articles on the site itself, its Wikipedia page, its Twitter account, and even its URL. Using signals like these, the AI predicted how factual a website was with about 65 percent accuracy, and detected its political bias with about 70 percent accuracy. One of the best resources for the AI is one that humans rely on, too.
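The overall approach, extracting groups of signals from several sources and feeding them to one classifier, can be sketched roughly as follows. This is an illustrative toy, not the researchers’ actual model: the feature names, weights, and threshold here are invented for the example, since the article doesn’t give the paper’s details.

```python
# Illustrative sketch: combine signals from a site's articles, its
# Wikipedia page, its Twitter account, and its URL into one feature
# vector, then score it with a simple linear model. All feature names
# and weights below are made up for illustration.

def extract_features(site):
    """Turn raw signals about a news site into a numeric feature vector."""
    return [
        site["avg_emotional_words"],            # from the site's own articles
        1.0 if site["has_wikipedia"] else 0.0,  # does a Wikipedia page exist?
        site["wiki_mentions_satire"],           # e.g. labelled "satirical"
        site["twitter_verified"],               # Twitter account verified?
        site["url_has_odd_tld"],                # suspicious URL pattern
    ]

# Hand-set weights standing in for a trained classifier: positive weights
# push toward "low factuality", negative weights toward "high factuality".
WEIGHTS = [0.8, -1.5, 1.2, -0.6, 1.0]
THRESHOLD = 0.0

def predict_factuality(site):
    score = sum(w * f for w, f in zip(WEIGHTS, extract_features(site)))
    return "low" if score > THRESHOLD else "high"

mainstream = {"avg_emotional_words": 0.2, "has_wikipedia": True,
              "wiki_mentions_satire": 0.0, "twitter_verified": 1.0,
              "url_has_odd_tld": 0.0}
print(predict_factuality(mainstream))  # -> high
```

In the real system the weights would be learned from the 1,066 labelled sites rather than set by hand, but the shape of the pipeline, many heterogeneous signals reduced to one prediction, is the same.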
“It turns out that Wikipedia is very important,” says Ramy Baly, a postdoc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the paper’s first author. That’s because the information you need to know about a news source might be right there – the Wikipedia page for The Onion, for example, labels it as satirical right up top. The Drudge Report’s Wikipedia page meanwhile labels it as conservative. Wikipedia was important for another reason.
“Not having the Wikipedia page is associated with a website not being very reliable,” Baly adds.
Keeping in mind the overall trustworthiness of the website itself – and checking its Wikipedia page, if it has one – is a good step for regular people, too. For example, in August, Facebook and a cybersecurity firm announced they’d uncovered “inauthentic” news coming out of Iran. One of the websites associated with Iran was called the Liberty Front Press; it described itself as “independent” but appeared to actually be pro-Iran. And tellingly, the site does not appear to have a Wikipedia page.
Of course, the MIT research group aren’t the only ones using AI to analyse language like this – Google’s Jigsaw unit has built an AI system, Perspective, that automatically scores the toxicity of reader comments, and Facebook has turned to AI to help augment its efforts to keep hate speech at bay in Myanmar.
Another source was even more important than Wikipedia for the MIT researchers’ AI system though, namely articles on the websites themselves. The AI was able to analyse between 50 and 150 articles on each news site and examine the language in them.
“Extremely biased websites try to appeal to the emotions of the readers,” Baly says. “They use a different kind of language,” when compared to a mainstream, down-the-middle site.
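One concrete way a linguistic signal like this could be computed, offered here purely as a hypothetical illustration since the article doesn’t describe the researchers’ actual features, is the fraction of emotionally loaded words averaged over a site’s sampled articles:

```python
# Toy illustration of one linguistic feature: the share of words in a
# site's articles that come from a small emotion lexicon. The lexicon
# is invented for this example, not taken from the MIT paper.

EMOTION_WORDS = {"shocking", "outrageous", "disaster", "amazing",
                 "terrifying", "unbelievable", "destroy", "explosive"}

def emotion_ratio(article):
    """Fraction of an article's words that appear in the emotion lexicon."""
    words = [w.strip(".,!?\"'").lower() for w in article.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in EMOTION_WORDS)
    return hits / len(words)

def site_emotion_score(articles):
    """Average the ratio over the 50-150 articles sampled from one site."""
    return sum(emotion_ratio(a) for a in articles) / len(articles)

tabloid = ["Shocking disaster! Outrageous cover-up will destroy everything."]
wire    = ["The committee approved the budget after a brief debate."]
print(site_emotion_score(tabloid) > site_emotion_score(wire))  # -> True
```

A real system would use far richer features than a word list, but the intuition matches Baly’s point: heavily biased outlets tend to lean on emotive vocabulary that a classifier can pick up statistically.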
Baly says they’d still like to make their system more sophisticated. Their goal at this stage was to “initiate a new way of thinking of how to tackle this problem.”
So will we one day see an end to fake news and bias? Absolutely not, but maybe we’ll see less of it…