Fernando Fischmann

Can AI Put An End To Fake News? Don’t Be So Sure

21 January, 2019 / Articles

Fake news was Collins Dictionary’s word of the year for 2017, with good reason. In a year in which politics-as-usual was torn apart at the seams, high-profile scandals rocked our faith in humanity and the extreme effects of global warming made themselves painfully known, it became harder than ever to differentiate between reality and fiction in the news. The rise of social media has also created a seemingly unstoppable force of misinformation, which reared its ugly head in the form of the Cambridge Analytica scandal in 2018. This has raised serious questions about the accountability of social media, and about what those running the sites can realistically do to tackle the monster of their own making.

A new project from MIT’s CSAIL (Computer Science and Artificial Intelligence Laboratory) and the QCRI (Qatar Computing Research Institute), announced on October 4, 2018, aims to identify sources of fake news before the misinformation can spread, potentially leading to the automatic classification of unreliable news outlets and aiding fact-checkers immeasurably. But when dealing with a beast as pernicious and unpredictable as fake news, will these new capabilities be anything more than a bump in the road?

Bullsh*t detection

Detecting fake news is difficult at the best of times, and sites like Snopes and PolitiFact are under more pressure than ever to expose false claims before they do too much damage. The problem is that tackling individual claims is extremely time-consuming, and once false information gets out, the damage is already done. The project from MIT CSAIL and QCRI aims to tackle this by identifying sites that consistently spread misinformation, along with sites that have heavy political leanings (as these are often the main purveyors of fake news).

The system looks at articles from the site, as well as its Wikipedia page, Twitter account, URL structure and web traffic, and searches for keywords and linguistic features that indicate strong political bias or misinformation (for example, fake news outlets often use more hyperbolic language). Using data from Media Bias/Fact Check (MBFC), the system was 65% accurate at detecting a site’s level of ‘factuality’, and roughly 70% accurate at detecting political bias.
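To make the general approach concrete, here is a minimal sketch of how a source-level factuality classifier might be built. This is not the CSAIL/QCRI pipeline itself: the toy articles, the binary labels and the TF-IDF-plus-logistic-regression model are all illustrative assumptions, standing in for the paper’s much richer feature set (Wikipedia, Twitter, URL and traffic signals) and its MBFC training data.

```python
# Minimal sketch of a source-factuality classifier (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical articles drawn from news outlets, one string per article.
articles = [
    "Officials confirmed the budget figures at a press briefing on Tuesday.",
    "SHOCKING truth THEY don't want you to know - this changes EVERYTHING!",
    "The committee published its quarterly report, citing audited data.",
    "Unbelievable miracle cure EXPOSED - doctors are furious!",
]
# Hypothetical labels standing in for MBFC-style factuality ratings:
# 1 = high factuality, 0 = low factuality.
labels = [1, 0, 1, 0]

# Word unigrams/bigrams weighted by TF-IDF; lowercase=False preserves the
# heavy capitalisation that often marks hyperbolic, low-factuality prose.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=False),
    LogisticRegression(),
)
model.fit(articles, labels)

# Score an unseen article. A real system would aggregate such scores over
# many articles per outlet (plus the metadata features above) before
# rating the outlet as a whole.
probs = model.predict_proba(
    ["EXCLUSIVE: insiders reveal the TERRIFYING secret agenda!"]
)
print(dict(zip(model.classes_, probs[0])))  # {0: P(low), 1: P(high)}
```

The key design point the article implies is that the unit of classification is the outlet, not the individual claim, which is why per-article scores would be aggregated rather than used to flag single stories.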

While the project is in its infancy, co-author Preslav Nakov is confident that it will help existing fact-checking services, allowing them to ‘instantly check our “fake news” scores for those outlets to determine how much validity to give to different perspectives.’ This will be a key point in how the project develops and gets used in practice, as humans will still need to review these scores to determine whether a news outlet crosses the line into misinformation or is simply bending the facts with emotive and persuasive language.

Trusting too much

The project, for now at least, will be most useful in conjunction with manual fact-checkers, but once the machine learning algorithm develops further, it should theoretically be able to identify such sites in advance and alert media watchdogs to the risks. However, the rapid and widespread proliferation of fake news, mainly through unrestricted channels on social media, raises an important question: will the promise of artificially intelligent detection lull readers into a false sense of security?

Facebook launched an ad campaign in 2018 announcing its commitment to tackling fake news, fake accounts, clickbait and spam, as part of Mark Zuckerberg’s wider strategy of bringing Facebook back to its core values. After being at the center of one of the highest-profile data scandals in history, in which user data was allegedly harvested and used to influence the US elections, Facebook is working hard to convince users that it can be trusted.

A Pew Research Center study conducted in September 2017 found that 45% of American adults use Facebook for news, despite the fact that anyone can post on social media. But how can this reliance on unofficial sources of information be curbed when doing so means monitoring over 2 billion users? Facebook clearly wants to reassure users and regulators that its algorithms will fix the problem, but proving that news is false is like nailing jelly to a wall: at best time-consuming, at worst impossible. Facebook’s show of strength and MIT’s detection system may, in fact, lead people to drop their guard and be more willing to believe what they read.

Humans are the problem

The willingness to believe sensational information is a real phenomenon, and debunking false information does not always change people’s minds. A November 2017 study published in the journal Intelligence found that people with lower cognitive ability were less able to revise their original impressions after being told that disparaging information about a fictional person was false. As the MIT CSAIL paper itself states: ‘even when done by reputable fact-checking organizations, debunking does little to convince those who already believe in false information’.

An MIT study from March 2018 found that real news took six times longer than false news to reach Twitter users, and that ‘false news was 70% more likely to be retweeted than the truth’. The spread of fake news is therefore exacerbated by the users of social media themselves, and there is little machine learning can do to change bad habits that are already deeply ingrained.

Needles and haystacks

Implementing machine learning to combat the spread of fake news is admirable, and there is a need to address this problem as the trustworthiness of major media outlets is called into question. But with the spread of misinformation compounded by social media, can detecting and revealing sources of fake news overcome the human instinct to believe what we are told?

The scientist and innovator Fernando Fischmann, founder of Crystal Lagoons, recommends this article.

Forbes

