Fact, fiction and fake news—automated fact-checking


This article was first published on Lexis®Library IP & IT on 26 April 2017.

IP&IT analysis: Can automated fact-checking services help combat the spread of fake news online? Alan Owens, partner, and Adelaide Lopez, associate, both at Wiggin, explain that while automated fact-checking systems can be, and are being, deployed to tackle fake news, these systems have their limitations, including the languages in which they operate and the motivations of the organisations or individuals deploying them.

What are the different approaches that can be taken to fact-checking?

Fact-checking now exists on a spectrum that ranges from exclusively human fact-checking at one end, to human fact-checking with the support of an automated fact-checking system, through to fully automated fact-checking.

Human fact-checking systems are as old as news reporting itself and still form the backbone of current fact-checking efforts by numerous media outlets and platforms, including Facebook, the BBC, Google and The Washington Post. However, the combination of the volume of news, the commercial pressure to report on the news as soon as it happens, and the plurality of news sources means that automated systems are now also being deployed and are in constant development.

Automated fact-checking systems seek to identify facts in text (or even live speech) and check them against a list of publicly available resources, past news stories, and other sources, in order to provide an assessment of whether or not the reported fact might be objectively supported.

What are automated fact-checking tools and plug-ins and which ones are currently available?

Automated fact-checking systems can be employed prior to publication to aid the author, and also after publication to identify incorrect, exaggerated or baseless facts contained within the content (for example, articles, posts or interviews). Also in development are ‘tools for fact-checkers’ that show how far a claim has spread online, which will help fact-checkers know where to seek corrections.

There are many developers working on different approaches at the moment. RealDonaldContext, B.S. Detector and FiB, for example, are browser extensions or plug-ins that provide a running commentary, displayed alongside the messages, on the veracity of claims and stories on social media. IBM, meanwhile, is using its cognitive technology, Watson, to build an app that checks stories against 55 million previously published news articles.

Dow Jones is actively selling its fact-checking product, Factiva, to companies across the world. The University of Michigan has created RumourLens, a project that looks at rumours and then scans Twitter to see if they are spreading or being corrected by people on the social network. And in the UK, the charity Full Fact is aiming to build two products—Trends and Robocheck—by the end of 2017. Robocheck will aim to provide accurate information in real-time (for instance, during Prime Minister’s question time), while Trends will be a tool for tracing how far a dubious or incorrect fact may have spread online.

Currently, however, even tech giants like Facebook are still relying on user notifications to identify content concerns. If these concerns are substantive, Facebook sends the details to external fact-checking organisations, such as Poynter, Snopes.com and The Washington Post. If a story is verified as being false, it is pushed to the bottom of a user’s feed and a notice appears for those who open the story or (as is common) share the story based only on the headline.
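By way of a loose sketch only (Facebook’s actual implementation is not public), the Python below shows how a platform might downrank a story and attach a notice once an external fact-checker has verified it as false. The `Story` fields, the ranking penalty and the notice wording are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    headline: str
    rank_score: float              # higher scores appear earlier in a user's feed
    disputed: bool = False
    notices: list = field(default_factory=list)

def apply_fact_check(story: Story, verified_false: bool, checker: str,
                     penalty: float = 0.8) -> Story:
    """If an external fact-checker has verified the story as false, push it down
    the feed and attach a notice shown to users who open or share it."""
    if verified_false:
        story.disputed = True
        story.rank_score *= (1 - penalty)
        story.notices.append(f"Disputed by {checker}")
    return story

story = apply_fact_check(
    Story(headline="Moon made of cheese, experts say", rank_score=0.9),
    verified_false=True,
    checker="Snopes.com",
)
print(story.rank_score, story.notices)
```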

How do they work and what sources do they track?

These tools generally rely on a mixture of statistical analysis and Natural Language Processing (NLP). The latter is the umbrella term for technology that enables a bot, plug-in or algorithm to interpret human language and act on the instruction provided in a more sophisticated and nuanced way. Once an instruction is translated into a language the program can understand, statistical analysis combs a set of data (for example, government statistics or high-volume news stories) and compares it against the claim made. An algorithm then produces an assessment of the claimed fact, and may offer more reliable sources.
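As a rough illustration of that pipeline, the Python sketch below extracts a numeric claim from a sentence, compares it against a small structured reference table standing in for government statistics, and returns an assessment along with the source relied on. It is a minimal sketch under stated assumptions: the regular expression, the `REFERENCE_DATA` table and the tolerance threshold are inventions for the example, and real systems use far richer NLP models and far larger data sources.

```python
import re

# Hypothetical structured reference data standing in for official statistics.
REFERENCE_DATA = {
    "uk unemployment rate": {"value": 4.7, "unit": "%", "source": "ONS labour market statistics"},
    "uk population": {"value": 65.6, "unit": "million", "source": "ONS population estimates"},
}

def extract_claim(sentence: str):
    """Very crude 'NLP': find a topic we know about and the first number in the sentence."""
    number = re.search(r"\d+(?:\.\d+)?", sentence)
    for topic in REFERENCE_DATA:
        if topic in sentence.lower() and number:
            return topic, float(number.group())
    return None, None

def check_claim(sentence: str, tolerance: float = 0.1):
    """Compare the claimed figure against the reference data and return an assessment."""
    topic, claimed = extract_claim(sentence)
    if topic is None:
        return {"verdict": "unverifiable", "reason": "no checkable claim found"}
    reference = REFERENCE_DATA[topic]
    relative_error = abs(claimed - reference["value"]) / reference["value"]
    return {
        "verdict": "supported" if relative_error <= tolerance else "not supported",
        "claimed": claimed,
        "reference": reference["value"],
        "source": reference["source"],
    }

print(check_claim("The UK unemployment rate has soared to 9.5%"))
```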

What are the limitations of these approaches?

There are a number of limitations to the technology itself. NLP, for example, is not available in all languages, and its efficacy is limited by the data available to it and the stories it can read. Furthermore, NLP is not yet sophisticated enough to recognise the nuance of satire, humour and innuendo, which can lead it to categorise such stories incorrectly as fake. However, as machine learning becomes increasingly sophisticated, NLP will perform better automated scrapes of a platform’s posts to triage inaccurate claims, allowing human intervention to be limited to spot-checking articles and posts labelled as ‘fake’.
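A minimal sketch of that triage model, assuming a classifier that returns a ‘likely fake’ score for each post: the `score_post` stub below is a placeholder rather than a real model, and the thresholds are arbitrary. Posts above a high threshold are flagged automatically, borderline posts are queued for human spot-checking, and the rest are left alone.

```python
def score_post(text: str) -> float:
    """Placeholder for an NLP model returning the probability that a post is fake.
    A real system would use a trained classifier; this stub keys on a crude heuristic."""
    return 0.9 if "shocking truth" in text.lower() else 0.1

def triage(posts, flag_threshold=0.85, review_threshold=0.5):
    """Route posts: auto-flag confident cases, queue borderline ones for human review."""
    flagged, review_queue = [], []
    for post in posts:
        score = score_post(post)
        if score >= flag_threshold:
            flagged.append(post)
        elif score >= review_threshold:
            review_queue.append(post)
    return flagged, review_queue

flagged, review_queue = triage([
    "The shocking truth they don't want you to know",
    "Official figures released this morning",
])
```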

Statistical analysis is also limited to data provided in a consistent and structured way, meaning that it is listed in a specific format that is readable to the algorithm. Furthermore, statistical analysis relies on databases, especially the data provided by governments, which is a problem when those databases are not publicly available.

The other problems lie not so much in the technology as in who runs it, and how and when it is deployed. Fact-checking services run on algorithms written by people, not machines, and those people may bring their own biases. The services may be funded by organisations that have their own agendas. And in some jurisdictions, any attempt at press censorship is seen as toxic.

Fact-checking is, to an extent, also driven by consumer demand. If consumers do not notify anyone of the facts they want checking, then, under the current systems, those facts won’t be checked. Furthermore, it is evident that some consumers are ambivalent or even hostile to fact-checking by any external agency, and would not require or even want their favoured news sources to be fact-checked.

What financial incentives are there for those who spam fake news stories and does more need to be done to prevent and regulate these incentives?

Financial incentives for fake news are not new. The ‘yellow press’ was a term coined in the mid-1890s to characterise the sensational journalism that used yellow ink in the circulation war between Pulitzer’s New York World and Hearst’s New York Journal. Critics accused both papers of sensationalising the news, through both exaggeration and entirely fabricated reporting, in order to drive up circulation.

However, that was confined to two newspapers in New York before 1900. Today, fake news stories can have global reach, with clickbait specifically aimed at generating advertising revenue. The regulation of this financial incentive for fake news producers is complicated and depends on the platform. For example, when a fake news story is shared through Google or Facebook, the platform may receive some advertising revenue generated by and for the creators of the fake news, theoretically disincentivising the platforms from making any effort to regulate in this sphere.

However, for Google, the credibility of its search engine is still highly material to its success, and Facebook has increasingly—and publicly—made attempts to improve its own credibility as a media platform. Also key are the advertisement buyers—many brands will seek associations only with credible media outlets, platforms and, increasingly, stories.

Regulation could be contemplated, but it would, by necessity, be limited to individual jurisdictions (or wider jurisdictional areas, such as the EU). Any attempt to regulate or censor the media output of platforms, as well as the traditional media, is likely to be highly contentious in any jurisdiction.

Are there any regulatory issues that arise with the use of automated fact-checking and what is the current legal framework for dealing with this? What if automated fact-checking goes wrong? Where does liability lie?

As it currently stands, automated fact-checking is a tool used by someone wishing to check facts; this could be a news organisation, platform, private individual or fact-checking service. However, if the automated fact-checker makes the wrong call, it may cause its user to repeat or commit a libel (the latter by marking a true story as fake), at which point the current libel legal framework (in the UK) would be engaged, as it would be in respect of any libellous story. But that framework is not nimble, cheap or accessible.

Platforms, meanwhile, may be protected in the EU by the hosting defence in the E-Commerce Directive (implemented in the UK by regulation 19 of the Electronic Commerce (EC Directive) Regulations 2002, SI 2002/2013) and in the US by section 230 of the Communications Decency Act 1996. Both pieces of legislation provide a ‘safe harbour’ for platforms that operate a notice and takedown procedure but otherwise remain passive ‘facilitators of expression’. The analogy here is with telecoms companies, which have no liability for what is sent or said over their networks. Therefore, in the case of news flagged by users as ‘fake’, or personal data that is unlawfully accessed (even in error), the site is immune from liability for any damage caused unless and until it receives notification of the wrongdoing.

Is the law keeping pace with these issues, and if not, what more needs to be done?

There is no direct government regulation or industry regulation of social media or online news providers in the UK. Ofcom (the UK regulator of broadcasting, telecommunications and post) also does not currently have any statutory powers over digital media channels in this country.

However, different jurisdictions may take different approaches. In Germany, for example, the Social Democratic Party singled out Facebook for particular criticism over fake news and hate messages and for failing to take adequate measures to prevent their spread. It has been suggested that Facebook may be required to set up an office, staffed around the clock, to deal with notifications. Facebook would then have 24 hours to take down the offending piece of content, failing which it would be liable to pay a fine of €500,000 and compensate the injured party.

Any other interesting trends or developments in this area?

The ever-increasing public and political awareness of fake news will drive the major platforms and news outlets to make significant investments in the new technologies that will assist them in tackling it.

We expect to see continued support for independent, not-for-profit entities acting as fact-checkers. They could play a corrective role by presenting truthful content and publishing their results in an accessible form, taking advantage of the same digital communication platforms and developments as news publishers. Funding will be a particularly difficult issue, however, given that fact-checkers themselves will come under scrutiny, especially if they are paid for by governments or private organisations.

Platforms may be required to address complaints very quickly in the face of large fines in future, as in the proposed German model. They may also need to allow independent fact-checking services to be directly accessible from their site, or carry live notifications on the site itself. We might also see public information campaigns and school programmes aimed at educating citizens about the issue, including how to identify and report problems, and help prevent the spread of inaccurate news.

We also expect to see different solutions proposed in different jurisdictions, creating significant friction for global businesses. The largest of those businesses, keen to show they are doing all that they can to address these issues, will spend heavily to create effective automated systems in the hope of heading off government intervention and regulation.

Interviewed by Giverny Tattersfield.