Fact, fiction and fake news—exploring the impact of fake news

This article was first published on Lexis®PSL IP & IT on 24 April 2017.

IP & IT analysis: What is ‘fake news’ and what kind of an impact does it have on a democratic society? Adelaide Lopez, associate at Wiggin LLP, outlines what constitutes fake news, as well as explaining the dangers it poses and the tools that can be utilised to combat it.

What constitutes ‘fake news’? What if the person publishing or writing a so called ‘fake news’ story genuinely believes it to be true?

There are two categories of ‘fake news’:

  • false or inaccurate stories usually circulated to further a political or social agenda, or
  • accurate news reporting mischaracterised as misinformation because someone disagrees with or dislikes the content

President Trump and many in his administration have been roundly accused of engaging in the second kind of ‘fake news’: declaring that long-established and credible news outlets such as the New York Times are peddling ‘fake news’ about his administration’s ties with Russia, for example, or that the White House’s calculation of the number of attendees at the inauguration was an ‘alternative fact’ to that reported by the National Park Service and the D.C. Metro.

The creation of straight-up ‘fake news’ (ie purported ‘news’ containing completely fabricated facts) is now its own cottage industry. There are professionals who write these sensationalist and deeply inaccurate articles and sell them to the highest bidder or post the stories personally to drive traffic to their website or profile. The story furthers an agenda, whether it’s to discredit an opponent, or to scare the audience into supporting a policy position. The person writing the ‘fake news’ knows that it’s fake or, at the very least, should have known, given the shallow depths of their fact checking. A publisher, or more likely the person sharing the story with their friends online, may be unaware that the news is fake (because these days truth is stranger than fiction), but that doesn’t make the story any less untrue or any less dangerous.

There is also the brand of ‘fake news’ that contains accurate information buried beneath misleading headlines and editorial commentary (masquerading as journalism) so as to distort the facts beyond all recognition. The nuggets of truth contained within such an article do not absolve the writer or publisher: it is every bit as much ‘fake news’ as a wholly fabricated story, because it is designed to mislead the audience into supporting a particular social or political agenda by deceiving the reader into believing a distorted version of the facts.

Why has a parliamentary inquiry been launched by MPs into the phenomenon?

Only 4% of UK adults can correctly identify whether a news story is fake or real, according to a recent YouGov poll. The confusion arises primarily because ‘fake news’ and real news appear side by side on social media. Without training (and, some say, more formal education), it is very difficult for the average reader to tell the difference, although knowing the difference would not necessarily help. Studies show that visitors to ‘fake news’ sites also regularly check credible news sites. But this is not typically because they are seeking accurate news to form a balanced and reasoned opinion on an issue; rather, they are checking out the ‘lies’ spread by the ‘mainstream media’.

The press is our fourth estate. It is vital to our democracy that the press is able to provide the public with an unbiased record of our society, to hold people to account. Distrust of the press is not only damaging to its vitality, it poses a real threat to the rule of law. If there is no sense of the difference between what is true and untrue, how can a line be drawn between right and wrong? It is that fundamental understanding that is at the heart of the rule of law.

We are encouraged, although not surprised, that the UK Parliament has recognised the danger of ‘fake news’ and launched an inquiry. Facebook has already revealed that it has an unofficial task force addressing the problem and that it is introducing new ‘signals’ to allow its algorithm to better identify and rank ‘authentic’ content. Even for us mere humans, there are tell-tale markers of ‘fake news’: we can look out for articles that cite studies and polls without providing them, and pay attention to the agenda of the piece. The curtain can be pulled aside and people can be taught to detect what is real and what is fake, but it will be a struggle. Those who create ‘fake news’ know how to make their version of events appealing, and that, combined with its ubiquity, makes it the worst kind of enemy to fight.
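As a purely illustrative sketch (not Facebook’s actual ‘signals’, which are not public, and not a real detection system), the ‘unsourced claim’ marker mentioned above could be approximated in a few lines of code — flag an article that invokes studies or polls without providing any link or citation. All pattern lists and names here are hypothetical:

```python
import re

# Hypothetical marker phrases that invoke evidence without naming it.
CLAIM_PATTERNS = ["studies show", "a recent poll", "experts say", "research proves"]
# Hypothetical signs that a source actually accompanies the claim.
SOURCE_PATTERNS = [r"https?://", r"according to [A-Z]", r"\[\d+\]"]

def unsourced_claim_flags(text: str) -> list[str]:
    """Return the claim phrases that appear with no accompanying source.

    A toy heuristic only: real ranking systems weigh many signals and
    cannot be reduced to string matching.
    """
    has_source = any(re.search(p, text) for p in SOURCE_PATTERNS)
    if has_source:
        return []
    return [p for p in CLAIM_PATTERNS if re.search(p, text, re.IGNORECASE)]

article = "Studies show the policy failed, and experts say worse is coming."
print(unsourced_claim_flags(article))  # → ['studies show', 'experts say']
```

The point of the sketch is the limitation it exposes: a heuristic like this catches only the crudest cases, which is why the article argues that education and platform accountability, not detection alone, are needed.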

What is the scope of the inquiry and what will it examine?

The government launched its inquiry into ‘fake news’ on 30 January 2017 to solicit data and solutions for what it labels ‘a threat to democracy’. The committee said that it was prompted to launch the inquiry as the influence of fake news on the outcome of the 2016 elections in the United States became increasingly apparent. The inquiry is therefore looking closely at the definition of ‘fake news’, its impact on society and government, the agendas behind it, and practical solutions to identifying and eliminating it. The inquiry clearly stems from the perspective that the eroding credibility of the press is damaging to our democracy, creates a misinformed electorate, and has potential to be the death knell to the legitimate press.

The committee will be considering the commercial and political agendas behind the circulation of ‘fake news’. Much of the ‘fake news’ circulating on social media is created to further a political or social agenda. However, some is created simply as a tool to drive people to a social media profile or website; the increased number of clicks translating to an increase in ad revenue. Either way, the inquiry calls out ‘fake news’ for what it is: propaganda. This being the starting place for the inquiry, the committee will be looking for solutions from a variety of angles, from algorithms that detect ‘fake news’, to revisiting advertising practices and regulations, and considering who is responsible for implementing reform.

Of particular interest to our firm, which has been a pioneer in blocking online pirated content, is that Damian Collins MP, the chair of the Culture, Media and Sport Committee, pointed out that since:

‘Tech companies have accepted they have a social responsibility to combat piracy online and illegal sharing of content, they also need to help address the spreading of ‘fake news’ on social media platforms.’

The committee is obviously very clued-in to the fact that fighting the battle against ‘fake news’ will require a multi-pronged attack:

  • educating news consumers
  • educating the general public, and
  • increasing accountability of social media companies

Is there a danger that social media and search engine algorithms are merely providing echo chambers for news stories that shock rather than inform?

Yes. Social media replicates the provincial mentality that used to be reserved for remote small towns. We are fooled into thinking that we are worldly because we are ‘friends’ with someone in Australia, or we ‘follow’ a news organisation in Dubai. But algorithms lead us to follow those users who reflect our own patterns (who read what we read, who follow what we follow, who are really already in our circle), so the echo chamber is inevitable.

As ‘fake news’ stories tend to tap into the fears, assumptions and beliefs of a particular group, a particular story will appear in your feed time and again, reinforcing its credibility because you think that it is ‘everywhere’, but it’s not. Its appearance in your feed adds no more to its credibility than if it were gossip at the local pub, which it is.

What guidelines are currently available for news organisations and journalists? Is there a need for clear statements to be issued by a regulatory body about misleading claims as soon as they appear in the public domain?

Legitimate news organisations, such as The Times and Channel 4, are already subject to regulations that would prevent the publication of ‘fake news’. The UK print media, for example, predominantly subscribes to the Independent Press Standards Organisation (IPSO), which is a self-regulating body. Those major publishers who do not subscribe to IPSO, such as The Financial Times and The Guardian, nevertheless have strict internal policies and guidelines to ensure their journalism meets the same standards required by IPSO. The broadcast media, meanwhile, are regulated by Ofcom, a regulator established and operating under a number of Acts of Parliament, including the Communications Act 2003.

Both IPSO and Ofcom have strict guidelines regarding accuracy, which include a requirement that the press make clear distinctions between conjecture and fact and that they provide a fair opportunity to respond to inaccuracies. The problem is that the writers and publishers of ‘fake news’ are not members of the legitimate press and do not subscribe to IPSO. Generally they are anonymous, creating this content with impunity for one nefarious purpose or another.

Regardless, like online piracy, going after the seller of the content is not the answer. Instead the government needs to (and the inquiry will bear this out):

  • educate users to detect ‘fake news’ and the very real dangers of spreading it, and
  • go after the system that makes the business of ‘fake news’ possible by making social media, and other sites circulating ‘fake news’, responsible for the better quality control of the ‘news’ made available through their site

The first stage will require considerable money and effort on the part of the government to develop a strategy that will penetrate the general consciousness. Developing such a strategy will lessen some of the impact. However, as long as ‘fake news’ is a valuable revenue stream, the creators of ‘fake news’, like the perpetrators of online piracy, will constantly adapt their methods in order to evade detection. This is why the second prong of the attack is so important, although it poses a potential moral and legal quagmire for sites.

The initial hurdle for these tech companies is a matter of principle. Remember that ‘fake news’ is not just strictly inaccurate information, it is also commentary dressed up as news that has misconstrued and misrepresented facts through editorialising and conjecture. If social media begins policing its content for this kind of ‘fake news’, it enters a grey area where it becomes the arbiter of what is ‘right-thinking’ versus ‘wrong-thinking’. These are private companies with their own terms and conditions so, on the one hand, if someone breaches their terms, it is a strict question of contract law whether or not they have grounds to remove the content. However, as the news platform for over 40% of adults in the UK, such actions by social media sites like Twitter and Facebook would be seen by many as censorship, even if removing the content and blocking the user is within their rights. This in turn would cause an inevitable chilling effect with regard to free speech as news providers wonder whether they will be considered ‘right-thinking’ enough to publish certain content on the site.

Liability is the other hurdle for tech companies providing a platform for news. Facebook insists that it is a curator of news, not a traditional media company. Clearly it is still coming to terms with its position as a news outlet, and is trying to protect its legal position and limit liability for the content posted by users on its site. Social media sites and other online news platforms are protected by legislation in the EU and the US, which limits the sites’ liability for the material they host, as long as they are passive players and do not interfere with or edit the content in any way. This safe harbour has allowed chat rooms, social media, and online commentary to thrive over the last 15 years.

In the UK, regulation 19 of the E-Commerce Regulations 2002, SI 2002/2013 (implementing the EU E-Commerce Directive), provides that where the site is storing information provided by a recipient of the service and in doing so is only passively providing the opportunity for publication (eg the comments section to a news article) and provides a notice and takedown procedure, it will not be liable for postings made on the site absent notification. In the US, Section 230 of the Communications Decency Act 1996 provides that the site will not be treated as a publisher of material for the purposes of determining liability for statements posted on its site as long as it is ‘merely a facilitator of expression’. These companies increase their exposure considerably if they start policing the news provided on their site beyond the mere function of an algorithm, making them understandably wary of doing so, and eager to find other solutions to the problem of ‘fake news’.

Are there any other interesting trends or developments worthy of mention?

The future of ‘fake news’ is less techy and more traditional. The trends we are currently seeing in this area include teaching children how to identify ‘fake news’ and a return to conventional news sources. There is a surge of civic engagement and a recognition that legitimate reporting and analysis are necessary in a democratic society. Across the US and the UK, lessons in how to differentiate between ‘fake news’ and real news are being taught from primary school through to university. Educators recognise that ‘fake news’ has been around for a while (think of all of the hoax news stories of celebrity deaths), so their job is to teach kids the importance of maintaining a reliable press, and to give them the tools to recognise ‘fake news’ when they see it.

Online subscriptions to major mainstream newspapers have increased considerably in the past three to six months. This means that more readers are going straight to the source for the news, instead of finding it through social media or other curated platforms where they are more likely to encounter ‘fake news’. Hopefully trends such as these will continue to grow and ‘fake news’ will become the footnote instead of the headline.

Interviewed by Giverny Tattersfield.