Combating fake news: the EU Code of Practice on Disinformation

This article was first published in Entertainment Law Review, Issue 2, 2019.

The ubiquity of fake news and disinformation and its disruptive and dangerous impact on democratic processes is increasingly clear. Understandably, there have been strong calls for something to be done. On 28 September 2018 online platforms and the advertising industry took a step towards answering these demands by unveiling a self-regulatory EU Code of Practice on Disinformation. With signatories including Google, Facebook, Twitter and Mozilla, and commitments ranging from increased transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation, the EU Commissioner for Digital Economy and Society, Mariya Gabriel, declared the Code to be “the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis.”

Background

In May 2018, a multi-stakeholder forum was charged with drafting a self-regulatory Code of Practice on Disinformation for online platforms and the advertising industry. The signatories have committed to take action focussed on:

  • Disrupting advertising revenues of certain accounts and websites that spread disinformation;
  • Making political advertising and issue-based advertising more transparent;
  • Addressing the issue of fake accounts and online bots;
  • Empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content; and
  • Empowering the research community to monitor online disinformation through privacy-compliant access to platforms’ data.

Definition of “disinformation”

Disinformation is by no means an easy concept to define. For the purpose of the Code, it is “verifiably false or misleading information” which is (i) created, presented and disseminated for economic gain or to deceive the public and (ii) may cause public harm, which is to say threats to democratic political and policy-making processes as well as public goods such as the protection of EU citizens’ health, the environment or security.

Significantly, disinformation does not include misleading advertising, reporting errors, satire and parody, or clearly identified partisan news and commentary.

Commitments and corresponding best practice examples

Scrutiny of ad placements

The Code includes a commitment to take action to disrupt advertising and monetisation incentives for purveyors of disinformation in order to reduce their revenues. This might include, for example, restricting advertising services or paid placements, promoting verification tools, and providing advertisers with access to client-specific accounts to help them monitor the placement of ads and make choices as to where ads are placed.

The Best Practice Principles published alongside the Code encourage platforms to “follow the money” in order to ensure that bad actors do not receive remuneration. Examples of good practice in this regard include Facebook’s false news and ads policies and Google’s policy on misrepresentation, the latter of which prohibits the placement of Google ads on pages that misrepresent, misstate or conceal information about themselves, their content, or the primary purpose of their web properties.

Political advertising and issue-based advertising

The Code also seeks to address the perceived lack of transparency surrounding political and so-called ‘issue-based’ advertising. The signatories commit to follow all relevant laws to ensure that advertisements are clearly distinguishable from editorial content, including news, whatever their form and whatever the medium used. Equally, they agree to enable public disclosure of political advertising. As regards ‘issue-based’ advertising, more work needs to be done to arrive at a working definition and to ensure that fundamental rights to freedom of expression are not jeopardised, but there are commitments here too to use reasonable efforts to devise approaches to publicly disclose such advertising.

Examples of best practice in this area include Facebook’s ‘Why am I seeing this ad’ feature, which allows users to see why particular ads are shown to them and to adjust their ad preferences, and the political advertising policies of Facebook and Google.

Integrity of Services

The abuse of platforms through bots and fake accounts is another area that the Code focusses on. In addition to ensuring that efforts are made to close fake accounts and that any actions by bots cannot be confused with human interactions, the Code includes commitments to devise and publish clear policies regarding the misuse of automated bots and other systems.

The Code cites with approval the impersonation and spam policies adopted by the likes of Google, YouTube, and Twitter, some of which include provisions for the permanent suspension of accounts that portray another person in a confusing or deceptive manner.

Empowering consumers

The Code’s signatories reject the idea of deleting content or messages simply on the basis that they are thought to be ‘false’, whether as part of a voluntary code or in response to governmental decree. Instead, they commit to invest in technological solutions to help users discover diverse perspectives and make informed decisions when they encounter potentially false news. This might include developing indicators of trustworthiness or prioritising content that is relevant and authoritative in search results, feeds, or other automatically ranked distribution channels.

Providing users with information and tools to empower them in their online experience is encouraged. For example, Mozilla’s Information and Trust Initiative seeks to “develop products, research, and communities to battle information pollution and disinformation”, whilst its Coral Project provides a variety of open source tools to help news organisations engage more closely with their audiences and assists journalists to identify false information.

Empowering the research community

Finally, the Code states that its signatories will commit to support research into the impact of disinformation, and encourage research into political advertising. They commit to sharing datasets, undertaking joint research, and convening an annual event to bring together those in academia, the fact-checking community, and other relevant stakeholders.

The Code points to a variety of initiatives as examples of best practice in this area, including Facebook’s Elections Research Council, which supports independent and credible research on the role of social media in elections and democracies more broadly, and the datacommons.org project, which shares fact-check data with academic researchers.

Monitoring the Code’s effectiveness

The Code includes provisions to monitor its own effectiveness: signatories commit to analysing its progress and to producing publicly available annual accounts of their work to counter disinformation. They will work with other bodies such as the World Federation of Advertisers and the European Association of Communication Agencies to understand relevant activity in their respective fields, and commit to cooperating with the European Commission in assessing how the Code is functioning.

Comment

The Code represents an important step in addressing a pervasive problem that has gained prominence globally. Online manipulation and disinformation have been employed during elections in at least 18 countries in recent years, and more than 3,900 examples of pro-Kremlin disinformation contradicting publicly available facts have been identified.

As with any regime of self-regulation there will of course be detractors, including those who sat on the Sounding Board who argue that the Code does not go far enough. However, EU Commissioner for Digital Economy and Society, Mariya Gabriel, has urged online platforms and the advertising industry to “immediately start implementing the actions to achieve significant progress and measurable results in the coming months”.

The stakes are high and Ms Gabriel is clear: “Online platforms need to act as responsible social players especially in this crucial period ahead of elections. They must do their utmost to stop the spread of disinformation.” The Code represents a clear step forward in this quest.