Ofcom publishes research into the impact of online hate

Ofcom says that the qualitative research report, commissioned from independent research organisation Traverse, examines the impact of exposure to online hate and hateful abuse on people with protected characteristics. It focuses on content found on user-to-user services.

Ofcom has carried out this research in line with its duty to promote and research media literacy in the UK. The report is one of a series of studies into online safety that will inform Ofcom’s preparations for implementing the new online safety laws. As part of these preparations, Ofcom is building a robust evidence base, bringing together internal and external data collected using different methods from a variety of sources. This programme of research further develops Ofcom’s understanding of online harms and how it can help to promote a safer user experience.

In the report, online hate is defined as hateful content directed at a group of people based on a particular protected characteristic, and hateful abuse is defined as hateful content directed at an individual based on a protected characteristic they have or are perceived to have.

Key findings from the report include:

  • exposure to online hate was reported as a common feature of participants’ online experience;
  • frequency and types of hateful abuse experienced were strongly determined by context, including how often participants used different platforms and how they used them;
  • impacts tended to be more pronounced where content targeted a participant’s own protected characteristics;
  • in terms of behaviours, anxiety and fear could lead participants to limit what they shared or expressed online and which spaces they visited; these effects also extended offline;
  • strategies to cope included blocking and reporting, challenging and engaging, seeking support, and self-censoring/retreating;
  • despite the pervasiveness of online hate and abuse, participants often wanted to protect free speech, and it was felt almost unanimously that mandatory user verification via uploading a form of ID was not a good idea; however, whilst freedom of speech was valued, it was common to say that it should not mean freedom from consequences; harming and threatening others was often seen as the “red line” in terms of free speech, not least because such content could have a chilling effect on others;
  • the hateful content participants experienced was mostly seen as not compliant with platforms’ policies; participants called for platforms to have more active and consistent moderation;
  • participants felt that platforms had the primary responsibility to moderate/remove hateful content in line with their policies and the law;
  • it was felt that a regulator should ensure that platforms are following the rules, taking robust action to enforce their own policies and removing any illegal content;
  • participants also thought a regulator should be promoting best practice by sharing examples of how best to tackle online hate and abuse; and
  • there were also calls for a greater emphasis on education and awareness-raising to shift negative behaviours amongst offending users, alongside guidance and improvements to platform functionality to help people minimise exposure to online hate (e.g. creating more private circles or filtering out content from non-verified users).

Ofcom says that the findings should not be considered a reflection of any final policy position that Ofcom may adopt when it takes up its role as the online safety regulator.