December 21, 2020
The Government’s response sets out how the proposed legal duty of care on online companies will work in practice and gives them new responsibilities towards their users. The safety of children is at the heart of the measures. Under the new rules, social media sites, websites, apps and other services which host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal content such as online child abuse, terrorist material and suicide content. The Government is also progressing work with the Law Commission on whether the promotion of self-harm should be made illegal.
The most popular social media sites, with the largest audiences and high-risk features, will need to go further by setting and enforcing clear terms and conditions which explicitly state how they will handle content that is legal but could cause significant physical or psychological harm to adults. This includes dangerous disinformation and misinformation about coronavirus vaccines, and will help bridge the gap between what companies say they do and what happens in practice.
The Government has confirmed Ofcom as the regulator with the power to fine companies failing in their duty of care up to £18 million or 10% of annual global turnover, whichever is higher. It will have the power to block non-compliant services from being accessed in the UK.
The legislation also includes provisions to impose criminal sanctions on senior managers. The Government says it will not hesitate to bring these powers into force should companies fail to take the new rules seriously, for example by not responding fully, accurately and in a timely manner to information requests from Ofcom. This power would be introduced by Parliament via secondary legislation; reserving a power to compel compliance follows similar approaches in other sectors, such as financial services regulation.
The Government plans to bring the laws forward in an Online Safety Bill next year. The new Regulations will apply to any company in the world that hosts user-generated content accessible to people in the UK, or that enables them to interact privately or publicly with others online.
In-scope services include social media, video sharing and instant messaging platforms, online forums, dating apps and commercial pornography websites, as well as online marketplaces, peer-to-peer services, consumer cloud storage sites and video games which allow online interaction. Search engines will also be subject to the new Regulations.
The legislation will include safeguards for freedom of expression and pluralism online, protecting people’s rights to participate in society and engage in robust debate.
Online journalism from news publishers’ websites will be exempt, as will reader comments on such sites. Specific measures will be included in the legislation to make sure journalistic content is still protected when it is reshared on social media platforms.
Companies will have different responsibilities for different categories of content and activity, under an approach focused on the sites, apps and platforms where the risk of harm is greatest. However, all companies will need to take appropriate steps to address illegal content and activity such as terrorism and online child abuse. They will also be required to assess the likelihood of children accessing their services and, if so, provide additional protections for them. They could do this, for example, by using age assurance tools to ensure children cannot access platforms which are not suitable for them.
The Government will make clear in the legislation the harmful content and activity that the Regulations will cover and Ofcom will set out how companies can fulfil their duty of care in codes of practice.
A small group of companies with the largest online presences and high-risk features, likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1. These companies will need to assess the risk of legal content or activity on their services with “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”. They will then need to make clear what type of “legal but harmful” content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently.
All companies will need mechanisms allowing people to easily report harmful content or activity and to appeal the takedown of content. Category 1 companies will also be required to publish transparency reports about the steps they are taking to tackle online harms.
Examples of Category 2 services are platforms which host dating services or pornography, and private messaging apps. Fewer than 3% of UK businesses will fall within the scope of the legislation, and the vast majority of in-scope companies will be Category 2 services.
Financial harms, including fraud and the sale of unsafe goods, will be excluded from this framework. This will keep the Regulations clear and manageable for businesses, focus action where it will have the most impact, and avoid duplicating existing regulation.
Where appropriate, lower-risk services will be exempt from the duty of care to avoid putting disproportionate demands on businesses. This includes exemptions for retailers who only offer product and service reviews and software used internally by businesses. Email services will also be exempt.
Some types of advertising, including organic and influencer adverts that appear on social media platforms, will be in scope. Adverts placed on an in-scope service through a direct contract between an advertiser and an advertising service, such as Facebook or Google Ads, will be exempt because this is covered by existing regulation.
As for private communications, the Government will set out how the Regulations will apply to communication channels and services where users expect a greater degree of privacy, for example online instant messaging services and closed social media groups, which remain in scope.
Companies will need to consider the impact on user privacy and ensure they understand how their systems and processes affect it. Firms could, for example, be required to make services safer by design, such as by limiting the ability of anonymous adults to contact children.
The Government has also published interim codes of practice on online child exploitation and abuse, and on terrorist content and activity online.
The legislation will enable Ofcom to require companies to use technology to monitor, identify and remove tightly defined categories of illegal material relating to online child exploitation and abuse. Recognising the potential impact on user privacy, the Government will ensure this is only used as a last resort where alternative measures are not working, and it will be subject to stringent legal safeguards to protect user rights.