Insights Online Safety Act 2023: Ofcom explains how size and risk matter in their approach to online safety


On 9 November 2023, Ofcom issued a consultation on the illegal content duties under the Online Safety Act (“OSA”) (previously reported by Wiggin), which runs until 23 February 2024. The OSA lists a number of “priority offences”: user-to-user service providers must take steps to prevent users from encountering content which amounts to any of those offences, and search services must minimise the risk of users encountering such content. The consultation, which runs to hundreds of pages, seeks feedback on Ofcom’s Register of Risks (its analysis of the causes and impacts of online harm), guidance for conducting risk assessments (including record-keeping and review), how risks are to be mitigated (to be captured in a Code of Practice), guidance on how to make illegal content judgements, and Ofcom’s approach to enforcement. It is worth noting that the Codes of Practice are voluntary, but service providers that follow them will be deemed to have complied with the relevant OSA duties.

On 30 January 2024, Ofcom published a short document responding to questions raised during the consultation as to how it will apply the rules of the OSA to services which are larger and/or pose greater risk. Ofcom reiterates its proportionate approach, whereby a core set of measures should apply to all service providers while more demanding measures will apply to services that are larger and/or riskier. Ofcom proposes to define a large service as one which has more than seven million monthly UK users, roughly 10% of the UK population, an approach similar to that adopted under the EU Digital Services Act for the definition of “very large online platforms”.

Ofcom gives the following examples: the requirement to allow users to block other users will apply to services that are both large and risky; an annual Board review of a service’s risk management activities will apply to large services even if they have not identified any significant risks; and the measures proposed in Ofcom’s Codes of Practice to combat grooming, and the use of automated hash-matching (matching images against a database of known illegal images) to detect child sexual abuse material (“CSAM”), will apply even to small services where there is a high risk of grooming or CSAM. The justification for reserving the more demanding measures for large services is uncertainty as to whether a given measure would materially reduce harm on a small service, and whether the costs and inconvenience to small services might be disproportionate to the harm addressed. Ofcom does not rule out amending its Codes in the future to recommend measures for a wider range of services.
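For readers less familiar with the technique, hash matching in this context means computing a digital fingerprint of an uploaded image and comparing it against a database of fingerprints of known illegal images. The sketch below illustrates that lookup only: it is not Ofcom’s or any provider’s implementation, the function and database names are invented for illustration, and real deployments use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding rather than the exact cryptographic hash used here.

```python
import hashlib

# Illustrative database of hashes of known illegal images, as would be
# supplied by a hash-list provider; the value below is a placeholder.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_image(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-hash database."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_IMAGE_HASHES
```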

As for how risk is to be determined, that is the purpose of the OSA requirement to conduct a risk assessment, and Ofcom is seeking views on its proposed risk assessment guidance.

It is not clear which parts of the lengthy consultation document Ofcom is seeking to address and clarify by publishing this note. A major part of the consultation (370 pages) explores the proposed mitigations for the illegal harms which may have been identified during a risk assessment. When finalised, these will be written into a Code of Practice, drafts of which are annexed to the consultation (another 100+ pages). The consultation explains that the mitigations (which include human and automated moderation, user reporting, default settings, user controls and so on) will differ depending on whether the service provider is large, whether its risk assessment puts it at medium or high risk for a specific type of harm, and/or whether it is “multi-risk” (medium or high risk for at least two of the 15 priority harms set out in the OSA). As reported by Wiggin previously, that document already provides a proposed definition of a large service (an average user base of more than seven million per month in the UK) and, in fact, proposes a second size category (700,000 monthly UK users) to be considered when determining what mitigations might be reasonable for particular harms.
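To illustrate how the size and risk categories described above interact, the sketch below expresses them as simple checks. It reflects only the thresholds quoted in this note (seven million and 700,000 monthly UK users, and “multi-risk” meaning medium or high risk for at least two priority harms); it is not Ofcom’s own methodology, and all names are invented for the example.

```python
from dataclasses import dataclass, field

LARGE_SERVICE_THRESHOLD = 7_000_000   # monthly UK users ("large service")
SECONDARY_SIZE_THRESHOLD = 700_000    # additional size category in the consultation

@dataclass
class ServiceProfile:
    monthly_uk_users: int
    # Risk-assessment outcome per priority harm, e.g. {"grooming": "high", "fraud": "low"}
    harm_risk_levels: dict = field(default_factory=dict)

    @property
    def is_large(self) -> bool:
        return self.monthly_uk_users > LARGE_SERVICE_THRESHOLD

    @property
    def is_multi_risk(self) -> bool:
        # "Multi-risk": medium or high risk for at least two priority harms.
        elevated = [harm for harm, level in self.harm_risk_levels.items()
                    if level in ("medium", "high")]
        return len(elevated) >= 2


# Example: a service with 1.2 million monthly UK users assessing itself at high
# risk of grooming and medium risk of fraud is not "large" but is multi-risk.
example = ServiceProfile(1_200_000, {"grooming": "high", "fraud": "medium"})
assert not example.is_large and example.is_multi_risk
```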

For example, in relation to automated moderation, the consultation proposes that hash matching, used to assess whether content consists of CSAM, should be deployed by: large services which are at medium or high risk of image-based CSAM based on their risk assessment; other services which are at high risk of image-based CSAM based on their risk assessment and have more than 700,000 monthly UK users; and file-sharing or file-storage services which are at high risk of image-based CSAM and have more than 700,000 monthly UK users. Similar categories of services would also be required to implement URL detection to determine whether content contains a CSAM URL, and large services which are at medium or high risk of fraud based on their risk assessment should implement standard keyword detection technology to detect articles for use in fraud (e.g. stolen personal or financial credentials).
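Expressed as a simple check, the three categories of service described above for hash matching might look as follows. This is a paraphrase of the consultation proposal as summarised in this note, not the draft Code of Practice text, and the function and parameter names are invented for illustration.

```python
def hash_matching_expected(monthly_uk_users: int,
                           csam_risk: str,
                           is_file_sharing_or_storage: bool) -> bool:
    """Rough rendering of the proposed criteria for image-based CSAM hash matching.

    csam_risk is the service's own risk-assessment outcome for image-based
    CSAM: "low", "medium" or "high".
    """
    is_large = monthly_uk_users > 7_000_000
    over_secondary_threshold = monthly_uk_users > 700_000

    # 1. Large services at medium or high risk of image-based CSAM.
    if is_large and csam_risk in ("medium", "high"):
        return True
    # 2. Other services at high risk of image-based CSAM with more than
    #    700,000 monthly UK users.
    if csam_risk == "high" and over_secondary_threshold:
        return True
    # 3. File-sharing or file-storage services at high risk of image-based
    #    CSAM with more than 700,000 monthly UK users.
    if is_file_sharing_or_storage and csam_risk == "high" and over_secondary_threshold:
        return True
    return False
```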
