January 26, 2026
Ofcom has announced that it has opened an investigation into Novi Ltd, the provider of an ‘AI character companion chatbot service’, to assess whether the service has complied with its obligation to use highly effective age assurance measures.
The announcement follows a series of high-profile incidents involving online chatbots, including cases of chatbots impersonating deceased people and encouraging users to self-harm. In the United States, a number of lawsuits have been brought against major AI companies in relation to such incidents. Meanwhile, in the UK, the Secretary of State for Science, Innovation and Technology, Liz Kendall, has singled out chatbots as an area of particular concern.
For its part, Ofcom has recently set out its approach to regulating AI chatbots, making clear which services are covered by the Online Safety Act 2023 (OSA) and reminding service providers of their obligations. It has also referred to a previous open letter to online service providers which set out in more detail how the OSA applies to generative AI and chatbots.
Importantly, Ofcom has sought to remind service providers that AI-generated content is highly likely to fall within the scope of the OSA and Ofcom’s regulatory ambit, even if it is not explicitly mentioned in the Act itself. Whilst some have pointed to this omission as evidence of the OSA already being out of date (as discussed, for example, here), Ofcom is clear that “any AI-generated content shared by users on a user-to-user service is classed as user-generated content and would be regulated in the same way as content generated by humans. For example, a social media post that includes harmful imagery produced by AI, is regulated in the same way as similar content created by a human”.
How effective Ofcom will be at regulating the proliferation of harmful AI-generated content remains to be seen, although the Secretary of State warned last year that Ofcom risks losing public trust if it fails to use its powers to tackle online harms. As for chatbots in particular, she has said that she is “really worried about [them]” and warned that “if chatbots aren’t included or properly covered by the legislation, and we’re working through that now, then they will have to be”.