AI in newsrooms: Regulator publishes guidance

The independent press regulator, Impress, has published new guidance for journalists and publishers on the use of AI in newsrooms.

The Guidance recognises the growing role that AI is playing in newsrooms, but points to the challenges associated with its use, including the potential for misinformation, disinformation, and bias. According to Impress, the Guidance has been written “to highlight the common pitfalls of AI and to help publishers consider whether to adopt AI tools into their workflows”.

According to Impress, AI must be used in a way that is “accurate, transparent, respects privacy, and does not discriminate. Its use must also consider the rights of content creators, like journalists and photographers and should not knowingly infringe their copyright. Any use of AI tools in a newsroom requires the balancing of human editorial judgement with machine efficiency to tell the stories that really matter”.

The Guidance develops these themes in detail and sets out the following headline pieces of advice:

  1. Publishers should not rely solely on AI tools for research purposes;
  2. Publishers should exercise human editorial review to ensure the accuracy of the content produced by an AI tool;
  3. Publishers should not use AI tools to generate photorealistic images, videos or speech to depict real-life people or events;
  4. Publishers using AI tools to generate content should ensure its use is accurate and does not knowingly mislead users. Publishers should consider how to attribute their use of AI in an appropriate way;
  5. Publishers should be aware of the plagiarism risk when using AI tools to generate content and should take reasonable steps to identify specific examples of plagiarism and attribute it accordingly;
  6. Publishers should be aware of potential prejudicial or pejorative biases in AI generated content;
  7. Publishers should not input confidential, sensitive or commercial/proprietary information into AI tools unless using secure in-house AI models with strict data protection measures;
  8. Publishers should be publicly transparent about their use of AI tools;
  9. Publishers should ensure human editorial review and clear labelling of AI generated content to avoid materially misleading users.

The Guidance is available to read in full on the Impress website.