ICO sets out eight questions that developers and users of generative artificial intelligence (AI) and large language models (LLMs) need to ask


In a blog post, Stephen Almond, Executive Director, Regulatory Risk at the ICO, has said that, following the publication of a letter signed by almost two thousand academics and technology experts calling for a six-month moratorium on the development of generative AI and LLMs (such as ChatGPT), it is important to “take a step back” and consider how personal data is being used in this context.

Mr Almond notes that while the technology is novel, the principles of data protection law remain the same. In Mr Almond’s view, there is “a clear roadmap for organisations to innovate in a way that respects people’s privacy”.

Mr Almond says that organisations developing or using generative AI should consider their data protection obligations from the outset, taking a data protection by design and by default approach, and remembering that data protection law still applies even where the personal data being processed comes from publicly accessible sources.

Mr Almond says that those developing or using generative AI that processes personal data should be asking the following questions:

  • What is the lawful basis for processing personal data? Those processing personal data must identify an appropriate lawful basis, such as consent or legitimate interests.
  • Are you a controller, joint controller or a processor? If you are developing generative AI using personal data, you have obligations as the data controller. If you are using or adapting models developed by others, you may be a controller, joint controller or a processor.
  • Have you prepared a Data Protection Impact Assessment (DPIA)? You must assess and mitigate any data protection risks via the DPIA process before you start processing personal data. Your DPIA should be kept up to date as the processing and its impacts evolve.
  • How will you ensure transparency? You must make information about the processing publicly accessible unless an exemption applies. If it does not take disproportionate effort, you must communicate this information directly to the individuals the data relates to.
  • How will you mitigate security risks? In addition to personal data leakage risks, you should consider and mitigate risks of model inversion and membership inference, data poisoning and other forms of adversarial attacks.
  • How will you limit unnecessary processing? You must collect only the data that is adequate to fulfil your stated purpose. The data should be relevant and limited to what is necessary.
  • How will you comply with individual rights requests? You must be able to respond to people’s requests for access, rectification, erasure or other information rights.
  • Will you use generative AI to make solely automated decisions? If so, and these decisions have legal or similarly significant effects on individuals (e.g. major healthcare diagnoses), those individuals have further rights under Article 22 of the UK GDPR.

Mr Almond warns that the ICO will be asking these questions of organisations that are developing or using generative AI, and will take action where organisations fail to follow the law or to consider the impact on individuals.

To read the blog post in full, click here.