Information Commissioner’s Office publishes guidance on Data Protection Impact Assessments (DPIA) and AI

Simon Reader, Senior Policy Officer at the ICO, has published a blog post setting out some of the issues organisations should consider when carrying out a DPIA for the processing of personal data in AI systems. The blog post forms part of the ICO’s Call for Input on developing its framework for auditing AI.

Mr Reader explains that, under the GDPR, the use of AI to process personal data will usually trigger the legal requirement to complete a DPIA. If the assessment identifies a residual high risk to individuals that cannot be reduced, the data controller must consult the ICO before the processing begins.

A DPIA needs to describe the nature, scope, context and purposes of any processing of personal data. It must make clear how and why AI is going to be used to process the data. Mr Reader explains that it will need to set out:

  • how data will be collected, stored and used;
  • the volume, variety and sensitivity of the input data;
  • the nature of the data controller’s relationship with data subjects; and
  • the intended outcomes for individuals or wider society and for the data controller.

Mr Reader says that a DPIA should be undertaken at the earliest stages of project development and should feature, at a minimum, the following key components (a short illustrative sketch follows the list):

  • a systematic description of the processing activity: this should include data flows and the stages at which AI processes and automated decisions may have effects on individuals. Where AI systems are partly or wholly outsourced to external providers, both organisations should assess whether joint controllership has been established under Article 26 of the GDPR and, if so, collaborate in the DPIA process as appropriate;
  • assessing necessity and proportionality: the deployment of an AI system to process personal data must be driven by the proven ability of that system to fulfil a specific and legitimate purpose, not by the mere availability of the technology. By assessing necessity in a DPIA, an organisation can evidence that this purpose could not reasonably be accomplished in another way. When assessing proportionality, the interests of the organisation must be weighed against the rights and freedoms of individuals;
  • identifying risks to rights and freedoms: the use of personal data in the development and deployment of AI systems may pose risks not only to individuals’ privacy and data protection rights but also to other rights and freedoms. Data controllers should therefore consider any relevant legal frameworks beyond data protection;
  • measures to address the risks: data protection officers and other information governance professionals should be involved in AI projects from the earliest stages. Once measures have been introduced to mitigate the identified risks, the DPIA should document the residual level of risk posed by the processing. If that residual risk remains high, it must be referred to the ICO for prior consultation; and
  • a “living” document: a DPIA should be kept under regular review and re-assessed should the nature, scope, context or purpose of the processing alter for any reason.
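
Purely by way of illustration (this structure is not part of the ICO guidance, and every name below is an assumption introduced for the example), the key components above could be captured in a structured record that is versioned and revisited over time:

```python
# Illustrative sketch only: one hypothetical way to record the key DPIA
# components summarised above. None of these field names come from the
# ICO guidance; they are assumptions made for this example.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DPIARecord:
    # Systematic description of the processing activity
    processing_description: str
    data_flows: list[str]                 # how data is collected, stored and used
    automated_decision_points: list[str]  # stages where AI outputs may affect individuals

    # Necessity and proportionality
    purpose: str                          # the specific, legitimate purpose pursued
    necessity_justification: str          # why no other reasonable way exists
    proportionality_assessment: str       # organisational interests vs individual rights

    # Outsourcing / joint controllership (Article 26 GDPR)
    joint_controllers: list[str] = field(default_factory=list)

    # Risks to rights and freedoms, and mitigating measures
    identified_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    residual_risk: str = "low"            # level of risk remaining after mitigation

    # A DPIA is a "living" document: record when it must next be revisited
    next_review: date = field(default_factory=date.today)

    def requires_ico_consultation(self) -> bool:
        """Residual high risk that cannot be reduced must be referred to
        the ICO for prior consultation before processing begins."""
        return self.residual_risk == "high"
```

In this sketch, requires_ico_consultation mirrors the prior-consultation point above (residual high risk that cannot be reduced must be referred to the ICO), while next_review reflects the expectation that the assessment is a “living” document to be revisited whenever the processing changes.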

The ICO is interested to hear how organisations are approaching DPIAs in the context of AI and is seeking views and feedback. The blog post is available in full on the ICO website.
