Information Commissioner’s Office publishes Guidance on AI and data protection

The guidance forms part of the ICO’s framework for auditing AI. It is aimed at two audiences:

  • those with a compliance focus, such as data protection officers (DPOs), general counsel, risk managers, senior management, and the ICO’s own auditors; and
  • technology specialists, including machine learning experts, data scientists, software developers and engineers, and cybersecurity and IT risk managers.

The guidance clarifies how to assess the risks to rights and freedoms that AI can pose from a data protection perspective, and the appropriate measures to implement to mitigate them.

While data protection and AI ethics overlap, the guidance does not provide generic ethical or design principles for the use of AI. Instead, it is structured around the data protection principles, as follows:

  • part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
  • part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
  • part three addresses data minimisation and security; and
  • part four covers compliance with individual rights, including rights related to automated decision-making.

The guidance explains that, under the accountability principle, organisations are responsible both for complying with data protection law and for demonstrating that compliance in any AI system they use. In an AI context, accountability requires an organisation to:

  • be responsible for the compliance of its system;
  • assess and mitigate its risks; and
  • document and demonstrate how the system is compliant and justify the choices made.

These issues should be considered as part of the DPIA for any system intended to be used. Organisations should note that, in the majority of cases, they are legally required to complete a DPIA if they use AI systems that process personal data.

The ICO says that it will keep focusing on AI developments and their implications for privacy by building on this foundational guidance, and by offering tools that promote privacy by design to those developing and using AI. To ensure the guidance stays relevant, the ICO will consult with those applying it in practice and will update it in line with emerging developments. The guidance is available on the ICO’s website.