The ICO says in a blog post that applications of AI are “starting to permeate many aspects of our lives”. The post acknowledges the benefits that AI can bring to organisations and individuals, but recognises that there are risks too.
The ICO says that the GDPR’s “considerable focus” on new technologies reflects the concerns of legislators in the UK and throughout Europe about the personal and societal effect of powerful data-processing technology, such as profiling and automated decision-making.
In the ICO’s view, the GDPR strengthens individuals’ rights when it comes to the way their personal data is processed by technologies such as AI.
Further, the law requires organisations to build in data protection by design and to identify and address risks at the outset by completing data protection impact assessments. “Privacy and innovation must sit side-by-side. One cannot be at the expense of the other”.
This is why, the ICO says, “AI is one of our top three strategic priorities”. Accordingly, the ICO has put together a team from the Technology Policy and Innovation Directorate to develop the ICO’s first auditing framework for AI.
The aim of the framework is to provide “a solid methodology to audit AI applications and ensure they are transparent, fair; and to ensure that the necessary measures to assess and manage data protection risks arising from them are in place”. The framework will also inform future guidance for organisations to support the continuous and innovative use of AI within the law.
The ICO is asking for input on the “genuine challenges arising from the adoption of AI”. Accordingly, it will shortly publish another article outlining the proposed framework structure, its key elements and focus areas. The ICO says that it will use the feedback to inform a formal consultation paper, which it expects to publish by January 2020. The final AI auditing framework and the associated guidance for organisations are on track for publication by spring 2020.