Information Commissioner’s Office consults on known security risks exacerbated by AI

In a blog post forming part of its ongoing consultation on developing a framework for auditing AI, the ICO notes that using AI to process any personal data has important implications for an organisation’s security risk profile.

Some of these implications arise from the introduction of new types of risk, such as adversarial attacks on machine learning models. In this blog post, however, the ICO focuses on the way AI can adversely affect security by making known risks worse and more challenging to control.

The ICO says that information security is a key component of its AI Auditing Framework, but is also central to its work as the information rights regulator. The ICO is planning to expand its general security guidance to take into account the additional requirements set out in the General Data Protection Regulation (GDPR). While this guidance will not be AI-specific, it will cover a range of topics relevant to organisations using AI, including software supply chain security and the increasing use of open-source software.

The ICO is particularly keen to hear views on this topic so that it can integrate them into both the framework and the guidance. Specifically, it would appreciate insights on the following questions:

  1. How, and to what degree, are organisations currently inspecting externally maintained software code for potential vulnerabilities?
  2. Are there any other well-known security risks that AI is likely to exacerbate? If so, which ones, and what effect will AI have?
  3. What should any additional ICO security guidance cover?

To access the blog post in full, click here.
