Cyber Security of AI: Government publishes Call for Views on voluntary Code of Practice

The Department for Science, Innovation and Technology has published a Call for Views on the Cyber Security of AI. It proposes the establishment of a voluntary Code of Practice and Global Standard which “will set baseline requirements for all AI technologies and distinguish actions that need to be taken by different stakeholders across the AI supply chain”.

According to the Call for Views, the AI Cyber Security Code of Practice reflects the Government’s wider strategy to embed a ‘secure by design approach’ to the development of AI models and systems. The Government states that the Code “sets out practical steps for stakeholders across the AI supply chain, particularly Developers and System Operators, to protect end-users. The Code applies to all AI technologies and will help ensure that security is effectively built into AI models and systems as well as across the AI lifecycle”. It also states that it has been designed in line with the Government’s pro-innovation approach to AI, allowing “flexibility via a principles-based approach”. Those 12 Principles are as follows:

  1. Raise staff awareness of threats and risks;
  2. Design your system for security as well as functionality and performance;
  3. Model the threats to your system;
  4. Ensure decisions on user interactions are informed by AI-specific risks;
  5. Identify, track and protect your assets;
  6. Secure your infrastructure;
  7. Secure your supply chain;
  8. Document your data, models and prompts;
  9. Conduct appropriate testing and evaluation;
  10. Communication and processes associated with end-users;
  11. Maintain regular security updates for AI models and systems; and
  12. Monitor your system’s behaviour.

Each Principle sets out more detailed requirements expected of organisations and specifies the stakeholders to which each requirement is likely to apply. The Call for Views also states that the Code is expected to be reviewed and updated as necessary to reflect changes in AI technology and relevant regulatory regimes.

Feedback is invited from “global stakeholders” on the interventions outlined in the Code, as the Government intends to submit the updated Code to the ‘Secure AI Technical Committee in the European Telecommunications Standards Institute’ in September 2024 to help inform the development of a global standard.

Commenting on the Call for Views, Rosamund Powell, Research Associate at the Alan Turing Institute, said: “AI systems come with a wide range of cyber security risks which often go unaddressed as developers race to deploy new capabilities. The code of practice released today provides much-needed practical support to developers on how to implement a secure-by-design approach as part of their AI design and development process.

“Plans for it to form the basis of a global standard are crucial given the central role international standards already play in addressing AI safety challenges through global consensus. Research highlights the need for inclusive and diverse working groups, accompanied by incentives and upskilling for those who need them, to ensure the success of global standards like this.”

The Call for Views is open until 10 July 2024 and can be read in full here.