UK AI regulation: Government publishes outcome of its 2023 consultation paper

In its March 2023 paper, “A pro-innovation approach to AI regulation,” the UK government proposed a regulatory framework based on five principles to guide and inform the responsible development and use of AI in all sectors of the economy: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.  The principles would have a non-statutory basis and would be implemented by existing UK regulators.  The paper anticipated the possibility of introducing a statutory duty on regulators to have regard to the principles but recognised, at the same time, that regulation is not always the most effective way to support responsible innovation.

The Government has now published the feedback received on the paper, together with its response.  It plans to proceed with its principles-based, regulator-led approach, and the paper refers to instances of regulators already adopting it: the review of foundation models by the Competition & Markets Authority (“CMA”) (previously reported by Wiggin) and the updated guidance on data protection and AI from the Information Commissioner’s Office (“ICO”) (see also the ICO’s Call for Evidence on Generative AI, as previously reported by Wiggin).  Existing laws will apply to AI within the framework of the principles, and the paper refers, for example, to the rules on automated decision-making under the UK GDPR (rules which the Government is proposing to relax under the proposed Data Protection and Digital Information Bill).  The Government has also published new initial guidance to regulators on how to apply the principles within their existing remits, and the paper refers to the Government’s earlier paper “Emerging processes for frontier AI safety” (previously reported by Wiggin), to be updated by the end of 2024, as guidance on how to address, for example, the principle of transparency.

As to whether the principles-based approach is adequate to address AI risks, the paper states that some mandatory measures will eventually be required to address potential AI-related harms.  In particular, there may be a case for binding requirements on those developing highly capable general-purpose AI, a technology which poses potentially significant risks and does not necessarily fall within existing rules and laws, to ensure they are accountable for making the technology safe.  This may involve creating or allocating new regulatory powers.  In assessing whether such measures are necessary, the Government will also specifically consider the need to address the fair and effective allocation of legal liability across the AI value chain.

The Government has not ruled out introducing a statutory duty on regulators to have regard to the principles after a period of non-statutory implementation.  In the meantime, it has asked several key regulators to publish an update on their strategic approaches to AI by 30 April 2024.

There will be a central function in Government to coordinate, monitor and adapt the new regulatory framework.  That work has already started with risk monitoring and assessment activities within the Department for Science, Innovation and Technology (“DSIT”).  The AI Safety Institute will lead evaluations and safety research in Government, in collaboration with partners globally, and the Government will create and consult on a cross-economy AI risk register.  The central function will also support regulators to interpret and apply the principles, and £10m will be spent on developing regulators’ AI expertise.  A new pilot regulatory service will be hosted by the Digital Regulation Cooperation Forum to make it easier for AI innovators to navigate the regulatory landscape.

The Government will monitor and evaluate the new regulatory framework and will consult in the spring on its proposed method of assessing the framework, including the proposed metrics and data sources.

As for non-regulatory tools to help businesses embed the AI principles into their processes, the paper discusses the continued development and adoption of technical standards and assurance techniques for AI.  The Centre for Data Ethics and Innovation, which published a Portfolio of AI Assurance Techniques in 2023, will now be renamed the Responsible Technology Adoption Unit, sitting within DSIT, and will develop tools and techniques to enable the responsible adoption of AI in the private and public sectors.  DSIT will also publish an “Introduction to AI Assurance.”

The Government will continue its global coordination on AI with the UN, the Council of Europe, the OECD and the G7, among others (previously reported by Wiggin).

Code of Practice on AI and Copyright

In June 2023, the Government committed to developing a code of practice on copyright and AI, with the involvement of both the AI and creative sectors, aiming to make licences for text and data mining more available (the background to which was previously reported by Wiggin).  However, the consultation response paper published in February states that, despite the efforts of the IPO working group, the group will not be able to agree an effective voluntary code.  Instead, the Government has stated that it will lead a period of engagement with the AI and rightsholder sectors to agree an approach that allows both sectors to grow.  This will include greater transparency from AI developers on data inputs and the attribution of outputs, and exploring, in particular, mechanisms enabling rightsholders to understand whether content they produce is used as an input into AI models.  The paper states that the Government will set out further proposals on the way forward soon.
