Governance of AI Report: UK Government Responds

In August 2023, the House of Commons Science, Innovation and Technology Committee published an interim report exploring the benefits and risks of AI and making nine recommendations to the Government. These included a suggestion that the Government’s approach to AI governance and regulation should address, through domestic policy and international engagement, 12 challenges to the governance of AI:

  1. Bias. As datasets are prepared by humans, there is the potential for them to contain inherent bias.
  2. Privacy. For example, how should AI governance ensure fair use of biometric data for facial recognition by law enforcement?
  3. Misrepresentation. AI expands the opportunities for the creation of fake news (which can damage reputations and undermine democratic elections) and for fraud.
  4. Access to data. The large datasets needed for the best AI could sit with the most powerful companies, leading to a reduction in competition.
  5. Computing power. Similarly, access to the huge and costly computing power needed for powerful AI could be limited to a few organisations.
  6. Lack of transparency. AI could be built in such a way that it is not possible to match its processes to its outputs.
  7. Open source. AI code could be concentrated among a few companies, hampering competition and scrutiny for safety purposes.
  8. Copyright. Some AI models make use of copyright-protected works without permission.
  9. Liability. It may not be clear whether developers or providers of AI should bear the risk for any harms caused by AI.
  10. Employment. AI applications may disrupt the jobs that many people currently do.
  11. International coordination. A coordinated approach to the regulation of AI would be more efficient, yet the EU is currently proposing a different approach from the UK’s under its proposed AI Act.
  12. Existential threat. AI in the sphere of national security presents a potential threat to human life.

The Committee also referred to the Government’s March 2023 white paper, “A pro-innovation approach to AI regulation”, which proposes a principles-based framework for existing UK regulators to guide the future development and use of AI. The Committee raised concerns that, for this approach to work, the UK may need a better-developed central coordinating strategy, and said that the Government should undertake a gap analysis of the UK’s regulators to determine whether they need extra resources or new powers to enforce the framework. The White Paper also states that the Government anticipates introducing a statutory obligation on regulators to have due regard to these principles only when parliamentary time permits. Raising concerns that other jurisdictions such as the EU and the US could steal a march, with their laws and frameworks becoming the default even if they are less effective, the Committee called on the Government to announce its proposed AI legislation in the King’s Speech in November, this being the last chance to do so before the General Election.

In its response, published on 16 November 2023, the Government highlights the work already undertaken to establish a suitable framework for AI regulation, including the establishment of a Central AI Risk Function within Government to monitor existing and emerging risks, and the AI and Digital Hub established within the Digital Regulation Cooperation Forum. It then addresses the nine recommendations made by the Committee. In respect of the 12 governance challenges, the Government states that it agrees with them, noting that many of them (such as bias, liability and copyright) relate to risks arising from or exacerbated by foundation models or frontier AI. The Government sets out the actions taken to date to address these and the other challenges, including the AI Safety Summit held in the UK in November, out of which came the paper on Emerging Processes for Frontier AI Safety (previously reported by Wiggin) and the Bletchley Declaration (previously reported by Wiggin), as well as the Frontier AI Taskforce, the recently announced AI Safety Institute and other initiatives.

On the Committee’s call for AI-specific legislation to be brought forward within this Parliamentary session, which has not happened, the Government states that it does not want to rush to legislate but rather to learn about the capabilities and risks of AI and the potential frameworks for action. It wants to take an evidence-based approach to legislation, referring again to the Summit, the Taskforce, the Institute and the Emerging Processes for Frontier AI Safety paper, as well as to its work with leading frontier AI companies, each of which offers vital insights into foundation models and frontier AI.

The Government also confirms that it will publish its response to the White Paper consultation later this year, which will include its latest assessment of the development of the UK’s AI regulatory framework.

For more information, click here.