September 22, 2025
The Department for Science, Innovation and Technology has published a roadmap for trusted third-party AI assurance, setting out how it intends to support the AI assurance market in the UK.
As the roadmap explains, AI assurance is “crucial to ensure that AI systems are developed and deployed responsibly and in compliance with the law. By providing ways to measure, evaluate and communicate the trustworthiness of AI systems, assurance can increase confidence in AI systems, supporting AI adoption and economic growth”. The Government has also identified AI assurance as an area in which the UK can be a world leader, building on its strengths in both the professional services and technology sectors.
However, to achieve its ambition of growing what is currently a relatively small market, the Government has set its sights on removing a number of barriers. The roadmap identifies four in particular:
1. Quality
The roadmap points to a lack of standardised quality benchmarks for providers of AI assurance services. A common way to address this would be to introduce a form of accreditation or process certification. However, the roadmap makes the point that the market is still too nascent to “develop generalisable requirements to certify organisations based on their organisational practices, skills and competencies”. Instead, the Government will focus on professionalising the AI assurance industry and will establish a “multi-stakeholder consortium to support the development of the AI assurance profession”.
2. Skills
To address the difficulty that UK assurance providers report in finding employees with the skills needed to assure AI systems, the Government will work with the consortium to “support the development of a skills and competencies framework for AI assurance”.
3. Information Access
Third-party assurance can only be effective if providers have access to the requisite information about the AI systems themselves. The roadmap states that the consortium will determine information access requirements for AI assurance providers and consider whether to introduce, for example, technical solutions enabling auditor access to AI systems or Government-backed guidelines setting out best practices for information sharing.
4. Innovation
Finally, the Government will establish an AI Assurance Innovation Fund to “develop novel and innovative assurance tools and services to address the risks posed by highly capable AI systems”.
The roadmap is available to read in full on GOV.UK.