All Party Parliamentary Group (APPG) on Future of Work publishes report, “The New Frontier: Artificial Intelligence at work”, calling for legislation to regulate AI at work

In May 2021, the APPG on the Future of Work launched an inquiry, prompted by growing public concern about AI and surveillance in the workplace and by the Institute for the Future of Work’s report “The Amazonian Era”. The inquiry, which ran until July 2021, examined the use and implications of surveillance and other AI technologies at work, and considered practical policy solutions to meet the challenges and opportunities it found.

The report outlines the key findings from the inquiry and makes recommendations based on the evidence that the APPG considered.

Essentially, the inquiry found that AI is transforming work and working lives across the country in ways that have plainly outpaced, or that sidestep, the existing regulatory regimes. It found that, since the COVID-19 pandemic began, there has been a marked increase in the use of algorithmic surveillance, management and monitoring technologies, which has fuelled public concern. Indeed, the inquiry found that the use of automated monitoring, target-setting and performance-assessment technologies is having a negative impact on the mental and physical wellbeing of workers.

It is therefore clear, the APPG says, that the Government must bring forward robust proposals for AI regulation to meet these challenges.

The APPG says that its recommendations are aimed at ensuring the AI ecosystem is genuinely human-centred, principles-driven and accountable. They centre on a proposal for an Accountability for Algorithms Act. The focus is on changes to work, but the APPG says that its recommendations also inform the wider debate about AI governance and regulation as part of the UK’s AI Strategy.

The key recommendations are:

  • an Accountability for Algorithms Act: this would establish a new corporate and public sector duty to undertake, disclose and act on pre-emptive Algorithmic Impact Assessments (AIA); the duty would apply from the earliest stage of design and deployment of algorithmic systems at work and require rigorous ex ante assessment and ex post facto evaluation of risks and other impacts on work and workers; AIAs would always include a dedicated equality impact assessment;
  • updating digital protection: the proposed Act would increase protection for workers against the adverse impacts of powerful but invisible algorithmic systems, including an easy-to-access right to: (i) a full explanation of the purpose, outcomes and significant impacts of algorithmic systems at work; (ii) a summary AIA; and (iii) a means of redress; it would also include a right to be “involved” in shaping the design and use of algorithmic systems at work; these new rights would be set out in a dedicated schedule to the Act: “Worker Rights for the age of AI”;
  • enabling a partnership approach: to apply the principle of collaboration in the 2021 Digital Regulation Plan and recognise the collective dimension of data processing, additional collective rights are needed for unions and specialist third sector organisations to exercise the new duties on members’ or other groups’ behalf; this could be further supported by the Government establishing an AI Partnership Fund to allow the TUC to build on and diversify the work of its AI Working Group and to develop training that gives workers the tools and knowledge required to interact with, comprehend and challenge the use of AI at work; the proposed approach also offers opportunities for skills development and investment in collaboration with the private sector;
  • enforcement in practice: the joint Digital Regulation Cooperation Forum (DRCF) should be expanded with new powers to create certification schemes, suspend use of certain AI or impose terms, and issue statutory guidance to supplement the work of individual regulators and sector-specific standards; the DRCF should be equipped and funded to run regulatory sandboxes to pilot new approaches to promote equality as part of the AIAs, as well as to rigorously enforce existing and new obligations; and
  • supporting human-centred AI: the principles of “Good Work”, which incorporate fundamental rights and freedoms under national and international law and are set out in the Good Work Charter at Annex 1 to the report, should be recognised as fundamental values guiding the development and application of a human-centred AI Strategy; this will ensure that the AI Strategy serves the public interest; in parallel with the AI Strategy, the Cabinet Office should initiate a Work 5.0 Strategy to address the challenges and opportunities of automation and ensure a human-centred transformation of work across the UK.

To access the APPG’s report, click here.