Guidance published on responsible use of AI in HR and recruitment processes

The Department for Science, Innovation & Technology, alongside a number of other organisations including the Information Commissioner’s Office, the Equality and Human Rights Commission, and the Ada Lovelace Institute, has published guidance for the responsible use of AI in HR and recruitment processes.

The guidance is a comprehensive document, setting out in non-technical terms how businesses can identify and mitigate the risks associated with the use of AI in recruitment and hiring processes. These risks include unfair bias and discrimination against applicants, as well as a ‘risk of digital exclusion’ for applicants who may not be proficient in, or have access to, technology. Examples given include job description review software that may produce discriminatory wording, chatbots engaging with candidates that have been trained on irrelevant or insufficient data, headhunting software and CV-matching tools that perpetuate existing biases, and video interviewing tools that may result in discriminatory outcomes.

Whilst the guidance’s principal focus is on the use of AI in recruitment and HR processes, many of its recommendations and best-practice guides will be applicable, and of interest, to any organisation planning to integrate AI systems into its business.

The guidance is structured in two parts, focusing on ‘assurance mechanisms’ for both the procurement of AI systems and their deployment.

On the procurement side, the guidance details the key considerations that an organisation should bear in mind before it goes out to tender. These include matters applicable to any AI system, such as developing a clear vision of its desired purpose and output, understanding its functionality, and considering how it can be integrated into existing processes. In the specific context of HR and recruitment, organisations are encouraged to consider the extent to which the AI system allows them to meet their obligations under the Equality Act 2010: ensuring, for example, that reasonable adjustments can be made for applicants with disabilities, that new barriers are not created for applicants with protected characteristics, and that existing biases are not amplified at scale.

To address such matters, the guidance advises the implementation of Algorithmic Impact Assessments, Equality Impact Assessments and Data Protection Impact Assessments, and the development of an effective AI Governance Framework. It also identifies information that should ideally be obtained from potential suppliers, such as asking them to conduct a ‘bias audit’ of their system or to produce a ‘model card’ – a standardised reporting tool for capturing key facts about an AI model.

The guidance moves on to consider best practice once a system has been selected and is about to be deployed. This includes conducting a thorough pilot, which will not only ensure that employees understand how the system works but may also reveal any bias or inaccuracy that has not yet been detected. Once the system has been deployed, it is critical that continuous monitoring takes place to confirm that it performs as intended, and the guidance recommends that adequate measures are in place to ensure that potential applicants understand how the system is being used and know how to raise any concerns.

The guidance can be read in full here.