We hear daily about the explosion of potential automation driven by AI across the recruitment market. Purpose-built AI for recruitment is already here, transforming how organisations engage with and hire talent. Clearly, speeding up processes and driving productivity could be a massive win for applicants, organisations and recruitment professionals alike. Because automation removes human input, and with it the thousands of potential biases individuals hold, it can appear to be the ideal way of removing bias from the hiring process. But how confident can we be that AI will not perpetuate pre-existing bias in the recruitment process?

I asked ChatGPT “…is it impossible to ensure that data used to train AI is 100% without bias?”

The answer I received was “…you are correct that it is impossible to ensure that data used to train AI is completely without bias. This is because AI algorithms are only as unbiased as the data used to train them, and any data set can contain implicit or explicit biases that are difficult to identify and remove completely.”

So this is not simple, and we must use AI-powered recruitment tools with caution. There are thousands of potential biases that we all carry into every part of our lives, and we have struggled to eliminate them from our traditional, pre-digital ‘analogue’ recruitment processes. It therefore looks like an almost impossible task to ensure that the data we use to train AI does not perpetuate bias in automated recruitment processes.

Instances of Bias in AI Recruitment Processes

Claims that AI will debias the recruitment process are exaggerated and often misleading. In fact, AI’s track record in this area has been worrying: there are already a number of high-profile examples where AI tools have introduced unexpected bias into the recruitment process. As a 2022 research study from Cambridge University points out, AI is not a silver bullet: by “constructing associations between words and people’s bodies” it helps to produce the “ideal candidate” rather than merely observing or identifying it.

Probably the most high-profile of these cases was Amazon’s recruitment algorithm, which was found to be biased against female candidates. The algorithm had been trained on CVs submitted to Amazon over the previous ten years, which came predominantly from male candidates. As a result, the algorithm learned to favour male candidates and to penalise CVs containing keywords associated with women: a 2018 Reuters investigation found that the tool was systematically downgrading applications containing words such as “women” or “female”. Put simply, it did not like women.

Amazon and others have of course learned from this experience, and five years on, purpose-built recruitment AI is vastly more sophisticated than its 2018 predecessors.

Another often-cited example is HireVue, a hiring platform whose AI-based assessment tools support organisations in identifying potential new hires. Critics have argued that these tools may be biased against candidates with a disability. The platform halted its use of facial analysis in 2021 following public outcry, as facial-recognition algorithms have been shown to be less accurate for people with darker skin tones. Research in 2018 by the not-for-profit The Algorithmic Justice League demonstrated that facial-analysis technology created by Microsoft and IBM, among others, performed better on paler-skinned subjects and on men, with darker-skinned women most often ‘misgendered’ by the programs. This contributed to those businesses curbing their facial-recognition technology programmes in 2020.

It is positive to see platforms like HireVue adapting, learning and developing their products in order to eliminate these potential biases. However, these recent examples show that product developers must remain hyper-aware and proactive in overcoming potential bias.

Proactive Strategies for Improvement

So how can we counter this and attempt to ensure that the data used to train AI does not perpetuate pre-existing bias in the recruitment process?

Being aware of these potential issues and taking proactive steps to counter them is key for any creator of AI recruitment tools. Some of the actions that can be taken include:

Build Diverse Testing Teams

Organisations can help to mitigate potential biases by using diverse teams to both build and test their AI models. A team that is diverse and representative of the wider population is better placed to spot and deal with biases during the development phase.

Ensure Diversity in the Data

Intentionally diversify the data used to train AI tools: audit the dataset to ensure it includes data from individuals of different genders, races, ethnicities, backgrounds, age profiles and so on. This requires a conscious effort to collect diverse data.
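One way to make such an audit concrete is a simple representation check over the training records. The sketch below is illustrative only, assuming hypothetical records that carry a self-reported demographic attribute; the field names and the 20% threshold are my own assumptions, not a standard.

```python
from collections import Counter

def audit_representation(records, attribute, threshold=0.05):
    """Report each group's share of the training data for one
    demographic attribute, and flag under-represented groups."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [g for g, share in shares.items() if share < threshold]
    return shares, flagged

# Hypothetical training records -- illustrative only
records = [
    {"gender": "female"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "non-binary"}, {"gender": "male"},
]
shares, flagged = audit_representation(records, "gender", threshold=0.2)
```

In practice such a check would be run per attribute (gender, ethnicity, age band and so on), and flagged groups would prompt further data collection rather than automatic fixes.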

Be Transparent

Employers and organisations should be completely transparent about their use of AI in the recruitment process, ideally making clear what data has been used to train their algorithms and how they have attempted to mitigate bias.

Continuous Auditing

Finally, audit and review the results of any AI recruitment tool consistently and regularly. Organisations should challenge themselves: has the use of this AI tool had an impact on the overall diversity of the workforce? Does it require retraining?
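One widely used yardstick for this kind of audit is comparing selection rates across demographic groups, for example against the “four-fifths” guideline, under which a ratio below 0.8 between the lowest and highest group selection rates is a common warning sign. The sketch below is a minimal illustration with made-up group labels and outcomes, not a legal compliance test.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs.
    Returns each group's selection rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest group selection rate.
    Values below 0.8 breach the common four-fifths guideline."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two groups -- illustrative only
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
```

Here group A is selected at 75% and group B at 25%, giving a ratio of about 0.33, well below 0.8; in a real audit such a result would trigger investigation and possible retraining of the tool.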

In Conclusion – Proactivity Is Key

So AI algorithms are only as unbiased as the data used to train them, and it is crucial to be aware of that fact from the outset. Work hard to feed these tools diverse data sets, and consistently review the results and the impact the tools are having on inclusive, diverse recruitment.

Going back to my original question, “Is it impossible to ensure that data used to train AI is 100% without bias?” By being aware and taking proactive steps, organisations can reduce the risk of bias in AI algorithms and work to improve the accuracy and fairness of the recruitment process. Monitor and evaluate the performance of AI algorithms, and make continuous adjustments as necessary to ensure they are not perpetuating bias. However, as with non-automated, more human recruitment processes, it is likely impossible to eliminate bias completely. We cannot assume that purpose-built AI recruitment tools will solve all our diversity challenges, and we must always engage with our eyes wide open.