AI algorithms recently used by Amazon led to unconscious recruitment bias because they were trained on historical data sets from a company whose employees were mainly male
Artificial intelligence is becoming a key part of the recruitment process for many businesses, but it poses a risk of bias. David Ingham, digital partner in media and entertainment at global technology consultancy Cognizant, discusses how to prevent machine learning algorithms from producing results tainted by prejudice.
A series of events during 2018 exposed how online systems can both intentionally and unintentionally lead to bias.
Some companies were criticised for purposefully tailoring the audiences of their online recruitment adverts.
Accidental bias was also identified in cases where algorithms manage digital ads for STEM roles.
With artificial intelligence lauded as one of the key trends in recruitment this year – and the likes of Goldman Sachs, Unilever and Hilton Worldwide already investigating its potential – it is critical that the risk of bias is recognised and addressed if organisations want to attract the right talent and avoid falling foul of anti-discrimination laws.
Amazon recruitment AI highlighted bias in data training algorithms
One notable example was Amazon.
The company developed an AI-based tool to sort through resumes and CVs in the US to identify the best candidates to progress to interview.
Amazon’s algorithm compared applicants to their current employee base, to find the candidates that best fit the profile of a successful employee. So far, so good.
However, as the existing employee population was primarily men, the AI system adopted some unconscious bias from the data set.
For example, some all-female universities were wrongly ranked as lesser institutions simply because fewer existing Amazon employees had attended them.
This example is just one way in which AI and machine learning systems can become partisan, due to underlying data sets that favour certain results.
Even when Amazon tried to correct the bias in the algorithm, it became apparent that the data set itself was a result of previous decisions that were potentially influenced by unconscious bias.
Therefore, the data did not reflect the levels of diversity that Amazon wanted, and the algorithm continued to reinforce these unconscious biases when selecting "appropriate" candidates.
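The mechanism described above can be made concrete with a deliberately simplified sketch (this is an illustration of the general technique, not Amazon's actual system, and all names in it are hypothetical): a screener that scores candidates by how often each CV feature appears among past hires will automatically penalise features correlated with an under-represented group.

```python
# Simplified sketch of similarity-to-past-hires screening (hypothetical data,
# not Amazon's actual system).
from collections import Counter

# Historical hires skew towards one university.
past_hires = [
    {"university_a", "python"},
    {"university_a", "java"},
    {"university_a", "python"},
    {"womens_college", "python"},  # only one past hire from this college
]

def feature_weights(hires):
    """Weight each CV feature by its frequency among past hires."""
    counts = Counter(f for cv in hires for f in cv)
    return {f: c / len(hires) for f, c in counts.items()}

def score(cv, weights):
    """Score a candidate as the sum of their feature weights."""
    return sum(weights.get(f, 0.0) for f in cv)

weights = feature_weights(past_hires)

# Two equally skilled candidates who differ only in alma mater:
candidate_majority = {"university_a", "python"}
candidate_minority = {"womens_college", "python"}
print(score(candidate_majority, weights) > score(candidate_minority, weights))  # True
```

No one programmed the screener to prefer one university; the preference falls out of the skewed historical data alone, which is exactly why correcting the algorithm without correcting the data fails.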
Partiality can also be introduced when the data that trains algorithms is constantly changing.
As the employee base within most organisations shifts constantly, with people joining or leaving, an algorithm managing recruitment can’t be consistent in its decisions.
For example, a system could review a CV in January and see that there are few similarities to the employee base and reject the candidate.
However, if over the following six months a number of people join the organisation with similar experience, it could review the CV again and have a completely different outcome.
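The consistency problem above can be sketched in a few lines (the threshold and skill sets here are assumptions for illustration, not from the text): the same CV, scored against the employee base at two points in time, receives two different decisions purely because new joiners changed the reference data.

```python
# Sketch of decision drift: identical CV, different outcomes as staff change.
def similarity(cv, employees):
    """Fraction of current employees sharing at least one skill with the CV."""
    matches = sum(1 for e in employees if cv & e)
    return matches / len(employees)

THRESHOLD = 0.3  # hypothetical cut-off for progressing a candidate

cv = {"golang", "kubernetes"}

january_staff = [{"java"}, {"java", "sql"}, {"python"}]
print(similarity(cv, january_staff) >= THRESHOLD)  # False: rejected in January

# Six months later, engineers with similar experience have joined.
july_staff = january_staff + [{"golang"}, {"kubernetes", "golang"}]
print(similarity(cv, july_staff) >= THRESHOLD)     # True: same CV now accepted
```

Nothing about the candidate changed between the two runs; only the reference population did, which is why purely similarity-driven screening cannot give consistent answers over time.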
The challenge is that an algorithm does not necessarily recognise positive qualities that would be spotted by a human – rather, it bases its decision on past data sets.
Interestingly, algorithms can create bias even when working from clean data sets.
This is because, despite many AI systems being very advanced, they are still only able to look at trends across large quantities of past data.
This means they are often unable to recognise rare skills and appreciate them, purely because they are few and far between.
Addressing recruitment bias in intelligent systems
However, there is no easy route to reducing bias in automated systems.
Tweaking an algorithm to compensate for potentially prejudiced datasets could backfire.
For instance, if an algorithm tries to address a lack of native Spanish speakers, once that has been overcome within the data set, it could continue to overcompensate for a trend that no longer exists.
Instead, the first step for any organisation is to be aware that this can happen, and then to audit its automation and AI procedures for bias.
As simple as that seems, the Cognizant report Making AI Responsible and Effective found that just half of the companies surveyed have policies and procedures in place to identify and address ethical considerations – either in the initial design of AI applications or in their behaviour after the system is launched.
It is important to note that addressing the ethical considerations of intelligent systems should not just be viewed as a “nice-to-have” by organisations.
Companies that do not consider the ethical ramifications of their automation projects are taking a significant legal risk.
It is for this reason that Cognizant’s Centre for the Future of Work predicted that an important new role would shortly appear within compliance teams – that of the algorithm bias auditor.
These auditors would have responsibility for establishing guidelines and compliance methodologies that employees across the organisation – whether in development, business management or compliance – can easily understand and follow.
Ethical applications of AI algorithms
At this stage in the AI maturity curve, the best option for organisations is to apply AI only to systems where the outcome of the algorithm could not result in a person being unfairly disadvantaged.
For example, a potentially biased AI system should not be left to manage recruitment decisions where an error could negatively affect a candidate's prospects.
However, as organisations invest in remediating this issue – through both advanced technological development and introducing new compliance processes – we may find that intelligent systems can address the unconscious bias influencing many of the human decisions that these systems are trying to manage.
When supported by an algorithm bias auditor, organisations will become more effective at capturing issues that currently go unaddressed.
Ultimately, both business leaders and designers of these automated systems must pay greater attention to the potential impact of AI bias.
It is the only way to ensure that we are not burnt by the partiality of previous generations, iterations or machines.