Slave to the algorithm: AI bias could exacerbate divisions in the workplace and society
Apple CEO Tim Cook has a simple message for those developing or utilising AI in the workplace
“What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity,” he told MIT Technology Review.
But will AI benefit all of humanity equally? Can we use it to foster inclusivity? Or will it exacerbate the exclusion and inequality that are already widespread in the workplace and society?
A study by academics at Warwick Business School (WBS) and several leading US business schools, including Harvard, offers a tantalising glimpse of AI’s potential.
They gave consultants a creative task to develop a new footwear product for a fictional fashion company. Some were allowed to use AI tools, others were not.
Those who used AI completed more tasks more quickly and to a higher quality. This suggests AI can be a powerful tool when used in the correct way. But there was an even more striking result.
Professor Hila Lifshitz, Head of the AI Innovation Network at WBS and co-author of the study, said: “All those who used AI benefited from doing so, but those who achieved the lowest scores in the preliminary tests benefited the most.
“It had a levelling effect, reducing the gap between the strongest and the weakest performers.”
Will AI benefit weaker workers?
These findings should not be taken out of context. The study was not designed to investigate the potential of AI as a tool to improve equality and inclusion within the workplace.
Nonetheless, the suggestion that AI could offer the greatest benefit to weaker team members – who may not have enjoyed the same educational advantages as their colleagues, or who may face other disadvantages or disabilities – could have fascinating applications.
For example, researchers at WBS have developed an AI tool to help students improve their writing.
The AI Essay-Analyst offers students formative feedback ahead of their essay deadline, giving them time to make thoughtful revisions before submitting their work.
This can help them to overcome difficulties in explaining and connecting ideas, sentence structure, readability and referencing.
Unlike generative AI, it does not generate writing but nudges users to improve their own writing.
It also does not collect any user data and was developed with AI ethics principles in mind.
Dr Isabel Fischer, Associate Professor (Reader) of Responsible Digital Innovation and Education and project leader, said: “Our tool was initially developed to help to level the playing field between students from disadvantaged backgrounds and their peers from more privileged backgrounds, who tend to have a better support network at home and the confidence to seek personalised feedback.”
The UK's Equality Act 2010 identifies nine protected characteristics. Of the groups those characteristics protect, the Alan Turing Institute believes people with disabilities could benefit the most from AI.
It could provide audio description for the visually impaired, captioning for the deaf, speech synthesis for people who cannot speak, and smart monitoring for care needs.
The risks created by biased AI
However, the Institute warns that algorithms need to work fairly for all disabilities in a vast range of settings if they are truly to foster equality and widespread inclusion.
The challenge of creating ‘fair’ algorithms is central to the developing relationship between AI and equality, diversity and inclusion (EDI).
After all, an unfair or biased algorithm has the potential to create deeper inequality and exclusion.
For example, Amazon had to scrap its AI recruitment tool in 2018 because it penalised CVs that referred to women and downgraded graduates of two all-women colleges.
The algorithm had been trained on 10 years of hiring data in which most managerial roles at Amazon went to men, so it learned to favour CVs from male applicants.
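To make the mechanism concrete, the short Python sketch below is a purely hypothetical illustration, not Amazon's system, and the data is synthetic: a screening model trained on historically skewed hiring labels learns to score otherwise identical CVs differently by gender, and a simple audit of predicted selection rates surfaces the gap.

```python
# Hypothetical sketch only: synthetic data, not Amazon's tool.
# A model trained on historically skewed hiring labels reproduces that skew;
# an audit of predicted selection rates for equally skilled candidates shows it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)      # 0 = male, 1 = female (synthetic)
skill = rng.normal(size=n)          # the signal hiring should depend on

# Historical labels: driven by skill, but with a penalty applied to one
# group -- mimicking a decade of skewed hiring decisions.
hired = (skill - 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0

# The model sees gender (or a proxy for it) alongside skill.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Audit: average predicted hire probability for equally skilled candidates.
for g, label in [(0, "male"), (1, "female")]:
    X_equal = np.column_stack([np.zeros(1_000), np.full(1_000, g)])
    prob = model.predict_proba(X_equal)[:, 1].mean()
    print(f"Average predicted hire probability ({label}): {prob:.0%}")
```

The point is not the modelling detail but the pattern: if the historical record is biased, a model that learns to reproduce it will be biased too, however accurate it looks on its own training data.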
Biased decisions made by AI programs are not just bad for the individuals who are discriminated against. They can also cause companies to miss profitable opportunities.
Dr Anh Luong, Assistant Professor of Business Analytics at WBS, studies how companies can reduce these costly biases. Creating fairer AI tools is a good starting point, but is not enough by itself.
“Building a better AI system can be very costly and time consuming,” says Dr Luong. “Even then, its performance can fluctuate quite drastically and abruptly.”
Dr Luong and her co-authors from the City University of New York found that human workers can learn to compensate for bias in the algorithms they use when making decisions, such as whether or not to grant a loan application.
Managers could encourage this learning process by reviewing the decisions individual staff members made, rewarding good decisions and penalising bad ones.
However, these incentives only made staff more aware of the AI's inaccuracies; they did not reveal that its recommendations could be unfair.
It was only when staff were notified that a certain group had been treated unfairly in the past that they became more aware of the potential bias and the impact that the AI could have.
Dr Luong says: “For the best results, companies should combine building a better AI system with organisational practices such as strong EDI policies and training, and incentivising employees in the correct way.”
Could AI be bad for your health?
Algorithmic bias poses pressing challenges in other sectors too. Several of these are highlighted in an editorial in the journal Information and Organization co-authored by Eivor Oborn, Professor of Healthcare Management and Innovation at WBS.
For example, the tendency for patients from marginalised groups to be under-represented in health data has prompted concerns that AI tools could exacerbate healthcare inequalities.
One of the first real-world applications of AI in healthcare is the use of machine learning algorithms to diagnose diabetic retinopathy from photographs.
Researchers trained one such tool using only photographs of light-skinned patients. When that tool was used, dark-skinned patients had a 12 per cent lower chance of receiving an accurate diagnosis than light-skinned patients, due to physiological differences between the two groups.
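The kind of audit that surfaces such a gap is straightforward, as the illustrative Python sketch below shows. The numbers here are placeholders rather than real patient data: the idea is simply to report the model's accuracy separately for each skin-tone group rather than as a single headline figure.

```python
# Illustrative sketch with placeholder data (no real patients): a single
# overall accuracy figure can mask large differences between groups, so
# accuracy is reported per skin-tone group.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # true diagnoses
y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0, 0, 1])   # model predictions
group = np.array(["light"] * 5 + ["dark"] * 5)       # patient group labels

print(f"Overall accuracy: {accuracy_score(y_true, y_pred):.0%}")

for g in np.unique(group):
    mask = group == g
    acc = accuracy_score(y_true[mask], y_pred[mask])
    print(f"Accuracy for {g}-skinned patients: {acc:.0%}")
```

Had such a breakdown been part of routine evaluation, a tool trained only on light-skinned patients would have shown its limitations before being applied to a broader population.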
“AI tools can result in better outcomes for populations who match the dataset,” says Professor Oborn.
“However, issues arise when you try to use that tool for another population with different features.”
Those concerns are echoed by Dr Shweta Singh, Assistant Professor of Information Systems at WBS.
Dr Singh contributed to the Physiological Society report From 'Black Box' to Trusted Healthcare Tools, which was launched in the UK House of Lords in June 2023.
She points to the case of IBM's multi-billion-dollar AI tool Watson for Oncology, which was supposed to revolutionise healthcare.
However, doctors raised concerns that the program did not have enough data to make sound treatment recommendations for a diverse range of patients, and branded it unsafe.
“In one case, the AI suggested giving a 65-year-old patient with lung cancer and internal bleeding a drug called bevacizumab, which could have resulted in a fatal haemorrhage,” says Dr Singh.
Will AI exacerbate inequality?
“Part of the problem is that we cannot ask AI to explain how it arrived at a recommendation.
“That makes it very difficult to identify individual cases of bias that are created by the data the AI has been trained on. That issue isn’t limited to healthcare. It’s equally applicable when using AI in the context of employment, social housing and passport photographs.
“But when people think about the dangers posed by AI, they think about The Terminator and The Matrix. They don’t think about the more immediate risks created by bias and a lack of explainability.”
There are also growing concerns about AI and ‘data colonialism’ exacerbating the divide between the Global North and the Global South.
For example, a TIME investigation revealed that ‘ethical AI’ company Sama was paying Kenyan workers less than $2 per hour to feed an OpenAI tool with labelled examples of child sexual abuse, suicide, torture and bestiality so a chatbot could recognise harmful content.
Staff described their work as “a kind of mental torture” resulting in post-traumatic stress disorder, anxiety and depression.
Professor Oborn says: “Marginalised workers, especially in developing countries, are working on cutting-edge AI systems for the Global North to enable further technological development.
“Not only are they getting paid low wages and working under precarious circumstances, they are not reaping the benefits of the technology they are helping to create.
“As AI products progress further based on unrepresentative datasets and technological infrastructure that is not available to all, there is a risk that they will further marginalise the marginalised.
“We need a more meaningful way of understanding and managing these risks.”
To paraphrase Tim Cook, we have to make sure we are using AI in a way that benefits all of humanity, not in a way that benefits the few to the detriment of many.
Further reading:
Who will benefit from AI in the workplace and who will lose out
Working on the jagged frontier: How companies should use AI
Beyond the hype: What managers need to ask before adopting AI tools