AI hallucinations

AI hallucinations: The challenge is particularly concerning in fields such as healthcare, law and education

As we enter 2025, writes Shweta Singh, the ethics of artificial intelligence (AI) continue to present a complex web of challenges.

From bias and privacy concerns to the disruptive effects of AI ‘hallucinations’, these issues demand immediate attention to prevent harm and foster trust in AI technologies.

For a start, algorithms based on historical crime data have led to over-policing in communities of colour. By reinforcing pre-existing inequalities, these systems perpetuate systemic biases instead of addressing them.

Such outcomes raise ethical concerns about fairness and the long-term societal impact of these technologies. For instance, The New York Times has highlighted how facial recognition technologies disproportionately misidentify people of colour, resulting in wrongful arrests and false criminal accusations.

Take the case of Robert Williams, a black man in Detroit, who was mistakenly arrested due to a flawed facial recognition match, prompting calls for stricter oversight of AI applications in law enforcement.

And then there is the issue of hallucinations – that is, AI just making stuff up. This challenge is particularly concerning in fields requiring accuracy, such as healthcare and law enforcement.

Hopes were once riding high for IBM's multi-billion dollar Watson for Oncology, until the tool was found to suggest incorrect and potentially unsafe treatment recommendations, jeopardising patient safety.

Similarly, in a high-profile case, a lawyer, Steven Schwartz, submitted a legal brief generated by AI containing fabricated case law, which resulted in professional embarrassment and highlighted the risks of relying on generative AI without verification. In another case, ChatGPT invented a sexual harassment scandal and named a real law professor as the accused.

Such hallucinations not only undermine trust in AI but also pose risks to decision-making in critical sectors.

 

A case in point is the higher education sector. Universities are grappling with a surge in AI-assisted plagiarism. Students increasingly use generative AI to produce essays and research papers that mimic original work, undermining the purpose of education.

Generative AI systems often fabricate references, fail to attribute sources correctly, or cite non-existent journal articles. This compromises academic rigour, misleads both students and teachers, and risks eroding academic standards.

As students and researchers rely on AI for content creation, their engagement with the material diminishes, leading to a decline in critical thinking and originality. Hopefully, in 2025, more people in education and beyond will understand that AI tools should complement, not replace, traditional academic work.

By fostering an environment of responsible use, institutions can leverage AI to enhance learning outcomes while maintaining the values of academic integrity.  

How will AI help teachers and students?

Despite all the concerns about just how responsibly AI is being used in education, AI does, in fact, hold out the promise of transforming learning with fairness in mind, writes Isabel Fischer. It will offer huge opportunities for personalised learning to address the diverse needs of students.  

Innovations such as oral-based assessment tools that encourage more extensive spoken communication, chatbot reading lists, and programme-level assessments are set to transform the educational landscape. These advances, alongside diverse assessment methods such as knowledge graphs and concept maps, aim to cater to varied learning styles and improve overall educational outcomes.

Educators are expected to evolve into facilitators, guiding students and professionals in leveraging generative AI for critical thinking, creative problem-solving, and collaboration.

This shift moves beyond simple phrasing and paraphrasing, encouraging deeper engagement with content. Personalised feedback from generative AI tools will play a crucial role in boosting learners' confidence, particularly for those facing academic and professional hurdles.

Ethical considerations will remain paramount in 2025, especially concerning the fairness and transparency of AI evaluation systems. The European Union's AI Act, with its emphasis on accountability, risk categorisation, and regulation, will provide an important framework for shaping responsible AI implementation in education and ensuring compliance with ethical standards.

With all this in play, 2025 and beyond could be pivotal years for socially responsible AI-supported learning.

Further reading:

Predictions for 2025: a bleak outlook but with glimmers of light and hope

What will 2025 bring for energy and climate action?

How employee wellbeing will move up the agenda in 2025

Social economy to gain ground in 2025 as companies see the benefits

The EU will lead the way on sustainability reporting in 2025

 

Shweta Singh is Assistant Professor of Information Systems and Management at Warwick Business School, working on responsible AI.

Isabel Fischer is Associate Professor (Reader) of Responsible Digital Innovation and Education. She teaches Creating Digital Communities on the MSc Management of Information Systems & Digital Innovation, and is a winner of the prestigious National Teaching Fellowship awarded by Advance HE.

Discover more articles on Digital Innovation and Entrepreneurship by signing up to Core Insights.