The AI essay feedback focuses on issues such as readability, sentence structure and referencing
Warwick Business School has developed an AI feedback tool to help students, particularly those who may lack a good support network, improve their writing.
Unlike generative AI platforms currently in the spotlight, such as ChatGPT, Warwick Business School’s AI Essay-Analyst gives students machine-generated feedback on their essays ahead of their final deadlines, so they have time to make thoughtful revisions before submitting their work.
The feedback from the AI focuses on the strengths and weaknesses of students’ writing, covering issues such as readability, sentence structure and referencing, as well as how well their essays explain and link up ideas.
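The article does not describe the Essay-Analyst’s internal metrics. Purely as an illustration, one widely used measure of readability is the Flesch reading ease score; a minimal Python sketch, assuming a crude vowel-group heuristic for counting syllables, might look like this:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch reading ease: higher scores indicate easier-to-read text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease(
    "The essay is clear. Each claim is supported by evidence."), 1))
```

This is a sketch of one standard formula, not the tool’s actual method; a production system would typically use more robust tokenisation and several complementary metrics.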
“We started working on the tool after a survey found that a majority of our students saw their struggles with writing as the biggest barrier to academic success,” said project leader Dr Isabel Fischer, Associate Professor of Information Systems at WBS.
One of the main benefits of the new tool, she said, is that it helps “to level the playing field between students from disadvantaged backgrounds and their peers from more privileged backgrounds who tend to have a better support network at home, and also have the confidence to seek personalised face-to-face feedback”.
As such, the system is seen as a good fit with the University of Warwick’s 2030 strategy to ensure that all of its students, irrespective of background, have equal opportunities “to thrive and progress at Warwick”.
The AI Essay-Analyst’s emphasis on providing formative feedback rather than final grading distinguishes it from many of its rivals in the education technology market, which seek to take over the role of the teacher and mark final pieces of work.
“We do not claim that AI is the last word,” added Dr Fischer. “What we do provide is a tool that can motivate students to work on further improving their writing, either on their own, or in a peer group setting.
“It is especially useful for dissertations, because after having written, for example, 10,000 words on a topic – often during long nights – it is refreshing and invigorating to get a report with clear visualisations ‘on demand’ that reviews key features and gives suggestions for improvement.”
Pie charts, knowledge graphs and word clouds form part of an AI-generated feedback report of up to 15 pages that students receive, analysing everything from ‘argumentative zoning’ (how students organise their paragraphs thematically) to the use of transition phrases such as ‘similarly’.
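WBS has not published how these features are computed. As a hedged sketch only, counting transition phrases could be as simple as the following snippet (the phrase list is illustrative, not the tool’s actual vocabulary):

```python
import re
from collections import Counter

# Illustrative list only; the Essay-Analyst's real vocabulary is not public.
TRANSITIONS = ["similarly", "however", "moreover", "in contrast",
               "for example", "therefore", "furthermore"]

def transition_counts(essay: str) -> Counter:
    # Case-insensitive whole-phrase matching for each transition phrase.
    text = essay.lower()
    return Counter({p: len(re.findall(r"\b" + re.escape(p) + r"\b", text))
                    for p in TRANSITIONS})

print(transition_counts("Similarly, the data agree. However, costs rose."))
```

Analyses such as ‘argumentative zoning’ would require considerably more machinery, typically a trained classifier over paragraph-level features.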
Since the scheme was launched at the end of last year, students have been invited to give feedback on the process and on whether it fulfils their needs.
“I think the academic writing section is helpful,” said one first year undergraduate. “As an international student, I don’t know how my writing looks for native speakers, so this has helped me to understand where I can improve.”
An MSc student on one of the postgraduate business degrees said: “The knowledge graph allowed me to see the bigger picture at a time when I was too focused on the detail. It helped me to break down my essay and also showed the correct, as well as incorrect, relationships between key concepts.”
There are advantages for lecturers too. “Submissions tend to be of a better quality and lecturers can focus on providing substantive content feedback rather than critiquing referencing techniques and writing,” said Dr Fischer.
“This is our attempt to use new technologies for the benefit of our students, at a relatively low ongoing cost,” she added.
And it is still early days. Dr Fischer and her WBS colleagues Zhewei Zhang, Assistant Professor of Information Systems and Management, and Joe Nandhakumar, Professor of Information Systems, together with three PhD students, are continuing to develop the tool, integrating new code and emerging research trends.
One focus is how to detect the tell-tale signs of ChatGPT-generated content in pieces of academic writing.
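The article gives no detail on how that detection might work. One idea discussed publicly, offered here only as a speculative sketch, is to score how statistically predictable a text is (its perplexity) under an open language model such as GPT-2, using the Hugging Face transformers library:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Mean cross-entropy of the text under GPT-2, exponentiated.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Lower perplexity (more predictable text) is one weak signal of machine
# generation; on its own it is far from conclusive.
print(perplexity("The results suggest a clear link between the variables."))
```

Whether the WBS team uses anything like this is unknown; reliable detection of machine-generated text remains an open research problem.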