The International Centre for Academic Integrity has always defined academic integrity with five fundamental values – honesty, trust, fairness, respect and responsibility. Is AI bringing those values into disrepute?
AI tools like ChatGPT keep raising the bar
ChatGPT launched in November 2022. It was free to use and offered a wide range of features, from answering simple questions to writing emails, essays, and code. It quickly became the fastest-growing app of all time, reaching 100 million active users just two months after launch.
Its latest version, GPT-4, requires users to pay a £16-a-month subscription (although the free version still exists) and has even more capabilities. The word limit is far greater, and it can now read and interpret images.
Many ChatGPT users are students using the chatbot to complete or assist with their university work. But because the app produces detailed, unique, and human-like answers, it's hard for lecturers to differentiate between student and AI work. There's a concern that students are relying too heavily on these tools and aren't gaining the knowledge and skills needed to complete their degrees.
Some Russell Group universities fear plagiarism
AI tools are dividing the higher education sector. Institutions are either discouraging their use, banning them entirely or exploring ways to embrace them. Russell Group universities including Oxford, Cambridge, Manchester, Bristol and Edinburgh have opted to put a ban in place, fearing that using AI encourages plagiarism.
A University of Cambridge spokesperson said, “Students must be the authors of their work. Content produced by AI platforms, such as ChatGPT, does not represent the student’s own original work so would be considered a form of academic misconduct to be dealt with under the university’s disciplinary procedures.”
From their point of view, AI tools damage the integrity of the assessment process. But with so much coursework done at home and more assessments taking place online these days, how easy will it be to police their use?
Others think embracing AI is vital for employability
Other universities believe a ban is too radical, and instead want to focus on teaching students how to use the tools effectively. After all, removing these tools completely could harm students' employability: there's a high chance the profession they move into will be using AI, and if universities ban the tools, students could be left at a disadvantage.
Michael Draper, a professor in legal education at the University of Swansea and an expert on academic integrity, had this to say, “if we’re preparing students for the outside world of work and if in the workplace this sort of technology is given to us, then I think we need to embrace it rather than ban it”.
Students need to be guided on AI best practice
University College London has taken a similar stance and even updated their website with guidelines to help students use AI chatbots “effectively, ethically and transparently”.
For example, the guidelines show students how to cite AI responses they've used within their work. UCL believes this allows students to embrace the technology while keeping the assessment process honest and fair.
With AI technology here to stay, maybe this is the more practical route to go down – that way universities can both embrace AI tools and protect academic values. Will more universities choose the same approach? Only time will tell.
Sources
https://academicintegrity.org/images/pdfs/20019_ICAI-Fundamental-Values_R12.pdf
https://mashable.com/article/chatgpt-fastest-growing
https://www.sciencefocus.com/future-technology/gpt-3/
https://www.timeshighereducation.com/campus/chatgpt-and-future-university-assessment
Disclaimer
This article and related document links do not purport to be comprehensive or to give legal advice. While every effort has been made to ensure accuracy, Risk Management Partners cannot be held liable for any errors, omissions or inaccuracies contained within the article and related document links.
Readers should not act upon (or refrain from acting upon) information in this article and related document links without first taking further specialist or professional advice.
Disclosure
Risk Management Partners Limited is authorised and regulated by the Financial Conduct Authority. Registered office: The Walbrook Building, 25 Walbrook, London EC4N 8AW. Registered in England and Wales. Company no. 2989025