There's no easy fix for AI at universities - but banning it won't work.

Bans won't be effective, nor will any effort to make prompts 'AI-proof'. To save written work as the foundation of humanities education - as I believe we should - the hard work of structural change has to start now.

Many senior philosophers I have spoken to believe using AI for writing assignments is academic misconduct, akin to plagiarism, and ought to be treated as such. I am inclined, instinctively, to agree. But universities are deceiving themselves if they act as though the use of Large Language Models (LLMs) can reliably be detected. Nor are there any quick fixes to make assignments AI-proof. Solutions like focussing on recent work or events the models are not trained on can only work in the short term, if at all, as the technology advances. If we are to maintain written assignments as the core of humanities education - which I believe we should, on the grounds that they best encourage the creative, systematic, generative thinking and writing required of scholars - rather than shifting to in-person written or oral exams, then nothing short of a broad cultural shift in our institutions will suffice.

The Impossibility of Effectively Banning AI 

Many academics think they can detect the use of AI in assignments - numerous examples are given, for instance, in a recent discussion on the Daily Nous.

I would hazard, however, that this only captures incompetent usage. As a recent article in the Chronicle of Higher Education points out (and anyone who has played around with the capacities of GPT will know), it is wholly possible to use AI for the bulk of the work while producing an assignment that does not display the hallmarks of an AI-authored essay (factual inaccuracies, repetition of the prompt, fabricated citations, etc.). It is relatively simple to prompt an AI to produce a high-level plan (the first, and hardest, stage of an open-ended assignment), followed by section-by-section sub-planning, while performing the remaining work manually. No marker, nor any software, would detect this - yet it represents a substantial advantage, and arguably constitutes falsifying authorship.

Additionally, GPT-4 has already proved itself significantly more effective than GPT-3 - it is, for instance, able to fluidly integrate quotations where the previous iteration struggled. Any approach based on detecting and punishing AI use is therefore time-sensitive - it is very likely that technological advancement will rapidly outpace the capacity to detect it.

To argue that AI should never be used in academic work - that it is plagiarism (albeit of a new kind, since it steals the work of no specific author), deception, laziness - is a completely legitimate position. It is also entirely beside the point. Whether we like it or not, students are already using Large Language Models to complete assignments, and there is no feasible way to directly prohibit them from doing so.

The Need to Keep Essays 

As a digression, I want to very briefly offer a defence of the essay, and a case against replacing it with timed, in-person examination (at least in Philosophy, my discipline).

The core skills a humanities course should encourage are surely some combination of critical thinking, creativity, originality, generating ideas and written expression (since these are the skills needed by scholars in those fields). Long-form writing gives students the time and opportunity to engage, in depth, with the nuances and significance of a particular area of thought. Work over an extended period demands that students be able to form, structure, justify and express a sophisticated line of argument, and gives them the space for the depth of research and thought that this requires. The creativity of this kind of work - the novel consideration and combination of ideas - is invaluable and a source of satisfaction for those who take it seriously. 

In-person, written examinations do not facilitate the same degree of depth and creativity that the essay demands. The emphasis is instead on memorisation and regurgitation - students are incentivised to engage only superficially. In most cases, these exams are a senseless waste of time with little relevance to the actual work of scholars. Additionally, it would be false to assume that exams themselves are free from cheating of other kinds.

Some have advocated a newfound emphasis on oral examinations, of the kind practised in the Italian higher education system. These avoid some of the pitfalls of in-person written examination - the interpersonal exchange can allow an examiner to probe the depths of a student's understanding more than a written exam can. Yet this too fails to prepare students for the written work that constitutes the bulk of academic output - the best philosophers, quite apparently, are not always the best speakers. Oral exams also lack the standardised anonymity that makes written assessment impartial.

Any move away from long-form academic writing as the main basis of assessing students would come at great detriment to student learning, and subsequently to the humanities disciplines themselves.

Easy Fixes Won't Cut It

With that said, most mooted adjustments to essay-writing prompts are no solution to the problem of AI cheating.

Focussing on hyper-recent work exploits one existing limitation of most LLMs - their historic training sets. GPT-4 is only trained on data prior to 2022 - setting questions this year that require engagement with current scholarship would be a barrier to cheating with it. Yet this solution is held precariously hostage to technological advancement - already, some LLMs (such as Bing's chat AI) can actively scour the web for recent information. At the current rate of progress, it will surely be redundant soon.

Some institutions, such as RV University in Bengaluru, India, have proposed surprise checks on cheating suspects - asking them to explain or independently re-do suspicious work. Yet this still relies upon the capacity to accurately detect AI use. As shown above, that isn't feasible, especially in the long term.

There has been some discussion of 'multimodal' assessment - requiring students to submit several forms of work at once (for instance, accompanying an essay with an oral presentation or video). Yet all of the work involving academic knowledge and skill beyond presentation could still be performed using an LLM (and indeed, some of these side-tasks may soon be performed by AI as well - GPT-4 is already capable of, for instance, image analysis).

The Potential for a Cultural Shift - Accommodation, Engagement, and Verifying Authenticity 

The root cause of the problem, properly considered, is that students want to get around the demands of academic work in the first place. Only a serious cultural shift in universities - engaging with students in terms they find compelling, emphasising process as well as outcome, and even (though today utopian) de-emphasising assessment itself as the focal point of education - would constitute a real, lasting solution. Practical steps might include staff working with students during the writing process. Though labour-intensive, this would be enormously beneficial, refining the skills actually necessary for academic work and alleviating the compulsion to fall back on work-arounds. Seizing students' attention with tasks they perceive as valuable, giving them space for creative and fulfilling expression, and convincing them of the benefits of working through the writing process themselves would not just reduce the propensity to cheat, but foster a healthier academic culture.

Another measure would be the integration of something akin to 'artist statements' alongside assessed work, as James M. Lang has proposed. This would be a metacognitive exercise in which students step back and give an account of the writing process itself - their methods, intentions, and an evaluation of the outcome (this could even be submitted in stages throughout the assessment period to document progress). Aside from requiring students to give evidence of independent work, this form of reflection would have its own benefits, encouraging a kind of productive self-criticism that students can build upon. It could take the form of a written exercise, or a randomly administered oral interview (as Imperial College London has adopted).

It is absolutely paramount that expectations about AI use are clearly communicated to students from the start, especially if it is to be a disciplinary matter, with understanding of the regulations a core part of matriculation. It is in this communication, however, that universities may want to establish a degree of realistic accommodation for AI use. There is an emerging line of argument within some parts of the sector - recently taken up by University College London - that AI is part of a new reality in academia and society as a whole, a reality which students should be prepared for. UCL recently issued a statement which boldly declares to students that "rather than seek to prohibit your use of [LLMs], we will support you in using them effectively, ethically and transparently." This policy includes properly educating students on AI - its functions, its potential and its limitations - and provides guidelines for its permissible use in writing processes (such as citing each instance of prompting, as one would cite an academic source).

As far as I can see, then, there seem to be two alternatives. If AI is part of the new paradigm in academia, it should be sensibly accommodated. If it is, as many hold, a threat to academic integrity and effective education, then I believe only a far-sighted and far-reaching structural change can offer a real response. 


Peter Vojnits · 11 months ago

Hi Kassiopeia,

This is a fascinating piece, thank you for sharing. I'm aware that AI is presenting some issues in academia lately. However, it also offers an opportunity. The quote from UCL seems to be an apt demonstration of this. AI should not be rejected outright, but should be used in a collaborative and meaningful way to enhance the world of academia.  

Kassiopeia · 11 months ago

Hi Peter, thanks for reading! This sprang out of some research the St Andrews philosophy department has asked me to do on AI cheating, but I do think there could be a positive role (especially in overcoming writer's block, etc.). I wonder how you think it could be used, as you say, in a collaborative and meaningful way?

Peter Vojnits · 11 months ago

I think we should harness AI's ability to generate text and ideas at a pace not seen before. Now that such technology is widely available to the public, I believe there is a real opportunity for academics/researchers/students to collaborate using AI to determine how ideas and theories fit together. Of course, ways to mitigate the risk of plagiarism should be developed as well, but AI should not be dismissed in its entirety. 

Hannah Viljoen · 9 months ago

Hi Kassiopeia, 

Absolutely love this piece and your ideas on the future of the AI problem in higher education. I have also been thinking about this issue, and I particularly liked your suggestion of involving professors and tutors in the writing process itself - I think a practical mechanism for this would be one extra tutorial per semester focused specifically on the assessment pieces, where tutors and tutees could reflect together and have the opportunity to gain feedback. This would also provide the necessary oversight you mentioned. Moreover, I totally agree that blanket banning AI is not the solution - particularly since I find it to be hugely beneficial in the learning process. I often use ChatGPT to get summaries of cases or papers so that I can ascertain whether they are worth reading for the current topic I am looking at. Excellent article and thanks for sharing!

Kassiopeia · 9 months ago

Thank you so much!

I've been using ChatGPT to create reading lists on topics I'm unfamiliar with - the only downside being that it occasionally completely invents a text which doesn't exist, which I spend an hour combing through the library trying to find. 

This work came out of an internship the philosophy department asked me to do on AI adjustments, and I've been talking to my uni's Associate Dean for Education since - the university actually seems quite receptive to change and the opportunities here, which I find heartening. I'd be interested to hear what Trinity is doing about it?

Hannah Viljoen · 9 months ago

I've also noticed that ChatGPT can be inaccurate at times - often when I use it for case summaries and then go back and read the actual case, it misses important points in the judgement, so it's not foolproof. Interesting that St Andrews is receptive to change - what kind of things are they mostly looking at doing? Trinity scared all of us with the oral exams rumour, but the majority of my assessments are closed-book, handwritten exams, so it hasn't affected me that much.