Author: Julia Staffel, PhD
At the end of November 2022, OpenAI launched ChatGPT, a text-generating AI with the unprecedented ability to produce almost any type of text at the click of a button. Students immediately realized its potential for their assignments, easily generating essays and code while evading existing plagiarism detectors. By the end of the fall 2022 semester, professors were increasingly seeing submissions that looked strange: eerily perfect in grammar and spelling, written in a confident tone, and often at least broadly competent, yet completely lacking in depth and originality. While such AI-written assignments might not earn straight As, they are certainly good enough to earn passing grades in many college classes. Anecdotes started circulating about students who didn’t write a single word of their own in an entire semester and still passed all their courses.
Since then, an arms race has developed between AI users and those who try to detect and discourage its use. Turnitin, the popular plagiarism detection software, has been newly equipped with an AI detection function, but even when it correctly flags AI-generated content (which it does not do very reliably), it is difficult to prove a student’s illicit use of AI to the standards required by most university honor councils. This is because, unlike in other cases of plagiarism, where we can point to the source that has been copied, AI detectors can deliver no more than a statistical probability that the content was not written by a human, which leaves too much room for students to be wrongly accused. Further, new AI tools, so-called paraphrasers, are being developed that can alter AI-generated text to fool AI detectors into classifying it as human-written.
So, what do we do? Some educators have advocated what has been called “the nuclear option”: doing absolutely nothing differently and teaching writing like it’s 2021. They argue that it’s not their job to play police, and that students who value learning and expressing their ideas will be motivated to write on their own. I think they are wrong: we morally owe it to our honest students not to make them feel like fools for doing the hard work of writing their own essays while others use AI to get better grades without lifting a finger (or a pen). While it has never been possible to make cheating impossible (and trying to do so is usually bad pedagogy), we can design assignments that discourage the use of AI and help students develop as writers. This is not to say that we should ban AI completely from our classrooms, but students need basic writing skills to be able to use these tools responsibly and effectively.
Hence, in this webinar, I hope to leave you with new knowledge and practical advice in three different areas:
AI generators, detectors, and paraphrasers
In the first part of the webinar, I will explain and demonstrate the main AI tools relevant to teaching in higher education, with a focus on text-generating AIs. We will discuss how they generate content, why this can lead to errors and “hallucinations”, and how students use them to produce written work. We will then explore how AI detectors and paraphrasers function: the former is software intended to recognize whether content was written by a human or an AI, the latter software intended to confuse those detectors. We will discuss the limits of AI detectors and what this means for our academic integrity policies.
Assignments that discourage AI use
In the second part of the webinar, I will offer some practical suggestions for designing assignments that discourage the use of AI. For example, I will suggest that we revive a medieval assignment, the “reportatio”, which is essentially a narrative essay that transcribes a lesson. I will also explain how we can use the version history in Google Docs to track how students work on their essays and to check whether their writing followed a natural process.
Assignments that teach responsible AI use and technological literacy
In the third part of the webinar, I will discuss how we can teach students the responsible use of AI. This means having honest conversations with them about the contexts in which it is acceptable to use AI-generated text, and the contexts in which AI assistance is illegitimate. Further, we need to make sure they understand the limits of AI, so that they don’t blindly trust its outputs, like the unfortunate lawyer who was recently in the news: he found out the hard way that just because ChatGPT claims a court case exists doesn’t mean that it really does. Hence, we need to teach students how to fact-check AI-generated content, and how to use their own knowledge to decide when AI output is trustworthy and when it isn’t.