Back in mid-February, Anna Mills wrote a Substack post called “Why I’m using AI detection after all, alongside many other strategies.” Mills, who teaches at Cañada College in Silicon Valley, has written a lot about teaching and AI, and she was a member of the MLA-CCCC Joint Task Force on Writing and AI. That group recommended that teachers use AI detection tools with extreme caution, or not at all.
What changed her mind? Well, it sounds like she had had enough:
I argued against use of AI detection in college classrooms for two years, but my perspective has shifted. I ran into the limits of my current approaches last semester, when a first-year writing student persisted in submitting work that was clearly not his own, presenting document history that showed him typing the work (maybe he typed it and maybe he used an autotyper). He only admitted to the AI use and apologized for wasting my time when he realized that I was not going to give him credit and that if he initiated an appeals process, the college would run his writing through detection software.
I haven’t had this kind of encounter with a student over AI cheating, but it’s not hard for me to imagine this scenario. It might be the last straw for me too. And, as I suspect is the case with Mills, I’m getting sick of seeing this kind of dumb AI cheating.
Last November, I wrote here about a “teachable moment” I had when an unusually high number of freshman comp students dumbly cheated with AI. The short version: for the first short assignment (2 or 3 pages), students are supposed to explain why they are interested in the topic they’ve selected for their research and to explain what prewriting and brainstorming activities they did to come up with their working thesis. It’s not supposed to be about why they think their thesis is right; it’s supposed to be a reflection on the process they used to come up with a thesis that they know will change with research. It’s a “pass/revise” assignment I’ve given for years, and I always have a few students who misunderstand and end up writing something kind of like a research paper with no research. I make them revise. But last fall, a lot more of my students did the assignment wrong because they blindly trusted what ChatGPT told them. I met with these students, reminded them what the assignment actually was, and pointed out that AI cannot write an essay that explains what they think.
I’m teaching another couple of sections of freshman composition this semester, and students just finished that first assignment. I warned them to avoid the mistakes students made with AI last semester, and I repeated more often that the assignment is about their process and is not a research paper. The result? Well, I had fewer students trying to pass off something written by AI, but I still had a few.
My approach to dealing with AI cheating is the same as it has been ever since ChatGPT appeared: I focus on teaching writing as a process, and I require students to use Google Docs so I can use the version history to see how they put together their essays. I still don’t want to use Turnitin, and to be fair, Mills has not completely gone all-in with AI detection. Far from it. She sees Turnitin as an additional tool to use along with solid process writing pedagogy. Mills also shares some interesting resources about research into AI detection software and the difficulty of accurately spotting AI writing. Her post is totally worth checking out.
I do disagree with her about how difficult it is to spot AI writing. Sure, it’s hard to figure out if a chunk of writing came from a human or an AI if there’s no context. But in writing classes like freshman composition, I see A LOT of my students’ writing (not just in final drafts), and because these are classes of 25 or so students, I get to know them as writers and people fairly well. So when a struggling student suddenly produces a piece of writing that is perfect grammatically and that sounds like a robot, I get suspicious and I meet with the student. So far, they have all confessed, more or less, and I’ve given them a second chance. In the fall, I had a student who cheated a second time; I failed them on the spot. If I had a student who persisted like the one Mills describes, I’m not quite sure what I would do.
But like I said, I too am starting to get annoyed that students keep using AI like this.
When ChatGPT first became a thing in late 2022 and everyone was freaked out about students cheating, I wrote about (and gave a couple of talks about) how plagiarism has been a problem in writing classes literally forever. The vast majority of examples of plagiarism I see are still a result of students not knowing how to cite sources (or just being too lazy to do it), and it’s clear that most students don’t want to cheat and that they see the point of doing the work themselves so they might actually learn something.
But it is different now. Before ChatGPT, I had to deal with a blatant and intentional case of plagiarism once every couple of years. For the last year or so, I’ve had to deal with examples of blatant AI plagiarism in pretty much every section of first-year writing I teach. It’s frustrating, especially since I like to think that one of the benefits of teaching students how to use AI is discouraging them from cheating with it.