Reflecting on Melanie Dusseau’s “Burn It Down: A License for AI Resistance”
I don’t completely disagree with Melanie Dusseau’s advice in her recent Inside Higher Ed column, “Burn It Down: A License for AI Resistance,” but there’s something about her over-the-top enthusiasm for “burning it down” that reminds me of this famous scene from Monty Python and the Holy Grail:
Dusseau, who is a creative writing professor at the University of Findlay, writes, “Until writing studies adopted generative artificial intelligence as sound pedagogy, I always felt at home among my fellow word nerds in rhet comp and literary studies.” A bit later, she continues:
If you are tired of the drumbeat of inevitability that insists English faculty adopt AI into our teaching practices, I am here to tell you that you are allowed to object. Using an understanding of human writing as a means to allow for-profit technology companies to dismantle the imaginative practice of human writing is abhorrent and unethical. Writing faculty have both the agency and the academic freedom to examine generative AI’s dishonest training origins and conclude: There is no path to ethically teach AI skills. Not only are we allowed to say no, we ought to think deeply about the why of that no.
Then she catalogs the many, many, mmmmmaaaaaannnnnnyyyyyy problems of AI, in prose I found engaging and intentionally funny in its alarmed tone. Dusseau writes:
Resistance is not anti-progress, and pedagogies that challenge the status quo are often the most experiential, progressive and diverse in a world of increasingly rote, Standard English, oat milk sameness. “Burn it down” is a call to action as much as it is a plea to have some fun. The robot revolution came so quickly on the heels of the pandemic that I think a lot of us forgot that teaching can be a profoundly joyful act.
AI resistance/refusal is catching on. The day after I read this article, I came across (via Facebook) a similar albeit much more academic call for resistance, “Refusing GenAI in Writing Studies: A Quickstart Guide” by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. While also calling for the field to “refuse” AI, it’s more of an academic manifesto with lots of citations; it’s much more nuanced and complicated, and also still a work in progress. For example, sections that are “coming soon” on their WordPress site include “What Is GenAI Refusal?” and “Practicing Refusal.” Perhaps I’ll write more specifically about it when it’s closer to finished, but this post isn’t about that.
Anyway, why does “burning it down” make me think of that Monty Python scene? The peasants bring one of the knights (ChatGPT just told me it was “Sir Bedevere the Wise”; let’s hope that’s right!) a witch (or AI) to be burned at the stake. They’re screaming and enraged, wanting to burn her immediately. The knight asks why they believe she’s a witch, and the evidence the peasants offer up is flimsy. So the wise knight walks them through the logic of how to test whether the woman truly is a witch: put her on the scales against a duck. If she weighs the same as a duck, she floats like wood; if she floats like wood, she must be made of wood; and since wood burns, she’s a witch and will burn. (Stick with me here: the punchline at the end has a twist.)
Like the mob, Dusseau has had enough of all these witches/AIs. She wants AI gone, and ideally to have never existed in the first place. But since that’s not possible, Dusseau is calling for like-minded writing teachers to refuse to engage. “To the silent, hopeless AI skeptics and Star Trek fans: resistance is not futile. We simply do not have to participate. Let Melville’s Bartleby provide the brat slogan of our license to resist: ‘I would prefer not to.’”
Now, maybe I’m just not hearing the “drumbeat of inevitability” for embracing AI to teach writing because I’m one of those people teaching a lot with/about AI this semester. But I have no idea what she’s talking about. If anything, it seems like most faculty around here have either ignored AI or banned it. Most of my students this semester have told me that AI has not come up as a topic in their other classes at all.
Before one burns it all down, it probably is a good idea to figure out what “it” is. Maybe Dusseau has already done that. Or maybe she is like a lot of my fellow academic AI resisters who don’t know much about AI and think that it is only for brute-force cheating. Maybe she knows better and is making an informed decision about resisting AI; it’s hard for me to tell.
I think her arguments for why we should refuse AI boil down to two. First, AI requires giant data centers and it takes A LOT of electricity and water to run those sites. That is completely true, and that doesn’t even get into the labor exploitation that went into training LLMs and monitoring content, the monopolistic and unregulated giant corporations that control all this, etc. All true, but look: these data centers also power EVERYTHING we do online and they have been an environmental problem for decades. So it’s not that she’s wrong, but I suspect that Dusseau isn’t thinking about refusing Facebook or Google searches anytime soon.
The second argument is that it ruins writing. Like almost every other person I’ve read making this argument, Dusseau references Ted Chiang’s New Yorker article “Why A.I. Isn’t Going to Make Art” in passing. What she doesn’t mention is that Chiang’s definition of art is really fiction writing, and that he sets the bar extremely high as to what counts as “art.” I prefer Matteo Wong’s response in The Atlantic, “Ted Chiang Is Wrong About AI Art,” but I’ll leave that debate for another time.
I think what Dusseau means by “writing” is writing that is personal, expressive, and “creative”: poetry and fiction and the like. Of course, AI is not the right tool for that. It’s not for writing a heartfelt fan letter from a child to an Olympic athlete, as Google found out with the backlash to its “Dear Sydney” ad campaign this summer. (If you don’t know what I’m talking about, check out the great post Annette Vee wrote about this called “Why ‘just right’ is wrong: What the Gemini ad ‘Dear Sydney’ says about writing that people choose to do.”) Everyone I follow/read on AI agrees with this.
But most writing tasks are not personal, expressive, or creative, and that is particularly true of many writing tasks we all have to do sometimes, often reluctantly, for school or for work: routine reports, memos, forms, the kind of things we call “paperwork.” A lot of students are required to write when they would “prefer not to,” which is why students sometimes use AI to cheat on writing assignments. So yes, like Dusseau, I don’t want AI writing my journal entries, personal emails, or any other writing I choose to do, and I don’t want students to cheat. But for some of these not-chosen writing tasks, there’s a role for AI that is perhaps useful and not cheating.
The other problem is that Dusseau’s own resistance is not going to stop any of her students or her colleagues from using AI. I don’t know whether AI-based writing tools will inevitably become part of writing pedagogy, but I do know that AI is going to continue to be a tool that people use. I have students in all of my classes (though more of them in my class of English majors) who are AI refusers, and I think that’s really important to note here: not all students are on board with this AI stuff either. But for my students who seem to know how to use AI effectively, as something akin to a brainstorming/proofreading/tutoring tool, it seems to work pretty well. And that’s the kind of AI use that is impossible for a teacher to detect.
So to me, the knight’s counsel is best. Before we burn this AI witch, why don’t we see what we’re up against? Why don’t we research this a bit more? Why don’t we not burn it down but instead (to very generally reference Cynthia Selfe’s Technology and Literacy in the 21st Century) pay attention to it and stay on alert?
But here’s the thing: in that Monty Python scene, it turns out she is a witch.
The punchline in that scene goes by so quickly that it took me a few viewings to catch it, but the woman does weigh the same as the duck, thus is made of wood, and thus is a witch. The peasants were right! SHE’S A WITCH!
Because, like I said at the beginning of this, I don’t completely disagree with Dusseau. I mean, I still don’t think “burn it down” is a good strategy; we gotta pay attention. But I’m also not saying that she’s wrong about her reasons for resisting AI.
My semester isn’t quite over, and I have to say I am not sure of the benefits of the up-front “here is how to use AI responsibly” approach I’ve taken this semester, particularly in freshman comp. But I do know an impassioned and spirited declaration to students about why they too should burn it all down is not going to work. If writing teachers don’t want their students to use AI in their courses, they cannot merely wish AI away. They need to learn enough to understand the basics of it, they need to explain to students why it’s a bad idea to use it (or figure out when using AI might be okay), and they’re going to have to change their writing assignments to make them more AI-proof.
Of course I agree with you and Dr. Vee.
And I still maintain that AI would have most of us settling for a word that’s “good enough” rather than just right. It’s easier to accept what’s offered than to do the work of finding a better word, sentence construction, etc. Close enough works in horseshoes, hand grenades, and atom bombs.
Here’s a short version of where I would use AI. Having worked in the same gifted-kid organization for 40 years, I’ve seen the terminology for “gifted” change every few years: to “talented,” to “advanced learner,” back to “gifted,” and so forth. The philosophy of the organization didn’t change. So my job was to recast philosophical statements with the latest terminology. Your basic copy & paste job with a human eye for when the new term conflicts with sentence structure (in which case you need to recast the sentence). In a mature organization, the sheer volume of text to be fixed is huge. AI can do this, BUT with a human overseer, because AI won’t be sensitive to shifts in meaning caused by terminology changes. E.g., “advanced learner” juxtaposed with skipping a grade doesn’t mean gifted; it means the student moved up a grade or two.
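Since the comment describes a mechanical swap-plus-flag workflow, here is a minimal sketch in Python of the deterministic version of that job, with no AI involved. The term map, the “risky context” pattern, and the sample sentences are all invented for illustration, not anything from the commenter’s actual organization; the point is just that the bulk replacement is automatable while flagged sentences still need the human overseer the comment calls for.

```python
import re

# Hypothetical term map (invented example): outdated terminology
# mapped to the organization's current preferred term.
TERM_MAP = {
    "talented": "gifted",
    "advanced learner": "gifted",
}

# Invented heuristic: context words that may signal a meaning shift
# (e.g., grade acceleration rather than giftedness), where a blind
# swap would be wrong and a human should recast the sentence.
REVIEW_CONTEXT = re.compile(r"\b(grade|skip|accelerat)\w*", re.IGNORECASE)

def recast(text):
    """Swap outdated terms for the current one, and collect sentences
    that need human review because the surrounding context looks risky."""
    flagged = []
    out = []
    # Naive sentence split; fine for a sketch, not for real prose.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        new = sentence
        for old, current in TERM_MAP.items():
            new = re.sub(rf"\b{re.escape(old)}\b", current, new,
                         flags=re.IGNORECASE)
        if new != sentence and REVIEW_CONTEXT.search(sentence):
            flagged.append(new)  # terminology changed near risky context
        out.append(new)
    return " ".join(out), flagged

updated, needs_review = recast(
    "Our advanced learner programs serve every campus. "
    "An advanced learner may skip a grade after committee review."
)
print(updated)
print("Needs human review:", needs_review)
```

Note that the second sample sentence gets flagged, and its replacement also produces the grammatical breakage (“An gifted”) that is exactly the kind of term-versus-sentence-structure conflict the comment says needs a human eye.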