I meant to post about this quite a while ago, but I got busy getting ready for the beginning of the semester, and then of course I got busy actually teaching, and then whammo, we’re starting the fifth week of the semester already. Flyin’ time.
I’m teaching a “special topics” class this semester called “Writing, Rhetoric, and AI,” and it’s a 400/500 level class– that is, both undergraduate (22) and graduate (3) students. The actual course is in Canvas and thus behind a firewall, but there is a website at rwai.stevendkrause.com. The website name– Rhetoric, Writing, and AI– should be Writing, Rhetoric, and AI, but it’s too complicated to change now. It’s the type of typo/error I was always looking for when working on MOOC stuff. Massive Open Online Courses? Massive Online Open Courses? Anyway…
The website is mainly for one of the three major projects for the class, the AI News & Updates Collaborative Annotated Bibliography Website and Report. I landed on using WordPress and running it on the server space where I’ve hosted this blog forever because it seemed like the least bad option. I wanted a space/platform where students could submit entries and I could approve and organize them, and I wanted it to be public. I thought about Substack, but I think that would have required all of my students to sign up for Substack, and while I like it, I didn’t want to force it on anyone. (I suggested a link to something on Substack to a friend/colleague of mine in the field, and this person said they wouldn’t have anything to do with that platform because of the “Nazi problem.”)
So the body/”blog-like” part of that site is where students’ entries are published. I have the Syllabus and assignments on the page “Course Documents.” There are three major assignments for everyone and an additional assignment for the graduate students. The first essay project is a reflection essay based on a series of AI writing “experiments” we’re trying out. Then there’s that already mentioned annotated bibliography assignment, and finally a research essay project assignment.
We’re also doing plenty of reading and discussing of the readings, which I list here— at least so far. Because this is a new class about a crazily fast-moving target, I thought I’d plan the first part of the class first and then adjust for the second part of the semester, depending on what students are interested in researching/talking about. I know we’re going to talk about the environmental issues with AI, but beyond that, I’ll have to see what students think.
For the first half of the semester, we have mostly been reading/discussing AI and writing fairly directly– comp/rhet, pedagogy, creative writing, tech writing (and AI in the workplace), and so forth. It’s all been a mix of MSM pieces and websites, along with a handful of academic articles.
The graduate students also need to complete a book review project where they will each make a short video about a book they read about AI, and then also lead a discussion about their book.
I think things are going reasonably well, though one of the challenges of teaching online is sensing “the vibe,” if you will. Everyone seems friendly enough and engaged, so that is a good thing. I am surprised about two things with this group so far. First, most of these students “hate” AI– at least so far and before this course. That squares with my experiences in introducing AI things into a class called “Digital Writing” last fall, where almost all of them were “against” AI, especially when it came to writing. But I thought a course explicitly about AI’s connections to writing would attract more “pro” AI students than it (apparently) has.
Second, most of them have little experience with AI. Some have even said that this class was the first time they ever used AI at all. Now, maybe some of these students are kinda/sorta underestimating their experiences with AI; after all, there’s good evidence that most students who use AI don’t want to admit it, and also good evidence that the vast majority of students use AI at least occasionally.
Then again, these are almost all English major types. My first year writing classes are almost always composed entirely of students from majors not in the humanities. A lot of these students are not crazy about AI either, but that is definitely less true for the ones majoring in anything STEM or business-related.
Anyway, so far/so good– and it looks like I’m on the schedule to teach this course again next term. Probably. Stay tuned….
I think I first read someone bring up the “blue-book solution” for AI cheating shortly after ChatGPT exploded in fall 2022, but as I recall it, it was a joke. “Ha ha, now that AI can write as well as students, we’ll have to make them write by hand and while we’re watching. Ha ha!” My standard comment on social media to posts/articles about going back to handwritten and timed writing in the name of stopping cheating was “why not make them use a stone and chisel?” Ha ha.
Well, here we are three years in, and now blue-books really are a “solution” to AI. According to the Wall Street Journal, sales are up– way up. Earlier in August, Katie Day Good had an op-ed in The Chronicle of Higher Education titled “Bring Back the Blue-Book Exam,” and then at the end of August, Clay Shirky had an op-ed in The New York Times called “Students Hate Them. Universities Need Them. The Only Real Solution to the A.I. Cheating Crisis.” Both of these pieces make (mostly) serious arguments that the only way we can deal with/fight against AI cheating– a “crisis,” apparently– is to go back to the way we used to do these things. Way back.
Jeez.
Katie Day Good teaches at Calvin University in Grand Rapids and is “a media historian and cultural scholar of emerging technologies in education and everyday life.” A lot of her current work seems to be about “cultural movements to disconnect from digital technology and take a ‘digital sabbath,'” so maybe this return to handwriting is kind of in her research/scholarship lane.
But Shirky?!? Here’s a guy who became famous as a new media evangelist, who, in the book Here Comes Everybody, enthusiastically writes about crowd-sourcing everything and the joys of a world where content is both consumed and produced by users. His by-line describes his current job as “a vice provost at N.Y.U.” where he helps “faculty members and students adapt to digital tools.” This is the guy who is suggesting a return to blue-books and oral exams?!?
Jeez again.
Before I get more into the specifics of Good’s and Shirky’s essays, I want to bring up three “bigger picture” problems with blue-books and similar calls to return to the 19th century, problems that don’t come up in either one of these essays. First, blue-books, along with oral exams and other face-to-face assessments, obviously won’t work for an online class, especially ones that are asynchronous. And roughly speaking, a little over half of all college students take at least one class online, and about a quarter of all college students only take classes online. So what is an online teacher to do, collect blue-books by snail mail?
Second, timed writing like this is bad pedagogy, and people in writing studies have known this forever. No one is an especially good writer when they are being timed and watched, not to mention with no opportunity for things like feedback from peers or revision. I think these exercises are more like filling out a form than writing, and honestly, a better solution is some kind of short answer/multiple-choice exam.
Third, and my apologies for offending anyone who thinks that blue-books might be a good idea, this is just fucking lazy. Good and Shirky are suggesting it’s just too much work for a teacher to change the assignment in some way where it is either not effective to use AI or that leans into AI in specific and useful ways. Shirky dismisses doing this work thusly: “We cannot simply redesign our assignments to prevent lazy A.I. use. (We’ve tried.)” It’s just too hard to do anything differently! Instead, Good and Shirky are saying we should travel back in time and just keep pretending that there is no other possible way to change how we do things.
I saw a version of this same logic at the beginning of my career in the early 1990s when word processing and internet technologies were emerging. There were similar efforts then to restrict student access to things like spelling and grammar checkers, or banning students from using online sources. Teachers– especially English teachers, I think– do not like to change how they teach, even when what and how they teach is altered by technology. As a result, teachers often follow the lazier solution, which is to ban the technology. Thus blue-books.
Both Good and Shirky begin the same way all of these AI freak-out essays begin: we can’t trust students at all and every one of them cheats on everything, especially now that it is so easy with AI. Good writes that the new capabilities of AI made her rethink the “take-home essays” she used to assign in favor of blue book exams, presumably (in part) because of the possibility of cheating. Shirky begins with a vague story about a philosophy professor he met with who said he simply could not get “several” of his students to stop cheating with AI.
“Take-home essays” (I think she means what I’d call a take-home essay exam) have always required teachers to trust that their students won’t cheat. After all, when the student is working “at home,” there is nothing to stop that student from getting help from others and the internet, or even to get someone else to complete the assignment for them. I don’t know if Good was ever concerned about her students cheating on their take-homes before AI (she doesn’t seem to have been worried), but she started using blue books based merely on the possibility of cheating with AI.
As for Shirky’s philosophy professor colleague: I don’t know what “several” of his students cheating with AI actually means (are we talking half the class? three students? what?), but to me, the solution is obvious: fail them. I am going to assume (perhaps wrongly) that this hypothetical professor Shirky cites has a policy that does not allow students to use AI, and I’m also going to assume that the professor explained this policy and the consequences of using AI, which (again, just guessing) was failure. So, what exactly is the problem? If it’s that easy for the professor to catch students cheating, why not just enforce your policy and fail those students?
My own approach has been to be very up-front with students about what I think is and isn’t cheating with AI (and the short version is it is cheating if the writer directly copies/pastes AI output into something that the writer said they wrote). If I think a student is cheating with AI– which, for me, is based on my admittedly not perfect sense of what a particular student’s writing “sounds like,” and the document history of their Google Doc— I talk to them about it. In the last year and a half or so, I have had a lot more students cheating than I did before AI, meaning I’ve had to have a lot more of those uncomfortable conversations with cheating students. I give them another chance to do the assignment right, and almost all of them have managed to turn things around and pass the class just fine. I had a couple of repeat cheaters last year, and I failed them on the spot.
In a post on Substack where she was explaining why she’s using AI detection software, Anna Mills described a confrontation she had with a student who adamantly denied he cheated with AI even though Mills is almost certain he did. After all, students also know AI is difficult to detect. I get it, and it can be hard to prove AI cheating. I’m sure I’ve had students who have managed to get away with some AI that I would have counted as cheating had I known. But every time I have had that “I think you cheated” conversation with a student, be it with AI or old-fashioned plagiarism, that student has confessed, often in tears.
As I’ve said many MANY times before:
Most students do not want to cheat.
Students cheat when they are failing and they are desperate.
Students who cheat are not criminal masterminds and are easily caught.
All that said, it does depend on what exactly counts as cheating, and I don’t think it is cheating if students use AI as part of their process.
Good views this return to handwritten essays as a “balm for my tech-weary soul.” She goes on:
My students’ handwritten essays brim with their humanity. Each page conveys personality, craft, voice, and a “realness” that feels increasingly scarce in our screen-saturated, algorithmically-distorted information environment. As such, handwriting accomplishes something greater now than ever before in education: It restores a sense of trust to the student-teacher relationship that has been shaken by AI.
In the next paragraph, she also brings up some of the other beliefs in handwriting’s “authenticity,” that handwriting helps people make better connections in the brain than typing, that it results in better notes, etc. Well, right before Covid struck, I was researching laptop/cellphone bans in f2f classes and requiring students to take notes by hand. Long story short, the studies I’ve seen comparing laptop notes with handwritten notes in classrooms– mostly quantitative/experimental methodologies coming out of Education/Psychology– strike me as flawed for all kinds of different reasons. And the claims about handwriting as a tool for judging one’s “authenticity” and identity and the like have been debunked by many researchers– I would recommend in particular the very readable and well-researched book by Tamara Thornton, Handwriting in America: A Cultural History. I also have my own baggage as someone with terrible handwriting, who remembers failing handwriting in the fourth grade, and also as someone who has typed everything I could type since I was in high school.
So for me, the idea that handwriting is “better” and that it is both possible and reasonable to make judgements about the writer based on their handwriting, that more of one’s humanity is revealed through handwriting– that’s all bullshit.
Shirky doesn’t seem to think that handwriting has the same kind of “Magic” that Good sees in her students’ writing, and he admits that a lot of students and faculty are skeptical of this change. But in the name of rigor and a “more relational model of higher education,” we must return to the way things were done, and he then proceeds to cherry-pick different speech and writing assignments all the way back to the 1300s. In the process, I think he indirectly describes a lot of the pedagogy common to small discussion classes like first year writing: requiring students to meet during office hours, entering into “Socratic dialogue or simple Q&A” with the class, and so forth.
“There is the problem of scale,” with old techniques like oral exams, Shirky admits. “With some lecture classes in the hundreds of students, in-class conversation is a nonstarter.” Well, wait a minute: maybe the past practices we need to return to are smaller classes. Perhaps one of the reasons why I am not that worried about AI cheating is that I feel like I actually do most of these things in the classes I teach now. My students end up doing a lot of writing— discussion posts to readings, scaffolded essays part of the research project, and drafts of work in progress— along with plenty of discussing as well.
So what if every class were no more than 25 students? That wouldn’t be logistically possible, and it wouldn’t be a complete solution to AI cheating either, of course. But it’s a start, and we’ve known for a very long time that lecturing is also a terrible pedagogy.
I will say this: both end on a vaguely positive note, even if their optimism about the future does not strike me as particularly realistic. Good takes a lot of pleasure in this return to the past, connecting us back to Plato and education as “not a process of pouring knowledge into an empty soul, but as a ‘turning around’ of the soul in the direction of beauty and truth.” She sure seems to think that those blue-books and handwriting can accomplish a lot!
And after spending the rest of his op-ed saying there’s nothing to be done about AI except return to “technology free” classrooms, Shirky ends by predicting higher education will adapt. “Despite frequent pronouncements that college is doomed because students can now get an education from free online courses or TV or radio or the printing press, those revolutions never flattened us. Nor will A.I.” We’ll see. I want to believe Shirky is right, but….
I’m kind of surprised, but I am still coming across essays and Substack posts and such where teachers/professors are freaking out about AI. ChatGPT came out in November 2022, more than two and a half years ago. I would have thought folks would have moved on from these “writing assignments are dead” kinds of pieces by now, but no– throw a brick out a window and you’ll hit one. Here’s a good recent example: “The Death of the Student Essay– and the Future of Cognition” by Brian Klaas. The title is the gist of it– I’ll come back to Klaas’ essay later.
It’s not that these “the death of the assigned paper and now I’m going to make my students chisel everything into stone” eulogies are entirely wrong. As I’ve been saying for a few years now, AI means teachers who used to merely assign writing with no attention to process can’t do that anymore. AI means teachers need to adjust their approach to education. It doesn’t mean that all of a sudden everyone will stop learning.
And before I go any further, I kind of think what I’m writing about here is Captain Obvious wisdom, but here it goes: learning and education are not the same thing.
Here’s what I mean:
Learning is about gaining knowledge and skills, and humans do this in lots of different ways— play, practice, observation, experiences, trial and error. We learn things from others and the world around us, and while learning is often frustrating, I think learning is pleasurable and fulfilling. All of us start learning right after we’re born— how to get attention, to crawl, to roll, to walk, etc.— through help from our parents of course, but also on our own.
Some things we learn through exposure to the world around us; for example, speech. Of course, parents and others around babies try to help the process along (“say da-da!”), but mostly, babies and toddlers learn how to speak by picking up on how the humans around them are speaking. And as anyone who has parented or spent time around a chatty pre-schooler knows, sometimes it can be challenging to get them to stop talking.
On the other hand, some things we need to be taught how to do by others— not necessarily teachers per se, but other people who know how to do whatever it is we’re trying to learn. Reading and writing are good examples of this, which is one of the ways literacy is different from speech (or, as Walter Ong might have put it, orality). This is one of the reasons why, up until a few hundred years ago, the vast majority of people were illiterate.
Except Tarzan. This is a bit of a tangent, but bear with me:
Edgar Rice Burroughs’s famous novel Tarzan of the Apes is an extraordinarily interesting, odd, offensive novel, and most of the adaptations of the book gloss over its over-the-top fantasy and weirdness. At the beginning of the book, Tarzan’s parents are put ashore in Africa after a mutiny on their ship, and his father builds a cabin stocked with the goods Tarzan’s parents were traveling with, including a lot of books. The parents are killed by “apes” (which are somehow different from gorillas, but that’s a different story) and the baby that becomes Tarzan is raised by them.
When he is around 10, Tarzan stumbles across the cabin with its books, and, long story short, he teaches himself to read. He does this by staring at the marks on the pages of a children’s book, letters that looked like little bugs next to a picture of a strange ape that looked like him, and he figured out those little bugs were b-o-y. “And so he progressed very, very slowly, for it was a hard and laborious task which he had set himself without knowing it—a task which might seem to you or me impossible—learning to read without having the slightest knowledge of letters or written language, or the faintest idea that such things existed.” Basically, Burroughs is saying “yeah, I know, I know, but just go with it.”
In contrast, education is a technology. To quote from my book, education is the “formal schooling apparatus that enables the delivery of various kinds of evaluations, certificates, and degrees through a recognized, organized, and hierarchical bureaucracy. It’s a technology characterized by specific roles for participants (e.g., students, teachers, professors, principals, deans) and where students are generally divided into groups based on both age and ability.” This is an argument I belabor in some detail— you can read more about it here with the right JSTOR access— but I’m sure anyone reading this has had first-hand experience with what I’m talking about.
Learning and education are a Venn diagram: when schooling goes well, education facilitates learning, and successful learners are rewarded by their educational experiences with degrees and certifications. But sometimes schooling does not go well. For whatever reason, some students, especially in courses like first-year writing, just do not want to be there. That was the case for me in a lot of high school and college classes. Sometimes, it was because of bad teaching, but more often than not, it was my lack of interest in the subject, or the fact that it was a subject I was (and am still) not very good at– anything having to do with math or foreign languages, for example. Whatever the reason though, I knew I had to push through and do the course in order to move on toward finishing the degree.1
Everyone involved in education gets frustrated by the bureaucracies and rules of it, especially when the system that is education gets in the way of learning. For example, even professors in business colleges are annoyed by students who are not there to learn anything but just to get the credential and the job. Students are often annoyed at professors who are just so bad at teaching that they don’t seem to know how to help them learn, and everyone is annoyed with all of the other curricular hoops, paperwork, and constant grading. And that’s because learning is the fun part, and the important part!
But here’s the thing: the occupational, monetary, class, and cultural values of academic credentials– that is, the degree as a commodity– are only possible with the technology of education. It is why students and their families (our “customers”) are willing to pay universities so much money. As I wrote in my book, “Students would probably not enroll in courses or at universities where they didn’t feel they were learning anything, but they certainly would not pay for those courses if there was no credit toward a degree associated with them.”
Educators, and I like to think most students as well, are attracted to the university because they enjoy learning and place a high value on learning for the sake of learning: that is, the humanness of it all. But look, I don’t know anyone who is a teacher or a professor who does this work just for the love of it. This is a job, and if I didn’t get paid, I wouldn’t be doing it. Besides, there is a lot of value in education’s certifications and degrees in all of our day-to-day lives. I find it reassuring that the engineers who designed the car I drive (not to mention the roads and bridges I drive on) have degrees that certify a level of expertise. I am glad my dentist went to dental school, that my doctor went to medical school, and so on.
So, to circle back to how this connects with AI in general and with Brian Klaas’ essay in particular: I think the vast majority of the “AI and the end of student writing” essays I have read (including this one) are incorrect in at least two ways. The first way, which I have been writing about for a while now and which I mentioned at the beginning of this post, is about the distinction between assigning writing as a product and teaching writing as a process. Like most teachers, Klaas does not seem to have a series of assignments, peer reviews, opportunities to revise, etc.; he’s assigning a term paper and hoping students write something that demonstrates they understood the content of the class. Klaas writes “Previously,” meaning before AI, “there was a tight coupling between essay quality and underlying knowledge assembled with careful intelligence. The end goal (the final draft) was a good proxy for the actual point of the exercise (evaluating critical thinking). That’s no longer true.” By quality, I think Klaas means grammatical correctness, and I don’t think that has ever been the primary indicator of a student’s critical thinking. Yes, the students who write the best essays also tend to write in grammatically correct prose, but that’s a pretty low bar. And don’t even get me started on the complexities scholars in my field could unpack in Klaas’ claim about the “coupling” between “quality” and “intelligence.”
Klaas also doesn’t seem that interested in doing the extra work of teaching writing either. He writes:
More than once, a student quite clearly used ChatGPT, but to try to cover their tracks, they peppered citations for course readings—completely at random—throughout the text. For example, after a claim about an event in 2024 in Bangladesh, there was a citation for a book written ten years earlier—about the Arab Spring. “Rather impressive time machine they must have had,” I commented.
After a career working to develop expertise, countless hours teaching, and my best attempts to instill a love of learning in young minds, I had been reduced to the citation police.
I’m sure Klaas is correct and this student was cheating, but I’ve got some bad news for him: if you want students to use proper citation style, you have to teach it. And, as I’ve written about before, teaching citation is even more important with AI for a variety of reasons, including the fact that AI makes up citations like this all the time.
But again, Klaas doesn’t want to teach writing anyway; “Next year, my courses will be assessed with in-person exams.” Well, if Klaas was assigning writing so students could write essays that are like answers to questions in an exam, maybe he should have just given an exam in the first place.
This leads me back to my Captain Obvious Observation: learning and education are not the same thing. Yes, any of us can use AI as a crutch to skip our innate needs and desires to learn, but AI’s real impact is how it disrupts the technologies and apparatuses of education. Klaas says as much, ultimately. He points out that AI probably means “universities will need to find ways to certify that grades are the byproduct of carefully designed systems to ensure that assessments were produced by students.” And in passing, he writes “We must not fall into the trap of mistaking the outputs of writing (which are increasingly substitutable through technology) from the value of the cognitive process of writing (which hones mental development and cannot be substituted by a machine).”
Exactly. And I think we know how to do that.
First, we have to teach students about AI, and that’s especially true if we don’t want them to use it. For example, had Klaas explained to his students that AI makes up citations all the time, they might not have tried to cheat like that in the first place. It’s not enough to just say “don’t use it.”
Second, we need to lean more into learning, and we need to be more obvious in explaining to our students why this is important. Teachers need to do a better job of explaining to students and ourselves why we ask students to do things like write essays in the first place. It’s not just so teachers have something to assess as evidence of what grade that student deserves. That’s education. Rather, we have students write essays (or write code, do math problems, conduct mock experiments, etc.) because we’re hoping they might learn something.
Third, we need to change how we teach in ways that discourage relying too much on AI and encourage students to do the learning themselves. Unfortunately, this is a lot of work, and I think this is actually what Klaas and others lamenting the “death” of student writing are really complaining about. The “write a paper about such and such” assignments faculty have been relying on forever won’t work anymore. Though maybe that assignment you thought worked well before AI actually wasn’t that effective either?
“Moving on” did not necessarily mean finishing the course– I dropped several as an undergraduate to avoid a D or an F. Also, I was lucky and unlucky as an undergraduate when it came to my two weakest school subjects. For my degree in English back in the 1980s, I did not have to take any math courses at all. However, I was required to have four semesters of a foreign language. If I had had to take the math class that my EMU English majors have to take as part of general education, I’m not sure I would have made it. On the other hand, EMU students do not have to take a foreign language. I studied German, and I was terrible at it, which is why it took me about seven tries (including a summer school class) to pass the four semesters I needed. ↩︎
It’s a painting done around 1350 called “Henricus de Alemannia in Front of His Students” by Laurentius de Voltolina, depicting a lecture hall class at the University of Bologna. It’s one of those images that gets referenced once in a while about the ineffectiveness (or effectiveness) of the lecture as a teaching method, but I’m more interested in thinking about how recognizable this scene still is to humans now.
I wrote about this picture a bit in my book More than a Moment, which is about the rise and fall of MOOCs (remember them? the good old days!) in higher education. The second chapter is called “MOOCs as a Continuation of Distance Education Technologies” and it’s about some key moments/technologies in distance ed: correspondence courses, radio and television courses, and the first wave of “traditional” online courses.
I began the chapter by talking about a couple of MOOC entrepreneur TED talks in the early 2010s, including one by Peter Norvig in 2012 called “The 100,000 Student Classroom.” It’s a talk about a class in Artificial Intelligence Norvig co-taught (along with his then Stanford colleague Sebastian Thrun, who went on to create the MOOC start-up Udacity) with about 200 f2f students where they also allowed anyone to “participate” as “students” “online” in the “course” for “free.”1 Like most of the early high-profile MOOC prophets/ profiteers, Norvig and Thrun seem to believe that they discovered online teaching, completely unaware that less prestigious universities and community colleges have been offering online classes for decades. But I digress.
Norvig opens his talk by showing this image to suggest that nothing in education has changed in the last 600+ years. There’s still the “sage on the stage” lecturer, there are textbooks, and students sitting in rows as the audience, some paying close attention, some gossiping with each other, some sleeping. That is, nothing has changed— until now! It gets a laugh from the crowd, and it’s typical of the sort of hook that is part of the genre of a successful TED talk.
In my book, I point out that there are a lot of details of the modern classroom that would be unimaginable in the 14th century, things like audio-visual equipment and laptop computers, not to mention things we don’t even think of as technology anymore— electric lighting, synthetic building materials, controlled heating and cooling, whiteboards and chalkboards, and so forth. In fact, as I go on to argue, the conventional f2f classroom of “today” (well, almost 10 years ago, but you get the idea) is so seamlessly connected to digital libraries, social media, and online learning platforms that the line between f2f and online learning is fuzzy.
I still think that is mostly true, but the more I read about how AI is going to change education completely, the more re-seeing an image like this makes me wonder. Maybe the reason why we still recognize what is happening here is because this is what learning still looks like. In other words, what if the real reason why technology has not fundamentally changed learning is because this is just what it is?
Maybe it’s because I’m writing this now while traveling in Europe and I’ve seen a lot of old art depicting other things humans have done forever: worshiping, fighting, having sex, playing games, acting, singing, dancing. The details are different, and maybe it’s hard to recognize the weapon or the instrument or whatever, but we can still tell what’s going on because these are all things that humans have always done. Isn’t learning like this?
Don’t get me wrong— a lot of the details of how learning works have changed with technologies like literacy, correspondence, computer technology, online courses, and so on. But even an asynchronous online course is recognizable as being similar to this 1350 lecture hall course, or like a small group Socratic dialog, just one that takes place in writing, at a pace somewhere between snail mail exchanges and synchronous discussions.2
I guess what I’m getting at is maybe images like this one demonstrate that the desire to learn new things is something ingrained in the species. Learning is like all of these other things that human animals just do.
So if we can remember that learning does not mean the same thing as “going to college or whatever to get a credential to get a job” and that we are still a social species of animal that cannot stop trying to learn new things, maybe AI won’t “change everything” in education. And honestly, if the sci-fi scenarios of Artificial General/Super Intelligence come to pass and the machines replace their human creators, we’ve got much bigger problems to worry about.
These are all scare quotes because none of these words mean the same thing in MOOCs as they mean in conventional courses. ↩︎
In fact, I’d suggest that what happens in online discussion forums and on social media is much more like what Socrates meant by dialogue in Phaedrus than what he meant by the more problematic and unresponsive technology of writing. ↩︎
I am home from the 2025 Conference on College Composition and Communication, having left directly after my 9:30 am one-man-show panel for an uneventful drive home. I actually had a good time, but it will still probably be the last CCCCs for me. Probably.
The first part of the original title, “Echoes of the Past,” was just my lame effort at having something to do with the conference theme, so disregard that entirely. This has nothing to do with sound. The first part of my talk is about the part after the colon, “Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction.” That is something I will get to in a moment, and it connects to the second title,
“The Importance of Paying Attention To, Rather Than Resisting, AI.” It isn’t exactly what I had proposed to talk about, but I hope it’ll make sense.
So, the first part: I have always been interested in the history of emerging technologies, especially technologies that were once new and disruptive but became naturalized and are now seen not as technology at all but just as standard practice. There are lots of reasons why I think this is interesting, one of which is what these once-new and disruptive technologies can tell us now about emerging writing technologies. History doesn’t repeat, but it does rhyme, and the past can help prepare us for whatever is coming next.
For example, I published an essay a long time ago about the impact of chalkboards in 19th-century education, and I’ve presented at the CCCCs about how changes in pens were disruptive and changed teaching practices. I wrote a book about MOOCs where I argued they were not new but a continuation of the long history of distance education. As a part of that project, I wrote about the history of correspondence courses in higher education, which emerged in the late 19th century. Correspondence courses led to radio and television courses, which led to the first generation of online courses, MOOCs, and online courses as we know them now and post-Covid. Though sometimes emerging and disruptive technologies are not adopted. Experiments in teaching by radio and television didn’t continue, and while there are still a lot of MOOCs, they don’t have much to do with higher education anymore.
The same dynamic happened with the emergence of computer technology in the teaching of writing beginning in the late ’70s and early ’80s, and that even included a discussion of Artificial Intelligence– sort of. In the course of poking around and doing some lazy database searches, I stumbled across the first article in the first issue– a newsletter at the time– of what would become the journal Computers and Composition, a short piece by Hugh Burns called “A Note on Composition and Artificial Intelligence.”
Incidentally, this is what it looks like. I have not seen the actual physical print version of this article, but the PDF looks like it might have been typed and photocopied. Anyway, this was published in 1983, a time when AI researchers were interested in the development of “expert systems,” which worked with various programming rules and logic to simulate the way humans tend to think, at least in a rudimentary way.
Incidentally and just in case we don’t all know this, AI is not remotely new, with a lot of enthusiasm and progress in the late 1950s through the 1970s, and then with a resurgence in the 1980s with expert systems.
In this article, Burns, who wrote one of the first dissertations about the use of computers to teach writing, discusses the relevance of the research in the field of artificial intelligence and natural language processing in the development of Computer Aided Instruction, or CAI, which is an example of the kind of “expert system” applications of the time. “I, for one,” Burns wrote, “believe composition teachers can use the emerging research in artificial intelligence to define the best features of a writer’s consciousness and to design quality computer-assisted instruction – and other writing instruction – accordingly” (4).
If folks nowadays remember anything at all about CAI, it’s probably “drill and kill” programs for practicing things like sentence combining, grammar skills, spelling, quizzes, and so forth. But what Burns was talking about was a program called TOPOI, which walked users through a series of invention questions based on Tagmemic and Aristotelian rhetoric.
There were several similar prompting, editing, and revision tools at the time. One was Writer’s Workbench, which was an editing program developed by Bell Labs and initially meant as a tool for technical writers at the company. It was adopted for writing instruction at a few colleges and universities, and
John T. Day wrote about St. Olaf College’s use of Writer’s Workbench in Computers and Composition in 1988 in his article “Writer’s Workbench: A Useful Aid, but not a Cure-All.” As the title of Day’s article suggests, the reviews of Writer’s Workbench were mixed. But I don’t want to get into all the details Day discusses here. Instead, what I wanted to share is Day’s faux epigraph.
I think this kind of sums up a lot of the profession’s feelings about the writing technologies that started appearing in classrooms– both K-12 and in higher education– as a result of the introduction of personal computers in the early 1980s. CAI tools never really caught on, but plenty of other software did, most notably word processing, and then networked computers, this new thing “the internet,” and then the World Wide Web. All of these technologies were surprisingly polarizing among English teachers at the time. And as an English major in the mid-1980s who also became interested in personal computers and then the internet and then the web, I was “an enthusiast.”
From around the late 1970s and continuing well into the mid-1990s, there were hundreds of articles and presentations in major publications in composition and English studies like Burns’ and Day’s pieces, about the enthusiasms and skepticisms of using computers for teaching and practicing writing. Because it was all so new and most folks in English studies knew even less about computers than they do now, a lot of that scholarship strikes me now as simplistic. Much of what appeared in Computers and Composition in its first few years was teaching anecdotes, as in “I had students use word processing in my class and this is what happened.” Many articles were trying to compare writing with and without computers, writing with a word processor or by hand, how students of different types (elementary/secondary, basic writers, writers with physical disabilities, skilled writers, etc.) were harmed or helped with computers, and so forth.
But along with this kind of “should you/shouldn’t you write with computers” theme, a lot of the scholarship in this era raised questions that have continued with every other emerging and contentious technology associated with writing, including, of course, AI: questions about authorship, the costs (because personal computers were expensive), the difficulty of learning and also teaching the software, cheating, originality, “humanness” and so on. This scholarship was happening at a time when using computers to practice or teach writing was still perceived as a choice– that is, it was possible to refuse and reject computers. I am assuming that the comparison I’m making here to this scholarship and the discussions now about AI are obvious.
So I think it’s worth re-examining some of this work where writers were expressing enthusiasms, skepticisms, and concerns about word processing software and personal computers and comparing it to the moment we are in with AI in the form of ChatGPT, Gemini, Claude, and so forth. What will scholars 30 years from now think about the scholarship and discourse around Artificial Intelligence that is in the air currently?
Anyway, that was going to be the whole talk from me and with a lot more detail, but that project for me is on hold, at least for now. Instead, I want to pivot to the second part of my talk, “The Importance of Paying Attention To, Rather Than Resisting, AI.”
I say “Rather Than Resisting” or Refusing AI in reference to Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes’ website “Refusing Generative AI in Writing Studies,” but also in reference to articles such as Melanie Dusseau’s “Burn It Down: A License for AI Resistance,” which was a column in Inside Higher Ed in November 2024, and other calls to refuse/resist using AI. “The Importance of Paying Attention To” is my reference to Cynthia Selfe’s “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which was first presented as her CCCC chair’s address in 1998 (published in 1999) and which was also expanded as a book called Technology and Literacy in the Twenty-first Century.
If Hugh Burns’ 1983 commentary in the first issue of Computers and Composition serves for me as the beginning of this not-so-long-ago history, when personal computers were not something everyone had or used and when they were still contentious and emerging tools for writing instruction and practice, then Selfe’s CCCCs address/article/book represents the point where computers (along with all things internet) were no longer optional for writing instruction and practice. And it was time for English teachers to wake up and pay attention to that.
And before I get too far, I agree with eight out of the ten points on the “Refusing Generative AI in Writing Studies” website, broadly speaking. I think these are points that most people in the field nowadays would agree with, actually.
But here’s where I disagree. I don’t want to go into this today, but the environmental impact of the proliferation of data centers is not limited to AI. And when it comes to this last bullet point, no, I don’t think “refusal” or resistance are principled or pragmatic responses to AI. Instead, I think our field needs to engage with and pay attention to AI.
Now, some might argue that I’m taking the call to refuse/resist AI too literally and that the kind of engagement I’m advocating is not at odds with refusal.
I disagree. Word choices and their definitions matter. Refusing means being unwilling to do something. Paying attention means to listen to and to think about something. Much for the same reasons Selfe spoke about 27 years ago, there are perils to not paying attention to technology in writing classrooms. I believe our field needs to pay attention to AI by researching it, teaching with it, using it in our own writing, goofing around with it, and encouraging our students to do the same. And to be clear: studying AI is not the same as endorsing AI.
Selfe’s opening paragraph is a kidding/not kidding assessment of the CCCCs community’s feelings about technology and the community’s refusal to engage with it. She says many members of the CCCCs over the years have shared some of the best ideas we have from any discipline about teaching writing, but it’s a community that has also been largely uninterested in the focus of Selfe’s work, the use of computers to teach composition. She said she knew bringing up the topic in a keynote at the CCCCs was “guaranteed to inspire glazed eyes and complete indifference in that portion of the CCCC membership which does not immediately sink into snooze mode.” She said people in the CCCCs community saw computers as disconnected from their humanitarian concerns and as a distraction from the real work of teaching literacy.
It was still possible in a lot of English teachers’ minds to separate computers from the teaching of writing– at least in the sense that most CCCCs members did not think about the implications of computers in their classrooms. Selfe says “I think [this belief] informs our actions within our home departments, where we generally continue to allocate the responsibility of technology decisions … to a single faculty or staff member who doesn’t mind wrestling with computers or the thorny, unpleasant issues that can be associated with their use.”
Let me stop for a moment to note that in 1998, I was there. I attended and presented at that CCCCs in Chicago, and while I can’t recall if I saw Selfe’s address in person (I think I did), I definitely remember the times.
After finishing my PhD in 1996, I was hired by Southern Oregon University as their English department’s first “computers and writing” specialist. At the 1998 convention, I met up with my future colleagues at EMU because I had recently accepted the position I currently have, where I was once again hired as a computer and writing specialist. At both SOU and EMU, I had colleagues– you will not be surprised to learn these tended to be senior colleagues– who questioned why there was any need to add someone like me to the faculty. In some ways, it was similar to the complaints I’ve seen on social media about faculty searches involving AI specialists in writing studies and related fields.
Anyway, Selfe argues that in hiring specialists, English departments outsourced responsibility for computer technology so that the rest of the faculty didn’t have to have anything to do with it. It enabled a continued belief that computers are simply “tool[s] that individual faculty members can use or ignore in their classrooms as they choose, but also one that the profession, as a collective whole–and with just a few notable exceptions–need not address too systematically.” Instead, she argued that what people in our profession needed to do was to pay attention to these issues, even if we really would rather refuse to do so: “I believe composition studies faculty have a much larger and more complicated obligation to fulfill–that of trying to understand and make sense of, to pay attention to, how technology is now inextricably linked to literacy and literacy education in this country. As a part of this obligation, I suggest that we have some rather unpleasant facts to face about our own professional behavior and involvement.” She goes on a couple of paragraphs later to say in all italics “As composition teachers, deciding whether or not to use technology in our classes is simply not the point–we have to pay attention to technology.”
Again, I’m guessing the connection to Selfe’s call then to pay attention to computer technology and my call now to pay attention to AI is pretty obvious.
The specific case example Selfe discusses in detail in her address is a Clinton-Gore era report called Getting America’s Students Ready for the Twenty-First Century, which was about that administration’s efforts to promote technological literacy in education, particularly in K-12 schools. The initiative spent millions on computer equipment, an amount of money that dwarfed the spending on literacy programs. As I recall those times, the main problem with this initiative was there was lots of money spent to put personal computers into schools, but very little money was spent on how to use the computers in classrooms. Selfe said, “Moreover, in a curious way, neither the CCCC, nor the NCTE, the MLA, nor the IRA–as far as I can tell–have ever published a single word about our own professional stance on this particular nationwide technology project: not one statement about how we think such literacy monies should be spent in English composition programs; not one statement about what kinds of literacy and technology efforts should be funded in connection with this project or how excellence should be gauged in these efforts; not one statement about the serious need for professional development and support for teachers that must be addressed within context of this particular national literacy project.”
Selfe closes with a call for action and a need for our field and profession to recognize technology as important work we all do around literacy. I’ve cherry-picked a couple of quotes here to share at the end. Again, by “technology”, Selfe more or less meant PCs, networked computers, and the web, all tools we all take for granted. But also again, every single one of these calls applies to AI as well.
Now, I think the CCCCs community and the discipline as a whole have moved in the direction Selfe was urging in her CCCCs address. Unlike the way things were in the 1990s, I think there is widespread interest in the CCCC community in studying the connections between technologies and literacy. Unlike then, both MLA and CCCCs (and presumably other parts of NCTE) have been engaged and paying attention. There is a joint CCCC-MLA task force that has issued statements and guidance on AI literacy, along with a series of working papers, all things Selfe was calling for back then. Judging from this year’s program and the few presentations I have been able to attend, it seems like a lot more of us are interested in engaging and paying attention to AI rather than refusing it.
At the same time, there is an echo–okay, one sound reference– of the scholarship in the early era of personal computers. A lot of the scholarship about AI now is based on teachers’ experiences of experimenting with it in their own classes. And we’re still revisiting a lot of the same questions regarding the extent to which we should be teaching students how to use AI, the issues of authenticity and humanness, of cheating, and so forth. History doesn’t repeat, but it does rhyme.
Let me close by saying I have no idea where we’re going to end up with AI. This fall, I’m planning on teaching a special topics course called Writing, Rhetoric, and AI, and while I have some ideas about what we’re going to do, I’m hesitant about committing too much to a plan now since all of this could be entirely different in a few months. There’s still the possibility of generative AI becoming artificial general intelligence and that might have a dramatic impact on all of our careers and beyond. Trump and shadow president Elon Musk would like nothing better than to replace most people who work for the federal government with this sort of AI. And of course, there is also the existential albeit science fiction-esque possibility of an AI more intelligent than humans enslaving us.
But at least I think that we’re doing a much better job of paying attention to technology nowadays.
The first time I attended and presented at the CCCCs was in 1995. It was in Washington, D.C., and I gave a talk that was about my dissertation proposal. I don’t remember all the details, but I probably drove with other grad students from Bowling Green and split a hotel room, maybe with Bill Hart-Davidson or Mick Doherty or someone like that. I remember going to the big publisher party sponsored by Bedford-St. Martin’s (or whatever they were called then) which was held that year at the National Press Club, where they filled us with free cocktails and enough heavy hors d’oeuvres to serve as a meal.
For me, the event has been going downhill for a while. The last time I went to the CCCCs in person was in 2019– pre-Covid, of course– in Pittsburgh. I was on a panel of three scheduled for 8:30 am Friday morning. One of the people on the panel was a no-show, and the other panelist was Alex Reid; one person showed up to see what we had to say– though at least that one person was John Gallagher. Alex and I went out to breakfast, and I kind of wandered around the conference after that, uninterested in anything on the program. I was bored and bummed out. I had driven, so I packed up and left Friday night, a day earlier than I planned.
And don’t even get me started on how badly the CCCCs did at holding online versions of the conference during Covid.
So I was feeling pretty “done” with the whole thing. But I decided to put in an individual proposal this year because I was hoping it would be the beginning of another project to justify a sabbatical next year, and I thought going to one more CCCCs 30 years after my first one rounded things out well. Plus it was a chance to visit Baltimore and to take a solo road trip.
This year, the CCCCs/NCTE leadership changed the format for individual proposals, something I didn’t figure out until after I was accepted. Instead of creating panels made up of three or four individual proposals, which is what the CCCCs had always done before– which is what every other academic conference I have ever attended does with individual proposals— they decided that individuals would get a 30-minute solo session. To make matters even worse, my time slot was 9:30 am on Saturday, which is the day most people are traveling back home.
Oh, also: my sabbatical/research release time proposal got turned down, meaning my motivation for doing this work at all has dropped off considerably. I thought about bailing out right up to the morning I left. But I decided to go through with it because I was also going to Richmond to visit my friend Dennis, I still wanted to see Baltimore, and I still liked the idea of going one more time, 30 years later.
Remarkably, I had a very good time.
It wasn’t like what I think of as “the good old days,” of course. I guess there were some publisher parties, but I missed out on those. I did run into people who I know and had some nice chats in the hallways of the enormous Baltimore convention center, but I mostly kept to myself, which was actually kind of nice. My “conference day” was Friday and I saw a couple of okay to pretty good panels about AI things– everything seemed to be about AI this year. I got a chance to look around the Inner Harbor on a cold and rainy day, and I got in half-price to the National Aquarium. And amazingly, I actually had a pretty decent-sized crowd (for me) at my Saturday morning talk. Honestly, I haven’t had as good of a CCCCs experience in years.
But now I’m done– probably.
I’m still annoyed with (IMO) the many many failings of the organization, and while I did have a good solo presenting experience, I still would have preferred being on a panel with others. But honestly, the main reason I’m done with the CCCCs (and other conferences) is not because of the conference but because of me. This conference made it very clear: essentially, I’ve aged out.
When I was a grad student/early career professor, conferences were a big deal. I learned a lot, I was able to do a lot of professional/social networking, and I got my start as a scholar. But at this point, where I am as promoted and as tenured as I’m ever going to be and where I’m not nearly as interested in furthering my career as I am retiring from it, I don’t get much out of all that anymore. And all of the people I used to meet up with and/or room with 10 or so years ago have quit going to the CCCCs because they became administrators, because they retired or died, or because they too just decided it was no longer necessary or worth it.
So that’s it. Probably. I have been saying for a while now that I want to shift from writing/reading/thinking about academic things to other non-academic things. I started my academic career as a fiction writer in an MFA program, and I’ve thought for a while now about returning to that. I’ve had a bit of luck publishing commentaries, and of course, I’ll keep blogging.
Then again, I feel like I got a good response to my presentation, so maybe I will stay with that project and try to apply for a sabbatical again. And after all, the CCCCs is going to be in Cleveland next year and Milwaukee the year after that….
The two big things on my mind right now are finishing this semester (I am well into the major grading portion of the term in all three of my classes) and preparing for the CCCCs road trip that will begin next week. I’m sure I’ll write more on the CCCCs/road trip after I’m back.
But this morning, I thought I’d write a post about a course I’m hoping to teach this fall, “Writing, Rhetoric, and AI.” I’ve set up that page on my site with a brief description of the course– at least as I’m imagining it now. “Topics in” courses like this always begin with just a sketch of a plan, but given the twists and turns and speed of developments in AI, I’ve learned not to commit to a plan too early.
For example: the first time I tried to teach anything about AI was in a 300-level digital writing course I taught in fall 2022. I came up with an AI assignment based in part on an online presentation by Christine Photinos and Julie Wihelm for the 2023 Computers and Writing Conference, and also on Paul Fyfe’s article “How to Cheat on Your Final Paper: Assigning AI for Student Writing.” My plan at the beginning of that semester was to have students use the same AI tool these writers were talking about, OpenAI’s GPT-2. By the time we were starting to work on the AI writing assignment for that class, ChatGPT had been released. So plans changed, English teachers started freaking out, etc.
Anyway, the first thing that needs to happen is the class needs to “make”– that is, get enough students to justify running it at all. But right now, I’m cautiously optimistic that it is going to happen. The course will be on Canvas and behind a firewall, but my plan for now is to eventually post assignments and reading lists and the like here– once I figure out what we’re going to do.
What changed Mills’ mind? Well, it sounds like she had had enough:
I argued against use of AI detection in college classrooms for two years, but my perspective has shifted. I ran into the limits of my current approaches last semester, when a first-year writing student persisted in submitting work that was clearly not his own, presenting document history that showed him typing the work (maybe he typed it and maybe he used an autotyper). He only admitted to the AI use and apologized for wasting my time when he realized that I was not going to give him credit and that if he initiated an appeals process, the college would run his writing through detection software.

I haven’t had this kind of encounter with a student over AI cheating, but it’s not hard for me to imagine this scenario. It might be the last straw for me too. And like I think is the case with Mills, I’m getting sick of seeing this kind of dumb AI cheating.
Last November, I wrote here about a “teachable moment” I had when an unusually high number of freshman comp students dumbly cheated with AI. The short version: for the first short assignment (2 or 3 pages), students are supposed to explain why they are interested in the topic they’ve selected for their research, and to explain what prewriting and brainstorming activities they did to come up with their working thesis. It’s not supposed to be about why they think their thesis is right; it’s supposed to be a reflection on the process they used to come up with a thesis that they know will change with research. It’s a “pass/revise” assignment I’ve given for years, and I always have a few students who misunderstand and end up writing something kind of like a research paper with no research. I make them revise. But last fall, a lot more of my students did the assignment wrong because they blindly trusted what ChatGPT told them. I met with these students, reminded them what the assignment actually was, and reminded them that AI cannot write an essay that explains what they think.
I’m teaching another couple of sections of freshman composition this semester and students just finished that first assignment. I warned them about the mistakes students made with AI last semester, and I repeated more often that the assignment is about their process and is not a research paper. The result? Well, I had fewer students trying to pass off something written by AI, but I still had a few.
My approach to dealing with AI cheating is the same as it has been ever since ChatGPT appeared: I focus on teaching writing as a process, and I require students to use Google Docs so I can use the version history to see how they put together their essays. I still don’t want to use Turnitin, and to be fair, Mills has not completely gone all-in with AI detection. Far from it. She sees Turnitin as an additional tool to use along with solid process writing pedagogy. Mills also shares some interesting resources about research into AI detection software and the difficulty of accurately spotting AI writing. Totally worth checking her post out.
I do disagree with her about how difficult it is to spot AI writing. Sure, it’s hard to figure out if a chunk of writing came from a human or an AI if there’s no context. But in writing classes like freshman composition, I see A LOT of my students’ writing (not just in final drafts), and because these are classes of 25 or so students, I get to know them as writers and people fairly well. So when a struggling student suddenly produces a piece of writing that is perfect grammatically and that sounds like a robot, I get suspicious and I meet with the student. So far, they have all confessed, more or less, and I’ve given them a second chance. In the fall, I had a student who cheated a second time; I failed them on the spot. If I had a student who persisted like the one Mills describes, I’m not quite sure what I would do.
But like I said, I too am starting to get annoyed that students keep using AI like this.
When ChatGPT first became a thing in late 2022 and everyone was all freaked out about everyone cheating, I wrote about/gave a couple of talks about how plagiarism has been a problem in writing classes literally forever. The vast majority of examples of plagiarism I see are still a result of students not knowing how to cite sources (or just being too lazy to do it), and it’s clear that most students don’t want to cheat and they see the point of needing to do the work themselves so they might learn something.
But it is different. Before ChatGPT, I had to deal with a blatant and intentional case of plagiarism once every couple of years. For the last year or so, I’ve had to deal with some examples of blatant AI plagiarism in pretty much every section of first-year writing I teach. It’s frustrating, especially since I like to think that one of the benefits of teaching students how to use AI is to discourage them from cheating with it.
The other day, I read Marc Watkins’ excellent Substack post “AI Is Unavoidable, Not Inevitable,” and I would strongly encourage you to take a moment to do the same. Watkins begins by noting that he is “seeing a greater siloing among folks who situate themselves in camps adopting or refusing AI.” What follows is not exactly a direct response to these refusing folks, but it’s pretty close and I find myself agreeing with Watkins entirely. As he says, “To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of ‘it’s very complicated.'”
Like I said, you really should read the whole thing. But I will share this long quote that is so on point:
Many of us have wanted to take a path of actively resisting generative AI’s influence on our teaching and our students. The reasons for doing so are legion—environmental, energy, economic, privacy, and loss of skills, but the one that continually pops up is not wanting to participate in something many of us fundamentally find unethical and repulsive. These arguments are valid and make us feel like we have agency—that we can take an active stance on the changing landscape of our world. Such arguments also harken back to the liberal tradition of resisting oppression, protesting what we believe to be unjust, and taking radical action as a response.
But I do not believe we can resist something we don’t fully understand. Reading articles about generative AI or trying ChatGPT a few times isn’t enough to gauge GenAI’s impact on our existing skills. Nor is it enough to rethink student assessments or revise curriculum to try and keep pace with an ever-changing suite of features.
To meaningfully practice resistance of AI or any technology requires engagement. As I’ve written previously, engaging AI doesn’t mean adopting it. Refusing a technology is a radical action and we should consider what that path genuinely looks like when the technology you despise is already intertwined with the technology you use each day in our very digital, very online world.
Exactly. Teachers of all sorts, but especially those of us who are also researchers and scholars, need to engage with AI well enough to know what we are either embracing or refusing. Refusing without that engagement is at best willful ignorance.
AI is difficult to compare to previous technologies (as Watkins says, AI defies analogies), but I do think the emergence of AI now is kind of like the emergence of computers and the internet as tools for writing a couple of decades ago. A pre-internet teacher could still refuse that technology by insisting students take notes by hand, hand in handwritten papers, and take proctored timed exams completed on paper forms. When I started at EMU in 1998, I still had a few very senior colleagues who taught like this, who never touched their ancient office computers, who refused to use email, etc. But try as they might, that pre-internet teacher who required their students to hand in handwritten papers did not make computers and the internet disappear from the world.
It’s not quite the same now with AI as it was with the internet back then because I don’t think we are at the point where we can assume “everyone” routinely uses AI tools all the time. This is why I for one am quite happy that most universities have not rolled out institutional policies on AI use in teaching and scholarship– it’s still too early for that. I’ve been experimenting with incorporating AI into my teaching for all kinds of different reasons, but I understand and respect the choices of my colleagues to not allow their students to use AI. The problem, though, is that refusing AI does not make it disappear from students’ lives outside of class– or even within that class. After all, if someone uses AI as a tool effectively– not to just crudely cheat, but to help learn the subject or as a tool to help with the writing– there is no way for that AI-forbidding professor to tell.
Again, engaging with AI (or any other technology) does not mean embracing, using, or otherwise “liking” AI (or any other technology). I spent the better part of the 2010s studying and publishing about MOOCs, and among many other things, I learned that there are some things MOOCs can do well and some things they cannot. But I never thought of my blogging and scholarship as endorsing MOOCs, certainly not as a valid replacement for in-person or “traditional” online courses.
I think that’s the point Watkins is trying to make, and for me, that’s what academics do: we’re skeptics, especially of things based on wild and largely unsubstantiated claims. As Watkins writes, “… what better way to sell a product than to convince people it can lead to both your salvation and your utter destruction? The utopia/dystopia narratives are just two sides of a single fabulist coin we all carry around with us in our pockets about AI.”
This is perhaps a bad transition, but thinking about this reminded me of Benjamin Riley’s Substack post back in December, “Who and What comprise AI Skepticism?” This is one of those “read it if you want to get into the weeds” sorts of posts, but the very short version: Casey Newton, who is a well-known technology journalist, wrote about how he thought there are only two camps of AI Skepticism: AI is real and dangerous, and AI is fake and sucks. Well, A LOT of prominent AI experts and writers disputed Newton’s argument, including Riley. What Riley does in his post is describe/create his own taxonomy of nine different categories of AI Skepticism, including one category he calls the “Sociocultural Commentator Critics– ‘the neo-Luddite wing,'” which would include AI refusers.
Go and check it out to see the whole list, but I would describe my skepticism as being most like the “AI in Education Skeptics” and the “Technical AI Skeptics” categories, along with a touch of the “Skeptics of AI Art and Literature” category. Riley says AI in Education Skeptics are “wary of yet another ed-tech phenomena that over-hypes and under-delivers on its promises.” I think we all felt the same wariness about ed-tech and over-hype with MOOCs.
Riley’s Technical AI Skeptics are science-types, but what I identify with is exploring and exposing AI’s limitations. AI failures are at least as interesting to me as AI successes, and it makes me question all of these claims about AI passing various tests or whatever. AI can do no wrong in controlled experiments much in the same way that self-driving cars do just fine on a closed course in clear weather. But just like that car doesn’t do so great driving itself through a construction zone or a snowstorm, AI isn’t nearly as capable outside of the lab.
And I say a touch of the Skeptics of AI Art and Literature because while I don’t have a problem with people using AI to make art or to write things, I do think that “there is something essential to being human, to being alive, that we express through art and writing.” Actually, this is one of my sources of “cautious optimism” about AI: since it isn’t that good at doing the kind of human things we teach directly and indirectly in the humanities, maybe there’s a future in these disciplines after all.
I’ll add two other reasons why I’m skeptical about AI. First, I wonder about the business model. While this is not exactly my area of expertise, I keep reading pieces by people who do know what they’re talking about raising the same questions about where the “return on investment” is going to come from. The emergence of DeepSeek is less about its technical capabilities and more about further disrupting those business plans.
Second, I am skeptical about how disruptive AI is going to be in education. It’s fun and easy to talk with AI chatbots, and they can be helpful for some parts of the writing process, especially when it comes to brainstorming, feedback on a draft, proofreading, and so forth. There might be some promise that today’s AI will enable useful computer-assisted instruction tools that go beyond “drill and kill” applications from the 1980s. And assuming AI continues to develop and mature into a truly general-purpose technology (like electricity, automobiles, the internet, etc.), of course, it will change how everything works, including education. But besides the fact that I don’t think AI is going to ever be good enough to replace the presence of humans in the loop, I don’t think anyone is comfortable with an AI replacing a human teacher (or, for that matter, human physicians, airline pilots, lawyers, etc.).
If there is going to be an ROI opportunity from the trillion dollars these companies have sunk into this stuff, it ain’t going to come from students using AI for school work or from people noodling around with it for fun. The real potential with AI is in research, businesses, and industries that work with enormous data sets and in handling complex but routine tasks: coding, logistics, marketing, finance, research into the discovery of new proteins or novel building materials, and anything involving making predictions based on a large database.
Of course, the fun (and scary and daunting!) part of researching AI and predicting its future is that everyone is probably mostly wrong, but some of us might have a chance of being right.
As I wrote about earlier in December, I am “Back to Blogging Again” after experimenting with shifting everything to Substack. I switched back to blogging because I still get a lot more traffic on this site than on Substack, and because my blogging habits are too eclectic and random to be what I think of as a Newsletter. I realize this isn’t true for lots of Substackers, but to me, a Newsletter should be about a more specific “topic” than a blog, and it should be published on a more regular schedule.
So that’s my goal with “Paying Attention to AI.” We’ll see how it works out. I still want to post those Substack things here too, because this is a platform I control, unlike any of the other ones owned by tech oligarchs or whatever, and because while I do like Substack, there is still the “Nazi problem” they are trying to work out. Besides, while Substack could be bought out and turned into a dumpster fire (lookin’ at you, X), no one is going to buy stevendkrause.com, and that’s even if I was selling.
Anyway, here’s the first post on that new Substack space.
Welcome to (working title) Paying Attention to AI
More Notes on Late 20th Century Composition, CAI, Word Processing, the Internet, and AI
My goal for this Substack site/newsletter/etc. is to write (mostly to myself) about what will probably be the last big research/scholarly project of my academic career, but I still don’t have a good title. I’m currently thinking “Paying Attention to AI,” a reference to Cynthia Selfe’s “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which was her chair’s address at the 1997 Conference on College Composition and Communication before it was republished in the journal for the CCCs in 1999 and also expanded into the book Technology and Literacy in the Twenty-First Century.
But I also thought something mentioning AI, Composition, and “More Notes” would be good. That’s a reference to “A Note on Composition and Artificial Intelligence,” a brief 1983 article by Hugh Burns in the first newsletter issue of what would become the journal Computers and Composition. AI meant something quite different in the late 1970s/early 1980s, of course. Burns was writing then about how research in natural language processing and AI could help improve Computer Assisted Instruction (CAI) programs, which were then seen as one of the primary uses of computer technology in the teaching of writing— along with the new and increasingly popular word processing programs that ran on the then newly emerging personal computers.
Maybe I’ll figure out a way to combine the two into one title…
This project is based on a proposal that’s been accepted for the 2025 CCCCs in Baltimore, and also on a proposal I have submitted at EMU for a research leave or a sabbatical for the 2025-26 school year. 1 I’m interested in looking back at the (relatively) recent history of the beginnings of the widespread use of “computers” in the teaching of writing (CAI, personal computers, word processors and spell/grammar checkers, local area networks, and the beginnings of “the internet”).
Burns’ and Selfe’s articles make nice bookends for this era for me because between the late 1970s and about the mid-1990s, there were hundreds of presentations and articles in major publications in writing studies and English about the role of personal computers and (later) the internet in the teaching of writing. Burns was enthusiastic about the potential of AI research and writing instruction, calling for teachers to use emerging CAI and other tools. It was still largely theory, though, since in 1983, fewer than 8% of households had a personal computer. By the time Selfe was speaking and then writing 13 or so years later, over 36% of households had at least one computer, and the internet and “World Wide Web” were rapidly taking their place as a general purpose technology altering the ways we do nearly everything, including how we teach and practice writing.
These are also good bookends for my own history as a student, a teacher, and a scholar, not to mention as a writer who dabbled a lot with computers for a long time. I first wrote with computers in the early 1980s while in high school. I started college in 1984 with a typewriter, and I got a Macintosh 512KE by about 1986. I was introduced to the idea of teaching writing in a lab of terminals— not PCs— connected to a mainframe Unix computer when I started my MFA program in fiction writing at Virginia Commonwealth University in 1988. (I never taught in that lab, fwiw). In the mid-90s, while in my PhD program at Bowling Green State University, the internet and “the web” came along, first as text (remember Gopher? Lynx?) and then as GUI interfaces like Netscape. By the time Selfe was urging the English teachers attending the CCCCs to, well, pay attention to technology, I had started my first tenure-track job.
A lot of what I read about AI right now (mostly on social media and MSM, but also in more scholarly work) has a tinge of the exuberant enthusiasm and/or the moral panic about the encroachment of computer technology back then, and that interests me a great deal. But at the same time, this is a different moment in lots of small and large ways. For one thing, while CAI applications never really caught on for teaching writing (at least beyond middle school), AI shows some real promise in making similar tutoring tools actually work. Of course, there were also a lot of other technologies and tools way back when that had their moments but then faded away. Remember MOOs/MUDs? Listservs? Blogs? And more recently, MOOCs?
So we’ll see where this goes.
1 FWIW: in an effort to make it kinda/sorta fit the conference theme, this presentation is awkwardly titled “Echoes of the Past: Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction.” This will almost certainly be the last time I attend the CCCCs, my field’s annual flagship conference, because, as I am sure I will write about eventually, I think it has become a shit show. And whether or not this project continues much past the April 2025 conference will depend heavily on the research release time from EMU. Fingers crossed on that.
This past year was A LOT for me and the rest of my family. So so SO much happened, so much of it horrible and still difficult to comprehend, so much of it fantastic and beautiful. I suppose this “the worst of times/the best of times” sentiment is always kinda true, but I can’t think of another year where there was just so so much and in such extremes.
It’s been a lot. It’s been way too much for one year.
January
We were already well underway with one of the big ticket items of this year, which is building/buying/selling houses and moving for the first time in over 25 years.
On January 7, I started taking Zepbound, which is one of those weight loss drugs in the same category as the one everyone has heard of, Ozempic (though, as I wrote about during the year, it’s more complicated than that).
My niece Emily got married in a huge and very Catholic ceremony in Kansas City. This was the first of the nieces/nephews (or cousins or grandchildren, depending on your perspective) to get married, so a big deal for the Krauses. Remarkably, there were no hitches with the weather or anything else.
The idea of moving started to get a lot more real when we were able to do a walk-through of the house right after they did the inspection for the stuff they needed to do before they put up drywall.
Of course, we (mostly me) have been driving by the construction site since November to see the progress, but walking around in what would become the upstairs/Steve loft area, the stairs descending into the living room/main room, and the kitchen area was pretty cool. The Zepbound adventures continued (I was down about 7 pounds by the end of the month), as did the all-first-year-writing semester.
March
We started getting real about selling the old house and preparing the move to the new one, and because we lived in our previous house in Normal Park for 25 years, it was stressful. I mean, we had decades’ worth of stuff to sort through– pack, sell, toss– and there was all the decluttering and the nervousness of would it sell and would we get what we were asking and all that. It’s kind of funny because everyone we talked to about this stuff– including my parents and in-laws– had all moved at least once (and usually twice) in the 25 years we hadn’t thought about moving at all.
It’s funny to think about too because Annette grew up as an Air Force brat and her father was in for over 20 years, meaning she moved more than a dozen times before she was 15. I didn’t move that much as a kid, but we did move a couple of times, and in college and through my MFA program, I moved almost every year. So we used to know how to move.
April was the beginning of the “A LOT,” the far too much of the year. We had two open houses on the first Sunday of the month, and then on April 8, Annette and I cleared out to make room for potential buyers to come take a second look while we went to the eclipse. We met our friends Steve and Michelle and their daughter down in Whitehouse, Ohio (just outside of Toledo), which seemed like the easiest place to get to for the totality while avoiding bumper-to-bumper traffic into the “totality zone” in northern Ohio.
As I wrote on Instagram, being there for the totality was intense. I probably won’t be able to see another total eclipse in my lifetime; then again, a cruise in August 2027 in the Mediterranean is not impossible.
We had a second open house, which was nerve-wracking. Remember, we had not had anything to do with selling and buying a house in forever, and everyone told us we’d get an offer immediately, so when that didn’t happen, we started contemplating scenarios about how we could swing paying for the new house without money from the sale of the old house and all of that. Well, another open house and we got an offer and everything worked out– eventually.
And the end of April was when Bill died, suddenly and just a few days after a group of us got together for dinner. That’s at the top of my list of the horrible and difficult to comprehend. It still doesn’t feel real to me, and I think about Bill almost every day.
May
MSU had a quite large memorial for Bill in early May that we were able to attend– Will flew back too. There had to be at least 500 people at it, and it was as celebratory about a remarkable life as it could be. I wrote about some of this in early May here, though this is as much about my own thoughts of mortality as anything else. Like I said, this year has been a lot, and this was the horrible part.
And in mid-May, we closed on both houses, pretty much on the same day. We went to a title office in Ann Arbor and met the guy who bought our house for the first time, and without going into a lot of details, I feel pretty confident that he and his partner (who was there via Facetime) are a great fit, ready for the adventures and challenges of fixing up the place and making it their own. That was the selling part. The buying part of the new house we were able to do electronically, and weirdly and quite literally while we were running errands after the closing for the sale, we received a number of emails to electronically sign some forms and boom, we bought the new house too.
It was and still is kind of bittersweet, leaving the old place and the old neighborhood. It was time to move on, and the longer we are in the new place, the fewer regrets I have. Still, when you live someplace for 25 years, that place becomes more than just housing, and that is especially true when it is in such a great neighborhood. I still drive through the old neighborhood and past the old house about once a week on my way to or from EMU.
A lot of the last part of May and the first part of June was a complete daze of moving. We decided that the way we’d move is to start taking stuff over a carload at a time (and I did most of the heavy lifting, mostly because Annette was teaching a summer class) and then hiring movers for the big stuff later. I remember talking with my father about this approach to moving, and his joke was it’s sort of like getting hit in the nuts fairly gently every day for a month, or getting hit once really hard. When we move again (no idea when that will be), I think the smarter move would be to do it all at once, but I don’t think there’s any escaping what Annette and I had erased from our memories after staying put so long: moving sucks.
Also in June: we celebrated our 30th wedding anniversary. Well, sort of. Before we started getting serious about buying a new house, the original plan was to go on a big European adventure that sort of retraced the trip we took for our honeymoon, but we decided to give each other a house instead. The 31st wedding anniversary trip to Europe is coming this spring.
As part of the house closing deal, we were able to be in the old house through the first weekend in June, and we had one last Normal Park hurrah by selling lots and lots of stuff in the annual neighborhood big yard sale event. I went one last time on June 10 to mow the lawn, double-check to make sure everything was cleaned up, and to do one last terror selfie.
July
The new house– the cost of it of course, but also just settling into it and all– meant we didn’t travel anyplace this summer for the first time in I don’t know how many years. I missed going up north, and we might not be able to do that again this coming year either. And we watched the shitshow that was the presidential election tick by. But there was golf, there was more AI stuff, hanging out with friends, going to art fairs in Plymouth and Ann Arbor, and seeing movies. Annette went to visit her side of things in late July, leaving me to fly solo for a few days, and her parents came back with her to stay in the new place for a while, our first house guests.
August
The in-laws visited, we went for a lovely little overnight stay in Detroit, played some golf, started getting ready for teaching, and I wrote a fair amount about AI here and in a Substack space I switched to in August. The switching back happened later. Started feeling optimistic about Kamala’s chances…. Oh, and my son defended his dissertation and is now Dr. William Steven Wannamaker Krause (but still Will to me).
September
By September 5, when I wrote this post about both weight loss and Johann Hari’s book about Ozempic called Magic Pill, I was down about 35 pounds from Zepbound. The semester was underway with a lot of AI things in all three classes. There was a touch of Covid– Annette tested positive; I don’t think I ever did, but I felt not great. My parents visited at the end of September, and of course they too liked the new house.
October
The month started with a joint 60th birthday party for Annette and our friend Steve Benninghoff– they both turned 60 a few months apart. It was the first big party we had here at the new house. During EMU’s new tradition of a “Fall break,” we went to New York City. We met up with Will and his girlfriend and went to the Natural History Museum (pretty cool), and went with them to see the very funny and silly Oh, Mary! Annette and I also went to see the excellent play Stereophonic and met up with old friends Troy and Lisa, and also Annette at an old school Italian restaurant that apparently Frank Sinatra used to like a lot. Rachel and Colin came by for dinner when they were in town too. And of course school/work, too.
November
We started by going to see Steve Martin and Martin Short at the Fox Theater in Detroit— great and fun show. Then, of course, there was the fucking election, another bit of horrible for the year. More Substack writing about AI and just being busy with work– the travels and events of October really put me behind with school, and I felt like I spent the last 6 or so weeks of the semester just barely caught up on it all. Will and his girlfriend came out here before Thanksgiving and she flew back home to be with her family. Meanwhile we made our annual trip to Iowa for Thanksgiving/Christmas. A good time that featured some taco pizza the day after the turkey, and happily, very very little discussion of politics.
I ended up switching back to blogging but not quite giving up on Substack, as I talked about in this post. One of my goals for winter 2025 is to start a more focused Substack newsletter on my next (and likely last) academic research project on the history of AI, Computer Aided Instruction, and early uses of word processors in writing pedagogy from the late 70s until the early 90s. Stay tuned for that.
Oh, and the niece who was the first of the cousins to get married? She was also the first to have a baby, in early December– thus the first great-grandchild in the family.
There was much baking (in November too), and some decorating and some foggy pictures of the woods. Will and his girlfriend returned (I think Will has been back here more in the last couple of months than he has been in quite a while) and we took a trip to the Detroit Institute of Arts before they left for California to see her family. Will came back here, we made the annual trip to Naples, Florida to see the in-laws, and now here we are.
Like I said, it’s been a lot, and a whole lot of it is bad. I worry about Trump. I miss Bill terribly. He touched a lot of people in his life and so I know I’m not alone on that one.
But I’m also oddly hopeful for what’s to come next. The more we are in the new house, the more it is home. The Zepbound adventure continues (I’m down about 40 pounds from last January), I’m hopeful for Will as he starts a new gig as a post-doc researcher, I’m looking forward to the new term, and I’m looking forward to all that is coming in the new year.