Annette and I went on a month-long trip through Europe to celebrate our 31st anniversary, from late May to late June. “Where’d you go?” you ask? Well:
We started in Denmark, visiting friends in Roskilde, touring around Copenhagen (including the Christiania neighborhood), and going to the Hans Christian Andersen museum. Then to Berlin, which was the first stop on our honeymoon way back when. Berlin included fancy food, wandering about, some modern art, the Stasi Museum, some theater (a show about David Bowie in Berlin), and generally soaking in the history of life behind the Iron Curtain, including a museum about everyday life in East Germany. Then we went to three other places from the honeymoon: Meissen, Dresden (the big city of the region), and then Prague for a few days.
In Paris, we stayed at a hotel for two nights as part of the cruise and then for five nights at an apartment we had stayed in the last time we were there, a dozen years ago. We went to a great No Kings Day protest organized by an expat group in Paris, the restored Notre Dame (amazing), Sacré-Cœur, and the Orsay, and we also took a bus trip out to Monet’s House in Giverny. We had planned to go to the Catacombs, but they were closed because of a museum workers’ strike (more on that in a moment), so we went to the Picasso Museum instead. And we wandered around and gawked at a lot of stuff all along the way.
We also did a fair amount of what I’d describe as just “hanging out” while in Paris, both in the apartment and out eating or having a coffee or a cocktail. I had a list of food and cafe places to visit, but in the end, we didn’t go to any of them. Other than one recommendation from the people who owned the apartment (Wepler), we just ate at places that looked good and had seating, pretty much all bistros. Plus we ate dinner in the apartment three of the five nights we were there, a nice break from eating restaurant food for most of the previous three weeks.
That’s the recap; here are some random thoughts:
I posted a lot of pictures along the way on Instagram (and thus also Facebook), and I have mixed feelings about this. It feels overly performative in a way, a kind of “hey, look at me!” attention-getting move. On the other hand, I always enjoy seeing other people’s travel pictures, and people have said nice things to me about my pictures. We went to a big family thing the weekend after we got back, my parents’ 60th wedding anniversary in Door County, and I caught up with a bunch of relatives I only (sorta/kinda) keep in touch with on Facebook. They all came up to me at some point at this party and told me it looked like a great trip.
This was the longest trip I’ve ever been on (I think), and Annette and I travel quite a bit. We went on an epic transatlantic cruise with stops in London and Reykjavik in 2017 that was 21 days, and I think our honeymoon was also about three weeks. A month felt like a long time, and I was ready to go home when we did (Annette said she could have stayed longer).
All of Europe is a trip hazard, with cobblestones everywhere and nothing level. And my God, the stairs, THE STAIRS! I’m kidding (sort of), and I suppose it’s kind of hard to make these 500-year-old buildings more accessible. I am also pretty sure building codes in Europe are a lot less strict than the ADA is here. Then again, a whole lot of Americans might be better off health-wise if they had to climb some stairs once in a while.
I would give our packing efforts a “B” because we overpacked, but we kind of had to overpack. The forecast for the first 10 or so days of the trip had highs in the 60s, and then highs in the 90s by the time we got to Paris. Also, we knew that for the second and third week of the trip we wouldn’t have access to a washing machine. Still, there was stuff I packed that I never wore, and that’s a mistake.
We also had three different kinds of trips, each of which has different optimal packing strategies. The first part of the trip was kind of a “Rick Steves” style of travel: three or four different stops in about 15 days, almost always traveling by train. That’s the part of the trip where we were really overpacked, especially when we had to be quick to switch trains or haul our stuff up the stairs. The second part of the trip, a cruise, calls for a whole different packing strategy: bring as much stuff as you want because you barely have to handle your luggage yourself at all. The same is true with guided tours, though you do have to pack everything up every couple of days to move on to the next place. The third part of the trip, staying at a rented apartment (or a vacation home/cottage), calls for yet another packing strategy. It’s similar to a cruise in that you unpack and then only repack when you leave, but it also depends on the place. Most of the places we rent nowadays have a washing machine, making it easy to travel light. But if you’re going to stay in the same place for a week, well, why travel light at all? When we rent a place “up north” or wherever in the US and we’re driving, I usually pack a box of kitchen supplies– some basic condiments/seasonings, a decent knife, etc. I could have used some of those things on this trip.
I never felt that the Europeans or Canadians we met were angry at us for being Americans, though we do not give off “MAGA” vibes. The few Trumpy-types on the river cruise kept to themselves, and we saw some things that suggested the rise of the far right, especially in what had been East Germany. But every person we actually talked to basically said it must suck to be an American now, they all hated Trump, and they were worried about the US. I felt a sympathetic and welcoming vibe I wasn’t expecting.
I’ve been on a few ocean cruises and I would (probably) go on another one of those, but I don’t think the river cruise was my cup of tea. It just wasn’t quite what I was expecting, I guess. You know the Viking commercials you see where the riverboat pulls up to a dock right in the heart of some charming city, allowing the passengers to explore at their own pace? This was not like that at all. Almost every stop required us to take a tour to see anything, on their schedule and always involving a bus ride. In contrast, ocean cruises make it easy for passengers to visit ports of call on their own. I could go on, but you get the idea.
Speaking of age: the main advantage of traveling when you’re young– like when we were on our honeymoon– is, well, youth. You are stronger, faster, can get by on a lot less sleep, stuff like that. I don’t know if this is automatically a trait of youth, but when I was in my late 20s, I was a lot more willing to stay in some less-than-comfy places. On our honeymoon, we stayed in a lot of “room for rent” kind of arrangements, some of them quite memorable for the wrong reasons. The advantage of traveling when you’re old (but not so old you can’t carry a bag, hustle up stairs, etc.) is money. I don’t think I would describe our trip as “luxurious” or “fancy,” but we also didn’t have to share a bathroom with any of the other people in the house.
There were A LOT of tourists in the touristy spots, especially in Prague and Paris. When we went to the castle and cathedral in Prague, we were surrounded by middle school/high school student tour groups– probably on the same kind of bus trip our kid took to D.C. and Gettysburg when he was that age many years ago, a pretty common rite of passage around here. There were also groups of Asian tourists, other Americans, and a lot of European tour groups. We were in Paris when staff at the Louvre went on strike, and the same union was on strike at the Catacombs for the same reason: too many tourists. Rick Steves had an interesting article/blog post about this, one where he also basically says, “hey, it’s not my kind of trips that are the problem,” but of course, it kind of is.
At the same time, we found a lot of not-so-touristy places that were great: Roskilde with its cathedral and Viking Museum, basically all of Berlin, the Lobkowicz Palace and the Decorative Arts museum in Prague, and the Picasso Museum in Paris, where we were able to just sit and study a wall of portraits completely uninterrupted for about 10 minutes. So it is possible to avoid the crowds; you just have to visit some of the places not on everyone’s bucket list.
And then there’s the expat question, something we have been talking about lately. Part of that has been motivated by current events, of course. More realistically, it might be something we try out in retirement, which is somewhere between 5 and 10 years away, depending on both money and our shifting moods about work and (gestures at the world broadly). Living abroad for a few years at the beginning of retirement might be a good idea. Still, I think this trip has convinced me I am not ready for that, at least not quite yet. As Twain might put it, I hate my government currently, but I still love my country. I would miss all of my ‘merican things and stuff, not to mention friends and family. Everyone pretty much everywhere speaks English well enough for someone like me to live anywhere and to get by. But I think it’d be lonely living in a country where the conversations all around me (people, but also signage, newspapers, TV shows, billboards, etc.) were happening in a language I did not understand. It’d probably be easier for me to move someplace where people spoke English.
That said, if I were to move to one of the places we visited on this trip– or more realistically, if I were to go someplace we visited on this trip to stay for a few weeks or months– it’d probably be Berlin. There’s lots to see and do there– it’s about the size of metro DC– but it also seemed less of a tourist destination than Paris or Prague, which is also kind of true of DC. And of course, also like DC, Berlin is the capital. It wasn’t cheap, but it also seemed reasonably affordable. We’re already talking about our next big trip, so who knows?
Interestingly enough, Hsu’s essay, as published in the July 7 & 14, 2025 print version of the magazine, has the nondescriptive title “The End of the Essay.” I think that says a lot about the differences between the two publishing formats– online and clickable versus on paper, just like it was 100 years ago. Neither headline is right because Hsu is not writing about college writing assignments “ending,” let alone being destroyed. Rather, this is more about how AI is challenging the college experience, including the essay assignment, and about the anxieties of both teachers and students around these changes. I think a more accurate title might be “AI is Changing and Complicating College Writing and Learning Itself,” but that’s not exactly a clickable link, is it?
I think it’s a really good read because it taps into the anxieties that both teachers and students have about AI and its role in college (especially in writing classes) without characterizing students as constant cheaters and teachers as all hopelessly out of touch and unwilling to change. And I think it also highlights the problem of an overemphasis on the technology of education at the expense of actually trying to learn something.
Hsu is an English professor at Bard College and the author of Stay True, which won the Pulitzer Prize for Memoir in 2023, and he’s been a writer for The New Yorker since 2017. He’s a talented and accessible writer, plus he has “been there” as someone who has had to deal with AI in his own teaching. Though it is worth mentioning that he realizes the teaching situation he has at Bard, where “a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut,” might be a little different from those of us teaching at less elite institutions. He interviews several students about their experiences using AI in classes, some of which sound like pretty straight-up cheating, but a lot of which do not– or probably not. One of the students he interviewed used AI more or less like Google as a search tool, and others talked about using AI as a study tool in ways I try to teach in my classes.
Hsu also talks to faculty, some of whom are returning to handwritten blue books for exams, including the University of Virginia media studies scholar Siva Vaidhyanathan: “Maybe we go all the way back to 450 B.C.” (For what it’s worth: blue books are a bad idea, and I think Elizabeth Wardle articulates why in an excellent op-ed in The Cincinnati Enquirer called “Students aren’t cheating because they have AI, but because colleges are broken,” a commentary that is similar to Hsu’s, but with a more accurate title.) But he also talks to Dan Melzer, the director of the first-year writing program at the University of California, Davis, about what pretty much everyone in my field has advocated for the last 50 years: teaching writing as a process. He talks about the many problems of higher education nowadays– the costs, the constant assessments, the shifting perceived values of different majors and of higher education itself.
The only thing I wish is that Hsu had included citations for the various studies he notes. Though perhaps tracking down his evidence will be part of what I ask students to do when I assign this essay. And I’m (probably) going to be assigning it in my classes in the fall, and not just because my classes this fall are going to be about AI.1 Actually, I think everyone who teaches in college– and that is especially true in fields like mine– needs to have “the AI talk” with their students.
I’ve informally polled my students over the last couple of years and asked, “How many of your professors in your other classes have said anything about AI?” A few students told me they were actively using AI in their courses, and there was more of that last year than the year before. A few students told me that their professors have forbidden them to use AI. When I asked those students if they thought their professors could actually tell whether they were using AI, they generally shrugged. But in most cases– certainly for more than half of my students– the professors didn’t say anything about AI at all.
By “the talk,” I’m thinking about sex rather than the conversation Black American parents have with their children about racism. But really, I mean talking with someone about something potentially embarrassing and uncomfortable for everyone, not unlike telling a student that they failed.
I can understand why a lot (most?) professors do not want to have the AI talk. It makes us vulnerable. Most professors don’t feel like they know enough about AI, they don’t want to look like idiots or hopelessly out of touch, and besides, AI is scary. I also think a lot of professors believe that refusing and ignoring is enough: that is, just tell your students don’t use AI because it’s bad, m’kay?
Again, that’s what I like so much about Hsu’s essay. It’s a good starter for “the AI talk.”
I’m teaching an advanced special topics class called “Writing, Rhetoric, and AI,” and the two sections of first year writing I’m teaching this fall will have “your career goals and AI” as the research topic. ↩︎
I’m kind of surprised, but I am still coming across essays and Substack posts and such where teachers/professors are freaking out about AI. ChatGPT came out in November 2022, more than two and a half years ago. I would have thought folks would have moved on from these “writing assignments are dead” kinds of pieces by now, but no– throw a brick out a window and you’ll hit one. Here’s a good recent example: “The Death of the Student Essay– and the Future of Cognition” by Brian Klaas. The title is the gist of it– I’ll come back to Klaas’ essay later.
It’s not that these “the death of the assigned paper and now I’m going to make my students chisel everything into stone” eulogies are entirely wrong. As I’ve been saying for a few years now, AI means teachers who used to merely assign writing with no attention to process can’t do that anymore. AI means teachers need to adjust their approach to education. It doesn’t mean that all of a sudden everyone will stop learning.
And before I go any further, I kind of think what I’m writing about here is Captain Obvious wisdom, but here it goes: learning and education are not the same thing. Here’s what I mean:
Learning is about gaining knowledge and skills, and humans do this in lots of different ways— play, practice, observation, experiences, trial and error. We learn things from others and the world around us, and while learning is often frustrating, I think learning is pleasurable and fulfilling. All of us start learning right after we’re born— how to get attention, to crawl, to roll, to walk, etc.— through help from our parents of course, but also on our own.
Some things we learn through exposure to the world around us; for example, speech. Of course, parents and others around babies try to help the process along (“say da-da!”), but mostly, babies and toddlers learn how to speak by picking up on how the humans around them are speaking. And as anyone who has parented or spent time around a chatty pre-schooler knows, sometimes it can be challenging to get them to stop talking.
On the other hand, some things we need to be taught how to do by others— not necessarily teachers per se, but other people who know how to do whatever it is we’re trying to learn. Reading and writing are good examples of this, which is one of the ways literacy is different from speech (or, as Walter Ong might have put it, orality). This is one of the reasons why, up until a few hundred years ago, the vast majority of people were illiterate.
Except Tarzan. This is a bit of a tangent, but bear with me:
Edgar Rice Burroughs’s famous novel Tarzan of the Apes is an extraordinarily interesting, odd, and offensive book, and most of the adaptations gloss over its over-the-top fantasy and weirdness. At the beginning of the novel, Tarzan’s parents are put ashore in Africa after a mutiny on their ship, and his father builds a cabin stocked with the goods they were traveling with, including a lot of books. The parents are killed by “apes” (which are somehow different from gorillas, but that’s a different story) and the baby that becomes Tarzan is raised by them.
When he is around 10, Tarzan stumbles across the cabin with its books, and, long story short, he teaches himself to read. He does this by staring at the marks on the pages of a children’s book– letters that looked like little bugs next to a picture of a strange ape that looked like him– and figuring out that those little bugs spelled b-o-y. “And so he progressed very, very slowly, for it was a hard and laborious task which he had set himself without knowing it—a task which might seem to you or me impossible—learning to read without having the slightest knowledge of letters or written language, or the faintest idea that such things existed.” Basically, Burroughs is saying “yeah, I know, I know, but just go with it.”
In contrast, education is a technology. To quote from my book, education is the “formal schooling apparatus that enables the delivery of various kinds of evaluations, certificates, and degrees through a recognized, organized, and hierarchical bureaucracy. It’s a technology characterized by specific roles for participants (e.g., students, teachers, professors, principals, deans) and where students are generally divided into groups based on both age and ability.” This is an argument I belabor in some detail— you can read more about it here with the right JSTOR access— but I’m sure anyone reading this has had first-hand experience with what I’m talking about.
Learning and education are a Venn diagram: when schooling goes well, education facilitates learning, and successful learners are rewarded by their educational experiences with degrees and certifications. But sometimes schooling does not go well. For whatever reason, some students, especially in courses like first-year writing, just do not want to be there. That was the case for me in a lot of high school and college classes. Sometimes, it was because of bad teaching, but more often than not, it was my lack of interest in the subject, or the fact that it was a subject I was (and am still) not very good at– anything having to do with math or foreign languages, for example. Whatever the reason though, I knew I had to push through and do the course in order to move on toward finishing the degree.1
Everyone involved in education gets frustrated by its bureaucracies and rules, especially when the system that is education gets in the way of learning. For example, even professors in business colleges are annoyed by students who are not there to learn anything but just to get the credential and the job. Students are often annoyed at professors who are so bad at teaching that they don’t seem to know how to help them learn, and everyone is annoyed with all of the other curricular hoops, paperwork, and constant grading. And that’s because learning is the fun part, and the important part!
But here’s the thing: the occupational, monetary, class, and cultural values of academic credentials– that is, the degree as a commodity– are only possible with the technology of education. It is why students and their families (our “customers”) are willing to pay universities so much money. As I wrote in my book, “Students would probably not enroll in courses or at universities where they didn’t feel they were learning anything, but they certainly would not pay for those courses if there was no credit toward a degree associated with them.”
Educators, and I like to think most students as well, are attracted to the university because they enjoy learning and place a high value on learning for the sake of learning: that is, the humanness of it all. But look, I don’t know anyone who is a teacher or a professor who does this work just for the love of it. This is a job, and if I didn’t get paid, I wouldn’t be doing it. Besides, there is a lot of value in education’s certifications and degrees in all of our day-to-day lives. I find it reassuring that the engineers who designed the car I drive (not to mention the roads and bridges I drive on) have degrees that certify a level of expertise. I am glad my dentist went to dental school, that my doctor went to medical school, and so on.
So, to circle back to how this connects with AI in general and with Brian Klaas’ essay in particular: I think the vast majority of the “AI and the end of student writing” essays I have read (including this one) are incorrect in at least two ways. The first way, which I have been writing about for a while now and which I mentioned at the beginning of this post, is about the distinction between assigning writing as a product and teaching writing as a process. Like most teachers, Klaas does not seem to have a series of assignments, peer reviews, opportunities to revise, etc.; he’s assigning a term paper and hoping students write something that demonstrates they understood the content of the class. Klaas writes “Previously,” meaning before AI, “there was a tight coupling between essay quality and underlying knowledge assembled with careful intelligence. The end goal (the final draft) was a good proxy for the actual point of the exercise (evaluating critical thinking). That’s no longer true.” By quality, I think Klaas means grammatical correctness, and I don’t think that has ever been the primary indicator of a student’s critical thinking. Yes, the students who write the best essays also tend to write in grammatically correct prose, but that’s a pretty low bar. And don’t even get me started on the complexities scholars in my field could unpack in Klaas’ claim about the “coupling” between “quality” and “intelligence.”
Klaas also doesn’t seem that interested in doing the extra work of teaching writing. He writes:
More than once, a student quite clearly used ChatGPT, but to try to cover their tracks, they peppered citations for course readings—completely at random—throughout the text. For example, after a claim about an event in 2024 in Bangladesh, there was a citation for a book written ten years earlier—about the Arab Spring. “Rather impressive time machine they must have had,” I commented.
After a career working to develop expertise, countless hours teaching, and my best attempts to instill a love of learning in young minds, I had been reduced to the citation police.
I’m sure Klaas is correct and this student was cheating, but I’ve got some bad news for him: if you want students to use proper citation style, you have to teach it. And, as I’ve written about before, teaching citation is even more important with AI for a variety of reasons, including the fact that AI makes up citations like this all the time.
But again, Klaas doesn’t want to teach writing anyway; “Next year, my courses will be assessed with in-person exams.” Well, if Klaas was assigning writing so students could write essays that are like answers to questions in an exam, maybe he should have just given an exam in the first place.
This leads me back to my Captain Obvious Observation: learning and education are not the same thing. Yes, any of us can use AI as a crutch to bypass our innate need and desire to learn, but AI’s real impact is how it disrupts the technologies and apparatuses of education. Klaas says as much, ultimately. He points out that AI probably means “universities will need to find ways to certify that grades are the byproduct of carefully designed systems to ensure that assessments were produced by students.” And in passing, he writes “We must not fall into the trap of mistaking the outputs of writing (which are increasingly substitutable through technology) from the value of the cognitive process of writing (which hones mental development and cannot be substituted by a machine).”
Exactly. And I think we know how to do that.
First, we have to teach students about AI, and that’s especially true if we don’t want them to use it. For example, had Klaas explained to his students that AI makes up citations all the time, they might not have tried to cheat like that in the first place. It’s not enough to just say “don’t use it.”
Second, we need to lean more into learning, and we need to be more obvious in explaining to our students why this is important. Teachers need to do a better job of explaining to students and ourselves why we ask students to do things like write essays in the first place. It’s not just so teachers have something to assess as evidence of what grade that student deserves. That’s education. Rather, we have students write essays (or write code, do math problems, conduct mock experiments, etc.) because we’re hoping they might learn something.
Third, we need to change how we teach in ways that discourage relying too much on AI and encourage students to do the learning themselves. Unfortunately, this is a lot of work, and I think this is actually what Klaas and others lamenting the “death” of student writing are really complaining about. The “write a paper about such and such” assignments faculty have been relying on forever won’t work anymore. Though maybe that assignment you thought worked well before AI actually wasn’t that effective either?
“Moving on” did not necessarily mean finishing the course– I dropped several as an undergraduate to avoid a D or an F. Also, I was lucky and unlucky as an undergraduate when it came to my two weakest school subjects. For my degree in English back in the 1980s, I did not have to take any math courses at all. However, I was required to have four semesters of a foreign language. If I had had to take the math class that my EMU English majors have to take as part of general education, I’m not sure I would have made it. On the other hand, EMU students do not have to take a foreign language. I studied German, and I was terrible at it, which is why it took me about seven tries (including a summer school class) to pass the four semesters I needed. ↩︎
It’s a painting done around 1350 called “Henricus de Alemannia in Front of His Students” by Laurentius de Voltolina, depicting a lecture hall class at the University of Bologna. It’s one of those images that gets referenced once in a while in arguments about the ineffectiveness (or effectiveness) of the lecture as a teaching method, but I’m more interested now in how recognizable this scene still is.
I wrote about this picture a bit in my book More than a Moment, which is about the rise and fall of MOOCs (remember them? the good old days!) in higher education. The second chapter is called “MOOCs as a Continuation of Distance Education Technologies” and it’s about some key moments/technologies in distance ed: correspondence courses, radio and television courses, and the first wave of “traditional” online courses.
I began the chapter by talking about a couple of MOOC entrepreneur TED talks from the early 2010s, including one by Peter Norvig in 2012 called “The 100,000 Student Classroom.” It’s a talk about a class in Artificial Intelligence that Norvig co-taught (along with his then Stanford colleague Sebastian Thrun, who went on to create the MOOC start-up Udacity) with about 200 f2f students, where they also allowed anyone to “participate” as “students” “online” in the “course” for “free.”1 Like most of the early high-profile MOOC prophets/profiteers, Norvig and Thrun seemed to believe that they had discovered online teaching, completely unaware that less prestigious universities and community colleges had been offering online classes for decades. But I digress.
Norvig opens his talk by showing this image to suggest that nothing in education has changed in the last 600+ years. There’s still the “sage on the stage” lecturer, there are textbooks, and students sitting in rows as the audience, some paying close attention, some gossiping with each other, some sleeping. That is, nothing has changed— until now! It gets a laugh from the crowd, and it’s typical of the sort of hook that is part of the genre of a successful TED talk.
In my book, I point out that there are a lot of details of the modern classroom that would be unimaginable in the 14th century, things like audio-visual equipment and laptop computers, not to mention things we don’t even think of as technology anymore— electric lighting, synthetic building materials, controlled heating and cooling, whiteboards and chalkboards, and so forth. In fact, as I go on to argue, the conventional f2f classroom of “today” (well, almost 10 years ago, but you get the idea) is so seamlessly connected to digital libraries, social media, and online learning platforms that the line between f2f and online learning is fuzzy.
I still think that is mostly true, but the more I read about how AI is going to change education completely, the more re-seeing an image like this makes me wonder. Maybe the reason we still recognize what is happening here is because this is what learning still looks like. In other words, what if the real reason technology has not fundamentally changed learning is because this is just what learning is?
Maybe it’s because I’m writing this now while traveling in Europe and I’ve seen a lot of old art depicting other things humans have done forever: worshiping, fighting, having sex, playing games, acting, singing, dancing. The details are different, and maybe it’s hard to recognize the weapon or the instrument or whatever, but we can still tell what’s going on because these are all things that humans have always done. Isn’t learning like this?
Don’t get me wrong— a lot of the details of how learning works have changed with technologies like literacy, correspondence, computer technology, online courses, and so on. But even an asynchronous online course is recognizable as being similar to this 1350 lecture hall course, or to a small group Socratic dialogue– just one that takes place in writing, at a pace somewhere between snail mail exchanges and synchronous discussions.2
I guess what I’m getting at is maybe images like this one demonstrate that the desire to learn new things is something ingrained in the species. Learning is like all of these other things that human animals just do.
So if we can remember that learning does not mean the same thing as “going to college or whatever to get a credential to get a job” and that we are still a social species of animal that cannot stop trying to learn new things, maybe AI won’t “change everything” in education. And honestly, if the sci-fi scenarios of Artificial General/Super Intelligence come to pass and the machines replace their human creators, we’ve got much bigger problems to worry about.
These are all scare quotes because none of these words mean the same thing in MOOCs as they mean in conventional courses. ↩︎
In fact, I’d suggest that what happens in online discussion forums and on social media is much more like what Socrates meant by dialogue in Phaedrus than what he meant by the more problematic and unresponsive technology of writing. ↩︎
I’ve now been on Zepbound for 70 weeks, and I was just looking over my notes on my progress over that time.1 For the first 35 weeks, I was losing about a pound a week without thinking about it much. I didn’t exercise or diet more than usual (though I did and still do exercise and watch what I eat); I just wasn’t as hungry, and so I didn’t eat as much.
Since week 35 (which was September 2024), I’ve lost just shy of eight more pounds– about 3 more pounds since the last time I wrote here about this— for a total of around 42-43 pounds. So on the one hand, I haven’t completely plateaued in my weight loss progress, and at least I am still heading in the right direction. Plus even with the stall, I’m still a lot less fat (and more healthy) than I was before I lost the weight– and at least I haven’t gained it back (yet). On the other hand, I ain’t going to get to my goal, which means losing about another 17-20 pounds, with Zepbound alone.
Obviously, that means I need to start doing something closer to actually dieting and exercising more– or at least I need to shake up the routine/rut, and I’ll be doing that for the next month. Annette and I are going on a month-long trip through Europe starting next week as part of a celebration of our 31st wedding anniversary, a trip that was delayed by a year because we decided to buy a new house. I won’t be dieting, and because it will not be easy to refrigerate Zepbound as we go from place to place, I’ll have to skip a week of dosing at the end of the trip.2 Still, I’m not worried because on trips like this that involve a lot of walking around, I almost never gain weight.
But enough about me (or just me). There’s been some interesting Zepbound news in the last couple of months. A few highlights:
One of the other ways that access has been reduced is new restrictions on compounding pharmacies and other “knock-off” versions of these drugs. Basically, because these drugs are no longer in short supply from the manufacturers, companies that make their own versions of Zepbound (and there are a lot of companies like this) can no longer sell them. The drug manufacturers and companies like Ro have been making deals to make the drugs a little less expensive, but they’re still expensive. I’m just happy that I don’t have to decide if it would be worth the $700 or so a month it would cost me out of pocket (and honestly, it might be).
Still, there’s a lot of optimism about the near future of these drugs. Eli Lilly Chief Scientific Officer Dan Skovronsky gave an interview on CNBC where he talked about a daily pill as effective as a weekly injection that will be available soon (maybe by the end of the year), about stronger versions of these drugs, and about using these drugs for lots of things besides weight loss specifically: heart disease, sleep apnea, and maybe even addiction. Sure, this guy is trying to sell Eli Lilly drugs, but there are a lot of articles out there reporting similar things.
Just a few days ago, there was this article in The New York Times, “Group Dining on Ozempic? It’s Complicated,” which is about the social etiquette of being on a GLP-1 drug and out to eat with others when you don’t eat that much. I am obviously not shy about the fact that I’m taking Zepbound, so when I’m eating with others at a restaurant or at a dinner party, I just tell people it’s the drugs. Annette and I went to a breakfast diner place in Detroit in April, and I ordered what turned out to be an enormous skillet of eggs, hash browns, sausage, and peppers and onions. It was delicious, but I could barely eat half. When the waitress came over to clear our plates, she seemed concerned that I might not have liked it. “No, it’s great– it’s just that I’m taking one of those weight loss drugs and I can’t eat more.” “Oh yeah?” she said. “How’s it going? I have a cousin who is on one of those things and has lost 50 pounds.” So at least she understood.
And then there was the recent news that Weight Watchers (aka WW International) was going bankrupt, largely as a result of people shifting to the drug alternatives. I saw this op-ed in The New York Times on the mixed messages of Weight Watchers, “Weight Watchers Got One Thing Very Right” by Jennifer Weiner. On the one hand, Weiner points out that a lot of the dieting culture promoted by Weight Watchers was harmful. A lot of mothers took their slightly overweight but still growing/developing daughters to Weight Watchers too early, and a lot of their customers never succeeded and yet kept coming back, “stuck in a cycle of loss, regain and shame that didn’t ultimately leave them any thinner, even as it fattened Weight Watchers’ coffers.” On the other hand, Weiner says Weight Watchers provided its customers– especially women– a sense of community at what were (pre-Covid, of course) regular meetings. They were safe “third spaces,” a gathering that was one of the “all-too-rare places in America where conservatives and progressives found themselves sitting side by side, commiserating about the same plateaus or the same frustrations or the same annoyance that the powers that be had changed the point value of avocados, again.”
I’ve written about this before, but I actually was a Weight Watchers member and attended meetings (with my wife) for about three years, in, I believe, the early 2010s. The regular meeting we attended was similar to what Weiner describes. It was at a WW storefront center in a strip mall, located right next to a Chinese restaurant. Whenever I went, I always peed right before weighing in, anxious to cut every possible ounce. Then there’d be a meeting that lasted anywhere from 30 to 45 minutes where people “shared,” and where the leader (in our case, a gay man named Robert) led us through some lesson, mostly built on stories of his own weight loss that he’d repeat over and over. I was not the only man to attend these meetings, but yes, it was mostly women. Annette and I attended regularly enough to know most of the other “regulars,” and also to spot the folks who would show up once or twice and never again. I do not remember any discussions about exercise, or really any other weight loss advice that went beyond “eat less.” In those three years, my weight did not change, and I never felt the sense of belonging to a community. It felt pretty hopeless by the end. So yeah, I don’t feel too badly about the demise of WW.
As part of my journaling practices, I write down my weight every morning, and I also track my weight on the days I take Zepbound. ↩︎
Zepbound needs to be stored in the fridge, but it can be kept at room temperature for up to 21 days. So I’ll have to skip one dose in the last week we’re gone and then I’ll be able to return to normal when we get back. That’s a good thing because if I miss two weeks, I need to start over on Zepbound with the lowest dose, and as far as I can tell from my limited internet research, people who restart with Zepbound often don’t have the same level of success the second time around. ↩︎
Walsh frames his piece with the story of Chungin “Roy” Lee, a student recently kicked out of Columbia for using AI to do some rather sophisticated computer programming cheating, I believe both for some of his courses and for an internship interview. He has since launched a startup called Cluely, which claims to be an undetectable AI tool to help the user, well, cheat in virtually any situation, including while on dates and in interviews. Lee sees nothing wrong with this: Walsh quotes him as saying “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating.”
Walsh is tapping into the myth of the “mastermind” cheater, the student so brilliant they could do the work if they wanted to but prefer to cheat. In the real world, mastermind cheating does not exist, which is why Lee’s story has been retold in all kinds of places, including this New York Magazine article: cheaters don’t usually raise over $5 million in VC start-up money with an app they created. Rather, 99.99999% of the time (and, in my 30+ years of teaching experience, 100% of the time), students who cheat are not very smart about it,2 and the reason they cheat is that they are failing the course and are desperate to try anything to pass.
The cheaters Walsh talks to for this article (though also maybe not cheaters, as I will get to in a moment) all claim “everyone” is already using ChatGPT et al for all of their assignments, so what’s the big deal? I’ve seen surveys, like this one summarized by Campus Technology, that claim close to 90% of students “already use AI in their studies,” but that’s not what my students have told me, and it’s not really what the survey results say either. I think 90% of college students have tried AI, but that’s not the same as saying they regularly use AI. According to this survey, it’s more like 54% of students said they used AI “at least on a weekly basis,” and the percentages were even lower for using AI to do things like create a first draft of an essay.3
I could go on with the ways that I think Walsh is wrong, but for me this article raises a larger question that I think is at the heart of AI hand wringing and resistance: what, exactly, is “cheating” in a college class?
I think everyone would agree that if a student turns in work that they did not do themselves, that’s cheating. The most obvious example in a writing class is a student handing in a paper that someone else wrote. But I don’t think it is cheating for students to seek help on their writing assignments, and what counts as cheating aided by others can be fuzzy. Here are three relatively recent non-AI-related examples I’ve had to deal with:
I teach a class called “Writing for the Web” in which (among other things) I require students to work through a series of still free tutorials on HTML and CSS on Codecademy, and I also require them to use WordPress to make a basic website. A lot of my students struggle with the technical aspects of these projects, and I always tell them to seek help from me, from each other, and from friends. Occasionally, a struggling student will get help from a more techno-savvy friend, and sometimes, the line between “getting help” and “getting someone else to do the work” gets crossed. That student perhaps welcomed and encouraged a little too much help from their friend, but the student still did most of the writing. Is this cheating?
I had a first-year writing student who went to see a writing tutor (although not one in the EMU writing center) about one of the assignments. I always think it is a good idea for students to seek help and advice from others outside the class— friends and family, but also tutors available on campus or even someone they might pay. I insist students do all of their writing in Google Docs for a variety of reasons— mostly as a way for me to see their writing process and to help me when grading revisions, but also because it discourages AI cheating. When I looked at the version history and the document comments, I saw that there were large chunks of the document actually written by the tutor. Is this cheating?
Also in first-year writing, I had a student who handed in an essay much more polished than the same student’s earlier work. I suspected the essay was written by someone else, so I called the student in for a conference. After I asked a few questions about some of the details in the essay, the student said, “Wait, you don’t think I wrote this, do you?” “No, I don’t, actually,” I said. The student said, “Well, I didn’t type it. What happened was I sat down with my mom and told her what the essay was supposed to be about, and then she wrote it all down for me.” Is this cheating?
I think the first example is kind of cheating, but because the extra help was more about the coding and less about the writing, I didn’t penalize that student. The second example could count as cheating because someone other than the student did the work. But it’s hard to blame the student because the tutor broke one of the cardinal rules of tutoring: help, but never actually do the client’s/tutee’s work for them. The third example strikes me as clearly cheating, and every person I’ve told this story to believes that the student had to have known they were cheating. It’s probably true that the student was lying to me, but what if they really did think this was just getting help? Maybe Mom had been doing this for them all the way through high school.4
My own policy is pretty much the same as Vee’s, which is also very similar to Nature’s AI policy for publications. First, you cannot use verbatim the writing from AI because “any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.” Second, if a writer does use AI as part of the process (brainstorming, researching, summarizing, proofreading, etc.), they need to explain how they used AI and in some detail. So now, when my students turn in an essay, they also need to include an “AI Use Statement” in which they explain what AI tools they used, what kinds of prompts, how they applied the results, and so forth. I think both my students and I are still trying to figure out how much detail these AI Use Statements need, but that’s a slightly different topic.5
Anyway, while I am okay with students getting help from AI in more or less the same way they might get help from another human, I think a lot of teachers (especially AI refusers) are not.
Take this example of what Walsh sees as AI cheating:
Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copying and pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”
Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”
If Wendy cut and pasted text directly from the AI and just dumped it into an essay, then yes, that’s cheating— though proving AI cheating like that isn’t easy.6 But let’s assume that she didn’t do that and she used this advice as another brainstorming technique. I do not think this counts as cheating, and the fact that Wendy probably has some professors who think this is cheating is what makes this so confusing for Wendy and every other student nowadays.
Eventually, educators will reach a consensus on what is and isn’t AI cheating, and while I’m obviously biased, I think the consensus will more or less line up with my thoughts. But because faculty can’t agree on this now, it is essential that we take the time to decide on an AI policy and to explain that policy as clearly as possible to our students. This is especially important for teachers who don’t want their students to use AI at all, which is why instead of “refusing” AI, educators ought to be “paying attention” to it.
The article is behind a paywall, but I had luck accessing it via 12ft.io↩︎
Though I will admit that I may have had mastermind cheaters in the past who were so successful I never caught on…. ↩︎
The other issue about when/why students cheat— with AI or anything else— is it depends a lot on the grade level of the student. The vast majority of problems I’ve had with cheaters, generally and with AI in particular, have been with first year students in gen ed composition and rhetoric. I rarely have cheating problems in more advanced courses and with students who are juniors and seniors. ↩︎
Ultimately, I made this student rewrite their essay on their own. As I recall, the student ended up failing the course because they didn’t turn in a number of assignments and missed too many classes, which is a pretty typical profile of the kind of student who resorts to cheating. ↩︎
I think for all of my students last year, I was the only teacher who had an AI policy like this. As a result, the genre of an “AI Use Statement” was obviously unfamiliar, and their responses were all over the map. So one of the things on my “to do” list for preparing to teach in the fall is to develop some better models and better language about how much detail to include. ↩︎
As I’ve already mentioned, this is one of the reasons why I use Google Docs: I can look at the document’s “Version History” and see how they put their essays together. Between looking at that and just reading the essay, I can usually spot something suspicious. When I think the student is cheating with AI (and even though I spend a lot of time explaining to students what I think is acceptable and unacceptable AI use, this still happened several times last school year in first year writing), I talk to the student and tell them why I think it’s AI. So far, they’ve all confessed. I let them redo the assignment without AI, and I tell them if they do it again, they’ll fail the class. That too happened last school year, but only once. ↩︎
Bollinger, who is a lawyer and a First Amendment scholar, argues that universities, similar to “the press,” depend upon and are protected by the First Amendment to do their work, and the work of both universities and the press is what makes democracy possible in the first place. Here’s a long quote that I think gets Bollinger’s main point across:
So, here is my thesis: American universities are rooted in the bedrock of human nature and the foundations of our constitutional democracy. They are every bit as vital to our society as the political branches of government or quasi-official institutions such as the press (often even referred to as the “fourth branch” of government). Universities, as institutions, are the embodiment of the basic rationale of the First Amendment, which affirms our nation’s commitment to a never-ending search for truth.
In some ways, universities are a version of the press: They make a deep inquiry into public issues and are always on call to serve as a check on the government. But if their deadlines are far longer, the scope of their work and remit in pursuing truth reach to everywhere that knowledge is or may yet be. Their role in society touches the full panoply of human discovery, never limited by what may be newsworthy at a given moment. And, as many have noted in today’s debate over federal funding, the results of academic research and discovery have benefited society in more obviously utilitarian ways, including curing disease, cracking the atom, and creating the technologies that have powered our economic dynamism and enhanced our quality of life.
I agree with this. Certainly, there have been plenty of times when universities have failed to embody the values of free speech and the search for truth or to enhance everyone’s quality of life– and the press has failed in its “fourth estate” check-on-the-government role often enough as well. But the principle Bollinger is articulating here is completely true.
The problem, though, is that the “they” Bollinger is talking about is people like him: university faculty and administrators, particularly those who are tenured. At best, he’s only talking indirectly about everyone else on university campuses– students, but also the staff and the legions of non-tenure-track instructors who make these places run. He’s talking about academic elites.
I suppose I’m one of the “theys” Bollinger is describing because I am a tenured professor at a university. Though besides being at a “third tier” university, I have always felt that what best protects my rights to teach, to write, and to say what I want without fear of losing my job is not tenure or “the university” as an institution. Rather, it’s the union and the faculty contract.
In any event, arguing to anyone outside of the professoriate that universities (or university professors) are “special” and should be able to say or do anything without ever having to worry about losing their jobs in the name of the “search for truth” does not go over well. Believe me, I’ve offered a version of Bollinger’s argument to my extended family at Thanksgiving and Christmas gatherings over the years, and they are skeptical at best. And these people are not unfamiliar with higher education: everyone in my family has some kind of college degree, and Annette and I are not the only ones who went to graduate school.
Besides, if you want to convince normal people that universities deserve a special place in our society, making the comparison to “the press,” which the general public also distrusts nowadays, might not be the best strategy.
Like Bollinger, I have spent my professional life in academia, so I’m biased. But I do think universities as institutions are important to everyone, including those who never step on campus. For starters, there is all that scientific research: the federal government pays research universities (via grants) to study things that will eventually lead to new cures and discoveries. That accounts for almost all of the money Trump (really, Musk) is taking away from universities.
More directly, large research universities (which usually have medical schools) also run large hospitals and health care systems, and these are the institutions that often treat the most complicated and expensive problems– organ transplants, the most aggressive forms of cancer, and so forth. University-run healthcare systems are the largest employer in several states, and universities themselves are the largest employer in several more, including Hawaii, California, New York, and Maryland. (By the way, Walmart is the largest employer in the U.S.). And of course, just about every employer I can think of around here is indirectly dependent on universities. I mean, without the University of Michigan, Ann Arbor would not exist.
There’s also the indirect community-building function of universities that goes beyond the “college town.” Take sports, for example. I’m reluctant to bring this up because I think EMU would be better off if we didn’t waste as much money as we do trying to compete in the top division of football. Plus college sports have gotten very weird in the age of Name, Image, and Likeness deals and the transfer portal system. But it’s hard to deny the fandom around college sports, especially living in the shadow of U of M.
And of course, the main way that everyone benefits from universities is we offer college degrees. Elite universities (like the ones that have been in the news and/or the target of Trump’s revenge) don’t really do this that well because they are so selective– and they need to be selective because so many people apply. This year, 115,000 first-year and transfer students applied to Michigan, and obviously, they can only admit a small percentage of those folks.
But the reality is that only the famous universities that everyone has heard of are this difficult to get into. Most universities, including the one where I work, admit almost everyone who applies. We give everyone who otherwise couldn’t get into an elite university the chance to earn a college degree. That doesn’t always work out because a lot of the students we admit don’t finish. But I also know the degrees our graduates do earn ultimately improve their lives and futures.
I could go on, but you get the idea. I understand Bollinger’s point, and he’s not wrong. But academics like us need to try to convince everyone else that they have something to gain from universities as well.
I turned in grades Friday and thus wrapped up the 2024-25 school year. I have a few miscellaneous things I’ll have to do in the next few months, but I’m not planning on doing too much work stuff (other than posts like this) until late July/early August when I’ll have to get busy prepping for the fall. Of course, it’s difficult for me to just turn off the work part of my brain, and I’ve been reflecting on teaching the last couple of days: what I’ll do differently next time, what worked well, which assignments/readings need to be altered or retired, and also what I learned from my students. That was especially true with my sections of first-year writing this year.
This past year, the topic in my first-year writing courses was “Your Future Career and AI.” It was part of a lot of “leaning in” to AI for me this year. As I wrote back in December, we read and talked about how AI might be useful in some parts of the writing process, but also about AI’s limitations, especially when it comes to the key goals of the class. AI is not good at researching (especially researching anything academic/behind a library’s firewall), it cannot effectively or correctly quote/paraphrase/cite that research in an essay in MLA style, and AI cannot tell students what to think.
In other words, by paying attention to AI (rather than resisting, refusing, or wishing AI away), I think my students learned that ChatGPT is more than just a cheating device, and I think I learned a lot about how to tweak/redesign my first-year writing class to make AI cheating less of a problem. Again, more details are in my post “Six Things I Learned After a Semester of Lots of AI,” but I think what it boils down to is teaching writing as a process.
But I also learned a lot from my students’ research about the impact of AI on all sorts of careers and industries beyond my own. So the other day, when I read this fuzzy little article by Jack Kelly on the Forbes website, “The Jobs That Will Fall First as AI Takes Over The Workplace,” I thought it seemed about right, at least based on what my students were telling me with their research.
Now, two caveats on what I’ve learned from my freshmen: first, they’re freshmen (mostly– I had a few stray sophomores and juniors in there too), and thus they are inexperienced and incomplete researchers. Second, one of the many interesting (and stressful and fun) things about short- and long-term projections of the future of Artificial Intelligence (both generative AI, which is basically where we are now, and artificial general intelligence or artificial superintelligence, where the AI is as “smart” as or “smarter” than humans) is that no one knows.
That said, I learned a lot. In a nutshell: while it’s likely that everything will eventually be impacted by AI (just as everything was affected by one of the more recent general-purpose technologies, the internet), I don’t think AI will transform education as much as a lot of educators fear. Though like I just said, every prediction about the future of AI has about the same chance of being right as being wrong.
For starters, all of my students were able to find plenty of research about “x” career and AI. No one came up empty. Predictably, my students interested in fields like engineering, accounting, finance, business, law, logistics, computer science, and so on had no problem finding articles in both MSM and academic publications. But I was surprised by the success everyone had, including students with career ambitions in nursing, physical therapy, sports training, interior design, criminology, gaming, graphics, aviation, elementary school teaching, fine art, music, and social work. I worried about the students who wanted to research AI and careers in hotel and restaurant management, theatre, dance, and dermatology, but they all found plenty of resources. The one student who came closest to coming up empty was a young man researching AI and professional baseball pitching. But yeah, there were some articles about that too.
Second, the fields/careers that will probably be impacted by AI the most (and this is already happening) are ones that involve working with A LOT of complex data or a lot of repetitive tasks that nonetheless take expertise– think of something like accounting, finance, or basic data analysis. None of my students researched these fields, but as that Forbes article mentioned, AI is also already reshaping careers like customer service, data processing, and simple bookkeeping.
None of my students wrote much about how AI will replace humans in “X” careers, though some of them did include some research on that for careers like nursing and hospitality. Perhaps my students were researching too selectively or too optimistically; after all, they were projecting their futures with their research and none of them wanted AI to put them out of a career before they even finished college. But most of what my students wrote about was how AI will assist but not replace professionals in careers like engineering and aviation. And as one of my aviation students pointed out, AI in various forms has been a part of being a pilot for a long time now. (I was tempted to include here a link to the autopilot scene from the movie Airplane!). Something similar was true in a lot of fields, including graphic design and journalism.
For a lot of careers, AI’s impact is likely to be more indirect. I heard this analogy while listening to this six-part podcast from The Atlantic: AI is probably not going to have a lot of impact on how a toothpaste factory makes and puts toothpaste into tubes, but it will change the way that company handles accounting, human resources, maybe distribution and advertising, and so forth. I think there are a lot of careers like that.
I only had a few students researching careers in education– which is surprising because EMU comes out of the Normal School tradition, and we certainly used to have a lot more K-12 education majors than we do now. The two students who come to mind were researching elementary education and art education, and both argued AI can help but not replace teachers or the curriculum, for lots of different reasons. This squares with what I’ve read elsewhere and in this short Forbes article as well: jobs in “teaching, especially in nuanced fields like philosophy or early education” and other jobs that “rely on emotional intelligence and adaptability, which AI struggles to replicate,” are less likely to be replaced by AI anytime soon.
Don’t get me wrong: besides the fact that no one knows what is going to happen with AI in the next few years (that’s what makes predicting the future of AI so much fun because quite literally anything might be true!), AI already has impacted and altered how we teach and learn things. As I discussed in my CCCCs talk, the introduction of personal computers and the internet also changed how we practice and teach writing. As I’ve written about a lot here lately, if the goal of a writing class is to have students use AI as an aid (or not at all) in their learning and process, then teachers need to teach differently than they did before the rise of AI. And of course teachers (and everyone else) are going to have to keep adapting as AI keeps evolving.
But when I wonder about the current and near-future threats to my career of choice, higher education, I think about falling enrollments, declining funding from the state, the insane Trump/Musk cuts to research institutions, deporting international students, axing DEI initiatives and other programs meant to help at-risk students, and the growing distrust of expertise and science. I don’t think about professors being replaced or made irrelevant because of AI.
I am home from the 2025 Conference for College Composition and Communication, having left directly after my 9:30 am one-man-show panel for an uneventful drive home. I actually had a good time, but it will still probably be the last CCCCs for me. Probably.
The first part of the original title, “Echoes of the Past,” was just my lame effort at having something to do with the conference theme, so disregard that entirely; this has nothing to do with sound. The first part of my talk is really the part after the colon, “Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction,” which I will get to in a moment, and that connects to the second title, “The Importance of Paying Attention To, Rather Than Resisting, AI.” It isn’t exactly what I had proposed to talk about, but I hope it’ll make sense.
So, the first part: I have always been interested in the history of emerging technologies, especially technologies that were once new and disruptive but became naturalized and are now seen not as technology at all but just as standard practice. There are lots of reasons why I think this is interesting, one of which is what these once-new and disruptive technologies can tell us now about emerging writing technologies. History doesn’t repeat, but it does rhyme, and it can help prepare us for whatever is coming next.
For example, I published an essay a long time ago about the impact of chalkboards in 19th-century education, and I’ve presented at the CCCCs about how changes in pens were disruptive and changed teaching practices. I wrote a book about MOOCs where I argued they were not new but a continuation of the long history of distance education. As a part of that project, I wrote about the history of correspondence courses in higher education, which emerged in the late 19th century. Correspondence courses led to radio and television courses, which led to the first generation of online courses, then MOOCs, and then online courses as we know them now, post-Covid. Though sometimes emerging and disruptive technologies are not adopted: experiments in teaching by radio and television didn’t continue, and while there are still a lot of MOOCs, they don’t have much to do with higher education anymore.
The same dynamic happened with the emergence of computer technology in the teaching of writing beginning in the late ’70s and early ’80s, and that even included a discussion of Artificial Intelligence– sort of. In the course of poking around and doing some lazy database searches, I stumbled across the first article in the first issue– a newsletter at the time– of what would become the journal Computers and Composition, a short piece by Hugh Burns called “A Note on Composition and Artificial Intelligence.”
Incidentally, this is what it looks like. I have not seen the actual physical print version of this article, but the PDF looks like it might have been typed and photocopied. Anyway, this was published in 1983, a time when AI researchers were interested in the development of “expert systems,” which worked with various programming rules and logic to simulate the way humans tend to think, at least in a rudimentary way.
Incidentally, and just in case we don’t all know this, AI is not remotely new: there was a lot of enthusiasm and progress in the field from the late 1950s through the 1970s, and then a resurgence in the 1980s with expert systems.
In this article, Burns, who wrote one of the first dissertations about the use of computers to teach writing, discusses the relevance of the research in the field of artificial intelligence and natural language processing in the development of Computer Aided Instruction, or CAI, which is an example of the kind of “expert system” applications of the time. “I, for one,” Burns wrote, “believe composition teachers can use the emerging research in artificial intelligence to define the best features of a writer’s consciousness and to design quality computer-assisted instruction – and other writing instruction – accordingly” (4).
If folks nowadays remember anything at all about CAI, it’s probably “drill and kill” programs for practicing things like sentence combining, grammar skills, spelling, quizzes, and so forth. But what Burns was talking about was a program called TOPOI, which walked users through a series of invention questions based on Tagmemic and Aristotelian rhetoric.
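To make the idea concrete, here’s a minimal sketch of how a TOPOI-style invention prompter might work. To be clear, this is entirely my own toy illustration in modern Python– not Burns’s actual program, which predates Python by decades– and the canned questions are only loosely modeled on Aristotle’s common topics:

```python
# A toy sketch of a TOPOI-style invention prompter. This is my own
# illustration of the general idea, not Burns's actual program; the
# questions are loosely modeled on Aristotle's common topics.

QUESTIONS = [
    "What do most people mean when they talk about {topic}?",
    "What is the opposite of {topic}?",
    "What larger category or system is {topic} a part of?",
    "What causes {topic}, and what are its consequences?",
    "What have others already said about {topic}?",
]

def invention_session(topic: str) -> list[tuple[str, str]]:
    """Walk the writer through each question and collect their notes."""
    notes = []
    for question in QUESTIONS:
        prompt = question.format(topic=topic)
        answer = input(prompt + "\n> ")
        notes.append((prompt, answer))
    return notes

if __name__ == "__main__":
    topic = input("What are you writing about?\n> ")
    for question, answer in invention_session(topic):
        print(f"\n{question}\n  {answer}")
```

The point is just that the “expert system” here amounts to canned rules and prompts– a very long way from a large language model, but legible enough to writing teachers of the era that it could be framed as invention heuristics.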
There were several similar prompting, editing, and revision tools at the time. One was Writer’s Workbench, an editing program developed by Bell Labs and initially meant as a tool for technical writers at the company. It was adopted for writing instruction at a few colleges and universities, and John T. Day wrote about St. Olaf College’s use of it in Computers and Composition in 1988 in his article “Writer’s Workbench: A Useful Aid, but not a Cure-All.” As the title of Day’s article suggests, the reviews of Writer’s Workbench were mixed. But I don’t want to get into all the details Day discusses here. Instead, what I wanted to share is Day’s faux epigraph.
I think this kind of sums up a lot of the profession’s feelings about the writing technologies that started appearing in classrooms– both K-12 and in higher education– as a result of the introduction of personal computers in the early 1980s. CAI tools never really caught on, but plenty of other software did, most notably word processing, and then networked computers, this new thing “the internet,” and then the World Wide Web. All of these technologies were surprisingly polarizing among English teachers at the time. And as an English major in the mid-1980s who also became interested in personal computers and then the internet and then the web, I was “an enthusiast.”
From around the late 1970s and continuing well into the mid-1990s, there were hundreds of articles and presentations like Burns’ and Day’s pieces in major publications in composition and English studies about the enthusiasms and skepticisms of using computers for teaching and practicing writing. Because it was all so new and most folks in English studies knew even less about computers than they do now, a lot of that scholarship strikes me now as simplistic. Much of what appeared in Computers and Composition in its first few years was teaching anecdotes, as in “I had students use word processing in my class and this is what happened.” Many articles tried to compare writing with and without computers– writing with a word processor versus by hand, how students of different types (elementary/secondary, basic writers, writers with physical disabilities, skilled writers, etc.) were helped or harmed by computers, and so forth.
But along with this kind of “should you/shouldn’t you write with computers” theme, a lot of the scholarship in this era raised questions that have continued with every other emerging and contentious technology associated with writing, including, of course, AI: questions about authorship, the costs (because personal computers were expensive), the difficulty of learning and also teaching the software, cheating, originality, “humanness,” and so on. This scholarship was happening at a time when using computers to practice or teach writing was still perceived as a choice– that is, it was possible to refuse and reject computers. I am assuming the comparison I’m making between this scholarship and the discussions now about AI is obvious.
So I think it’s worth re-examining some of this work where writers were expressing enthusiasms, skepticisms, and concerns about word processing software and personal computers and comparing it to the moment we are in with AI in the form of ChatGPT, Gemini, Claude, and so forth. What will scholars 30 years from now think about the scholarship and discourse around Artificial Intelligence that is in the air currently?
Anyway, that was going to be the whole talk from me and with a lot more detail, but that project for me is on hold, at least for now. Instead, I want to pivot to the second part of my talk, “The Importance of Paying Attention To, Rather Than Resisting, AI.”
I say “Rather Than Resisting” or refusing AI in reference to Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes’ website “Refusing Generative AI in Writing Studies,” but also in reference to articles such as Melanie Dusseau’s “Burn It Down: A License for AI Resistance,” a column in Inside Higher Ed in November 2024, and other calls to refuse/resist using AI. “The Importance of Paying Attention To” is my reference to Cynthia Selfe’s “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which was first presented as her CCCC chair’s address in 1998 (published in 1999) and later expanded into the book Technology and Literacy in the Twenty-first Century.
If Hugh Burns’ 1983 commentary in the first issue of Computers and Composition serves for me as the beginning of this not-so-long-ago history, when personal computers were not something everyone had or used and when they were still contentious and emerging tools for writing instruction and practice, then Selfe’s CCCCs address/article/book represents the point where computers (along with all things internet) were no longer optional for writing instruction and practice. And it was time for English teachers to wake up and pay attention to that.
And before I get too far, I agree with eight out of the ten points on the “Refusing Generative AI in Writing Studies” website, broadly speaking. I think these are points that most people in the field nowadays would agree with, actually.
But here’s where I disagree. I don’t want to go into this today, but the environmental impact of the proliferation of data centers is not limited to AI. And when it comes to this last bullet point, no, I don’t think “refusal” or resistance are principled or pragmatic responses to AI. Instead, I think our field needs to engage with and pay attention to AI.
Now, some might argue that I’m taking the call to refuse/resist AI too literally and that the kind of engagement I’m advocating is not at odds with refusal.
I disagree. Word choices and their definitions matter. Refusing means being unwilling to do something; paying attention means listening to and thinking about something. For much the same reasons Selfe spoke about 27 years ago, there are perils to not paying attention to technology in writing classrooms. I believe our field needs to pay attention to AI by researching it, teaching with it, using it in our own writing, goofing around with it, and encouraging our students to do the same. And to be clear: studying AI is not the same as endorsing AI.
Selfe’s opening paragraph is a kidding/not kidding assessment of the CCCCs community’s feelings about technology and the community’s refusal to engage with it. She says many members of the CCCCs over the years have shared some of the best ideas we have from any discipline about teaching writing, but it’s a community that has also been largely uninterested in the focus of Selfe’s work, the use of computers to teach composition. She said she knew bringing up the topic in a keynote at the CCCCs was “guaranteed to inspire glazed eyes and complete indifference in that portion of the CCCC membership which does not immediately sink into snooze mode.” She said people in the CCCCs community saw the topic as disconnected from their humanitarian concerns and as a distraction from the real work of teaching literacy.
It was still possible in a lot of English teachers’ minds to separate computers from the teaching of writing– at least in the sense that most CCCCs members did not think about the implications of computers in their classrooms. Selfe says “I think [this belief] informs our actions within our home departments, where we generally continue to allocate the responsibility of technology decisions … to a single faculty or staff member who doesn’t mind wrestling with computers or the thorny, unpleasant issues that can be associated with their use.”
Let me stop for a moment to note that in 1998, I was there. I attended and presented at that CCCCs in Chicago, and while I can’t recall if I saw Selfe’s address in person (I think I did), I definitely remember the times.
After finishing my PhD in 1996, I was hired by Southern Oregon University as their English department’s first “computers and writing” specialist. At the 1998 convention, I met up with my future colleagues at EMU because I had recently accepted the position I currently have, where I was once again hired as a computer and writing specialist. At both SOU and EMU, I had colleagues– you will not be surprised to learn these tended to be senior colleagues– who questioned why there was any need to add someone like me to the faculty. In some ways, it was similar to the complaints I’ve seen on social media about faculty searches involving AI specialists in writing studies and related fields.
Anyway, Selfe argues that by hiring specialists, English departments outsourced responsibility for computer technology, relieving the rest of the faculty of having anything to do with it. It enabled a continued belief that computers are simply “tool[s] that individual faculty members can use or ignore in their classrooms as they choose, but also one that the profession, as a collective whole–and with just a few notable exceptions–need not address too systematically.” Instead, she argued that what people in our profession needed to do was pay attention to these issues, even if we would really rather refuse to do so: “I believe composition studies faculty have a much larger and more complicated obligation to fulfill–that of trying to understand and make sense of, to pay attention to, how technology is now inextricably linked to literacy and literacy education in this country. As a part of this obligation, I suggest that we have some rather unpleasant facts to face about our own professional behavior and involvement.” She goes on a couple of paragraphs later to say, in all italics, “As composition teachers, deciding whether or not to use technology in our classes is simply not the point–we have to pay attention to technology.”
Again, I’m guessing the connection between Selfe’s call then to pay attention to computer technology and my call now to pay attention to AI is pretty obvious.
The specific case example Selfe discusses in detail in her address is a Clinton-Gore era report called Getting America’s Children Ready for the Twenty-First Century, which was about that administration’s efforts to promote technological literacy in education, particularly in K-12 schools. The initiative spent millions on computer equipment, an amount of money that dwarfed the spending on literacy programs. As I recall those times, the main problem with this initiative was there was lots of money spent to put personal computers into schools, but very little money was spent on how to use the computers in classrooms. Selfe said, “Moreover, in a curious way, neither the CCCC, nor the NCTE, the MLA, nor the IRA–as far as I can tell–have ever published a single word about our own professional stance on this particular nationwide technology project: not one statement about how we think such literacy monies should be spent in English composition programs; not one statement about what kinds of literacy and technology efforts should be funded in connection with this project or how excellence should be gauged in these efforts; not one statement about the serious need for professional development and support for teachers that must be addressed within context of this particular national literacy project.”
Selfe closes with a call for action and a need for our field and profession to recognize technology as important work we all do around literacy. I’ve cherry-picked a couple of quotes here to share at the end. Again, by “technology”, Selfe more or less meant PCs, networked computers, and the web, all tools we all take for granted. But also again, every single one of these calls applies to AI as well.
Now, I think the CCCCs community and the discipline as a whole have moved in the direction Selfe was urging in her CCCCs address. Unlike the way things were in the 1990s, I think there is widespread interest in the CCCC community in studying the connections between technologies and literacy. Unlike then, both MLA and CCCCs (and presumably other parts of NCTE) have been engaged and paying attention. There is a joint CCCC-MLA task force that has issued statements and guidance on AI literacy, along with a series of working papers, all things Selfe was calling for back then. Judging from this year’s program and the few presentations I have been able to attend, it seems like a lot more of us are interested in engaging and paying attention to AI rather than refusing it.
At the same time, there is an echo– okay, one sound reference– of the scholarship from the early era of personal computers. A lot of the scholarship about AI now is based on teachers’ experiences of experimenting with it in their own classes. And we’re still revisiting a lot of the same questions regarding the extent to which we should be teaching students how to use AI, the issues of authenticity and humanness, of cheating, and so forth. History doesn’t repeat, but it does rhyme.
Let me close by saying I have no idea where we’re going to end up with AI. This fall, I’m planning on teaching a special topics course called Writing, Rhetoric, and AI, and while I have some ideas about what we’re going to do, I’m hesitant about committing too much to a plan now since all of this could be entirely different in a few months. There’s still the possibility of generative AI becoming artificial general intelligence and that might have a dramatic impact on all of our careers and beyond. Trump and shadow president Elon Musk would like nothing better than to replace most people who work for the federal government with this sort of AI. And of course, there is also the existential albeit science fiction-esque possibility of an AI more intelligent than humans enslaving us.
But at least I think that we’re doing a much better job of paying attention to technology nowadays.
The first time I attended and presented at the CCCCs was in 1995. It was in Washington, D.C., and I gave a talk that was about my dissertation proposal. I don’t remember all the details, but I probably drove with other grad students from Bowling Green and split a hotel room, maybe with Bill Hart-Davidson or Mick Doherty or someone like that. I remember going to the big publisher party sponsored by Bedford-St. Martin’s (or whatever they were called then) which was held that year at the National Press Club, where they filled us with free cocktails and enough heavy hors d’oeuvres to serve as a meal.
For me, the event has been going downhill for a while. The last time I went to the CCCCs in person was in 2019– pre-Covid, of course– in Pittsburgh. I was on a panel of three scheduled for 8:30 am Friday morning. One of the people on the panel was a no-show, and the other panelist was Alex Reid; one person showed up to see what we had to say– though at least that one person was John Gallagher. Alex and I went out to breakfast, and I kind of wandered around the conference after that, uninterested in anything on the program. I was bored and bummed out. I had driven, so I packed up and left Friday night, a day earlier than I planned.
And don’t even get me started on how badly the CCCCs did at holding online versions of the conference during Covid.
So I was feeling pretty “done” with the whole thing. But I decided to put in an individual proposal this year because I was hoping it would be the beginning of another project to justify a sabbatical next year, and I thought going to one more CCCCs 30 years after my first one rounded things out well. Plus it was a chance to visit Baltimore and to take a solo road trip.
This year, the CCCCs/NCTE leadership changed the format for individual proposals, something I didn’t figure out until after I was accepted. Instead of creating panels made up of three or four individual proposals, which is what the CCCCs had always done before– and which is what every other academic conference I have ever attended does with individual proposals– they decided that individuals would get a 30-minute solo session. To make matters even worse, my time slot was 9:30 am on Saturday, which is the day most people are traveling back home.
Oh, also: my sabbatical/research release time proposal got turned down, meaning my motivation for doing this work at all has dropped off considerably. I thought about bailing right up to the morning I left. But I decided to go through with it because I was also going to Richmond to visit my friend Dennis, I still wanted to see Baltimore, and I still liked the idea of going one more time, 30 years later.
Remarkably, I had a very good time.
It wasn’t like what I think of as “the good old days,” of course. I guess there were some publisher parties, but I missed out on those. I did run into people who I know and had some nice chats in the hallways of the enormous Baltimore convention center, but I mostly kept to myself, which was actually kind of nice. My “conference day” was Friday and I saw a couple of okay to pretty good panels about AI things– everything seemed to be about AI this year. I got a chance to look around the Inner Harbor on a cold and rainy day, and I got in half-price to the National Aquarium. And amazingly, I actually had a pretty decent-sized crowd (for me) at my Saturday morning talk. Honestly, I haven’t had as good of a CCCCs experience in years.
But now I’m done– probably.
I’m still annoyed with (IMO) the many many failings of the organization, and while I did have a good solo presenting experience, I still would have preferred being on a panel with others. But honestly, the main reason I’m done with the CCCCs (and other conferences) is not because of the conference but because of me. This conference made it very clear: essentially, I’ve aged out.
When I was a grad student/early career professor, conferences were a big deal. I learned a lot, I was able to do a lot of professional/social networking, and I got my start as a scholar. But at this point, where I am as promoted and as tenured as I’m ever going to be and where I’m not nearly as interested in furthering my career as I am retiring from it, I don’t get much out of all that anymore. And all of the people I used to meet up with and/or room with 10 or so years ago have quit going to the CCCCs because they became administrators, because they retired or died, or because they too just decided it was no longer necessary or worth it.
So that’s it. Probably. I have been saying for a while now that I want to shift from writing/reading/thinking about academic things to other non-academic things. I started my academic career as a fiction writer in an MFA program, and I’ve thought for a while now about returning to that. I’ve had a bit of luck publishing commentaries, and of course, I’ll keep blogging.
Then again, I feel like I got a good response to my presentation, so maybe I will stay with that project and try to apply for a sabbatical again. And after all, the CCCCs is going to be in Cleveland next year and Milwaukee the year after that….
The two big things on my mind right now are finishing this semester (I am well into the major grading portion of the term in all three of my classes) and preparing for the CCCCs road trip that will begin next week. I’m sure I’ll write more on the CCCCs/road trip after I’m back.
But this morning, I thought I’d write a post about a course I’m hoping to teach this fall, “Writing, Rhetoric, and AI.” I’ve set up that page on my site with a brief description of the course– at least as I’m imagining it now. “Topics in” courses like this always begin with just a sketch of a plan, but given the twists and turns and speed of developments in AI, I’ve learned not to commit to a plan too early.
For example: the first time I tried to teach anything about AI was in a 300-level digital writing course I taught in fall 2022. I came up with an AI assignment based in part on an online presentation by Christine Photinos and Julie Wihelm for the 2023 Computers and Writing Conference, and also on Paul Fyfe’s article “How to Cheat on Your Final Paper: Assigning AI for Student Writing.” My plan at the beginning of that semester was to have students use the same AI tool these writers were talking about, OpenAI’s GPT-2. But by the time we were starting to work on the AI writing assignment for that class, ChatGPT had been released. So plans changed, English teachers started freaking out, etc.
Anyway, the first thing that needs to happen is the class needs to “make”– that is, get enough students to justify running it at all. But right now, I’m cautiously optimistic that it is going to happen. The course will be on Canvas and behind a firewall, but my plan for now is to eventually post assignments and reading lists and the like here– once I figure out what we’re going to do.