What Happens After AI Destroys Gen Ed?

In all three of the classes I’m teaching this semester (and, like last semester, I’m teaching two sections of first-year writing and Writing, Rhetoric, and AI), I am once again having students read and discuss Hua Hsu’s New Yorker essay from last summer, “What Happens After AI Destroys College Writing?” I blogged about this essay here last semester. As I wrote then, I think Hsu does a good job of capturing the anxieties that both students and teachers have about AI. Ultimately, Hsu’s answer to the question posed by the title (probably written by an editor rather than him) is that AI doesn’t “destroy” college writing, but AI is challenging a lot of assumptions about what college itself is for. Is college simply a series of hoops to jump through to earn a degree that assures membership in the upper-middle class, or is it about learning/teaching things?

This time around, I am reading this essay a bit differently. Hsu interviews several students across the country about AI, and the one that stands out the most to my students and me is “Alex,” a pseudonym for a student at New York University, because he is so unusual. Unlike every other student I have caught cheating with AI or anything else, Alex is the unicorn of AI think pieces like this: a criminal mastermind cheater. Hsu quotes Alex as he explains how he uses ChatGPT to write papers and summarize readings for almost everything, and, if he is to be believed, Alex always gets away with it.

I don’t know, maybe Alex told Hsu the truth, and maybe there are students who have cheated like Alex in my classes that I never knew about. But as I’m re-reading and thinking about my own experiences with students talking about how they use AI or the students who try to cheat with it, I’m wondering if Hsu is taking a little “creative license.” Or perhaps Alex is a kind of composite character included because he represents the worst (and mostly exaggerated) fears about AI in education.

In contrast, most of Hsu’s interviewees describe using AI in ways that range from things everyone, including me, thinks are blatant cheating (for example, AI “writes” the entire paper and the student does a little light editing before turning it in) to using it for stuff like brainstorming, for feedback, and other activities that some professors forbid and that I encourage. But within this range of use, most of these students were selective about the courses and assignments where they use AI, especially when it came to blatant cheating: not usually courses in their majors or courses they care about, but the courses they think are irrelevant and just another hoop. I am, of course, talking about General Education.

The role of Gen Ed in undergraduate studies has always been fraught. As a student in the 1980s, I enjoyed and learned a lot from some of the gen ed courses I took. But I also thought a lot of them weren’t worth it, and given the way these courses figured into my undergrad experience, the university seemed to have felt the same way. For starters, a lot of the Gen Ed classes were about subjects I didn’t like and/or I was not good at. Being required to take a science class with a lab felt a lot like being required to eat a big bowl of (insert the name of the vegetable you will not eat here) because it’s “good for you.” These were the classes you had to take before you were allowed to have the good stuff.

Plus, the university didn’t seem to think the Gen Ed classes were that important, certainly not as important as the “real” classes in the major. I got credit for about a semester’s worth of gen ed classes in part because of some high school classes I took, but mostly because I did well on the CLEP Tests. The gen ed classes I took were mostly large lecture hall courses with a sage on the stage talking away, a couple of tests and a final. If they were smaller discussion sections, they were almost always taught by graduate students or part-timers– that is, the lowest paid and least empowered instructors on campus.

As an educator, I believe in the theory behind Gen Ed. It provides students with introductions to fundamental skills and subjects they will use in many other courses and throughout life (writing and math classes immediately come to mind), and it helps students be “well-rounded” in terms of diverse experiences, critical thinking, “broader knowledge” of the world beyond their specialization, and so forth. Back when I was a student and it was a lot more common to start college as an “undeclared” major, Gen Ed was also a way of trying out different possibilities.

But that, as they say, was then, and this is now. Attending college has never been cheap, but the cost of attendance at supposedly affordable and public universities (like EMU) has gone through the roof. Taking classes that I thought were a waste of time didn’t cost that much when I was in college; it does now. “Students these days view college as consumers, in ways that never would have occurred to me when I was their age,” Hsu writes. “They’ve grown up at a time when society values high-speed takes, not the slow deliberation of critical thinking.” I think students have always viewed college at least partially as consumers, but higher costs and the increased value society puts on speed have made this worse. Stir in the rest of the shit of the world (social media, global warming, pandemics, depression, working too many hours to make ends meet, etc.), and the idea of a busy and stressed-out student cutting some corners in classes they think are a waste of time with AI makes a lot of sense.

As I’ve observed here and elsewhere many times before, this is the pattern of AI cheating I’ve seen in my own teaching. I see social media posts from colleagues and I read hair-on-fire MSM pieces and Substack posts about how ALL the students are ALL cheating ALL the time, but that has not been my experience. I don’t get a lot of students cheating with AI– well, unless I’ve had a bunch of Alexes that I never caught– because teaching writing as a process deters plagiarism. Plus I teach small classes (which also deters cheating), and I don’t have any “one-shot deal” paper assignments (as in “write a 5-page paper about ‘x’ and turn it in”) or any multiple choice/short answer tests or whatever, all things that AI can do well.

That said, my students do sometimes cheat, and at least 90% of the AI cheating incidents where I either required the student to redo the assignment (generally the first offense) or failed them (a second offense) have been in first year composition. The more advanced students in the courses I teach in the major or the MA program are all adamant about never cheating on their writing and certainly not with AI, and I believe them.

Like I said before, I skipped a lot of Gen Ed partly because I did well on those CLEP tests but also because I was an English major. That meant I never had to take a math class in college, and I only had to take that one science class I mentioned– astronomy, incidentally, which was the science class most students thought was the easiest option. But if I were a student now and I had to pass a Gen Ed math class along the lines of pre-calculus, I would be screwed.

Maybe I’d do the right thing and seek help from a tutor or someone, or if I got over my math phobias and if I really tried, maybe I could eke out a passing grade on my own. But if AI could help me in various ways that might or might not be allowed by the instructor, that might cross the line from “help” to doing the work for me, and if I thought I could get away with it, I would use AI. And if I had to take some kind of class that I thought was a waste of my time, and it turned out that all of the assignments for the class– quizzes, tests, essays, etc.– are ones that anyone could complete with AI, then I would use AI, at least to help cut some corners. Why wouldn’t I?

“Because that wouldn’t be ethical!” you might be thinking. Sure, but if my options are to take the risk to cheat or to fail out of college, I’m going to take the risk. And really, isn’t it unethical (or at least “problematic”) for universities to make students go up to their eyeballs in debt and to spend all that time taking courses that are nothing but a series of busywork and hoop-jumping assignments that some clanker could do just as well?

Now, I don’t really think AI “destroys” Gen Ed (see what I did there?), but it clearly changes how we should teach these classes and also what these classes are for. To me, that mostly means teaching in ways that minimize the usefulness or temptation of cheating with AI: small classes focused more on learning processes rather than making products, and writing assignments that value reflection and research, especially over simple regurgitation of whatever the professor said in the lecture would be important on the test.

We also need to make these classes “real” and as meaningful and important as the classes students get to take in their majors. I can’t begin to imagine what the solution is to all this, but I suppose the first place to start with rethinking Gen Ed is to take a hard look at the courses and their formats. A class that can be successfully completed by someone who was never actually a student in the class– that is, there’s no participation component, there’s no specific context for the subject, there’s no assessment of material that is only delivered/discussed in the class–is a class that could probably be completed with AI. And it is also probably a pretty shitty Gen Ed class.

The Year that was 2025

The two things that ran continuously in the background for me in 2025 (when they weren’t in the foreground) were AI and Trump, Trump and AI, AI AI, Trump Trump, again and again and again. So there’s not a lot of need to say too much about all that, no reason to remind myself about everything Trump and all that his people have screwed up and/or destroyed this year. You know what I mean. Anyway, here’s a more personal and less apocalyptic post about the year that was 2025.

January was a “not much new around here” month, though one thing I noticed when I was paging through my journal was I had some version of a cold or something for most of the month. Let’s hope there’s not a repeat of that.

In February (most of it was typical, the work was work, Trump was very busy setting the world on fire, etc.), we went to Cancún for our winter break to one of the zillions of “all inclusive” resorts around there. We actually stayed south of the city in the Mayan Riviera area, and the closest we got to the city of Cancún itself was the airport. I don’t think either Annette or I are too interested in staying at a resort like that again, but I’m interested in more travel in Mexico one of these years. Chichen Itza was particularly cool.

We had a weekend of fun in Detroit at the end of March, and then in April, I took a road trip out east for the last CCCCs of my career. Or so I thought! I’m not planning on presenting at the CCCCs in Cleveland this year, but I will probably go since there are likely to be more folks I know this coming year than there have been for a while, and it’s a short drive. Stay tuned. Oh, but before Baltimore, I swung through Richmond for a quick overnight visit with my great old friend Dennis and his wife Sarah for some hanging out and catching up face to face.

The school year wrapped up in April, and May rolled around. The first two-thirds of the month were about getting ready for the trip and just hanging out, and then we were off on our super-epic European 31st wedding anniversary vacation/travel extravaganza. We were originally going to go for our 30th anniversary, but ended up buying a new house instead. I wrote a recap post about the trip here, though I was also all over Facebook and Instagram the whole time too. I’m pretty sure this is the longest trip/vacation like this I’ve ever been on; we left May 21 and flew back on June 20. Also at the end of June: Annette and I went to a big Krause side of the family get-together in Door County for my parents’ 60th wedding anniversary.

For July, we were mostly just here and enjoying one of the nicer times of the year in Southeast Michigan, so some reading/writing things, a bit of golf, the Ann Arbor Art Fair, etc. In the beginning of August, we went out to see Will and Maia, along with Annette’s parents, in New Haven, which included a fantastic stop at Sally’s Apizza. Then there was an impromptu get-together with just my parents and sisters in Des Moines (which included an overnight stop in Iowa City), and the month wrapped up with Joan Jett & the Blackhearts opening for Billy Idol. Kind of a hoot.

September meant mostly work, but somewhere in that month (I think?) and also in October, Annette and I went to a couple of No Kings protests. The first one was in downtown Saline where we stood around the intersection of Michigan Ave. and Saline-Ann Arbor Road and held and waved our signs at people driving by. It reminded me a lot of picketing while on strike. The much bigger event was on October 18 in Ypsilanti, where there were easily a couple thousand people. I don’t know how well these protests persuade anyone, but they give me hope. Stuff with Trump et al. is bad, but the resistance is real and powerful enough to bring out lots of other people who are as baffled by our current politics as I am. It’s reassuring.

Anyway, we also had a nice two-night getaway to Traverse City during our fall break, some early snow and colder than usual weather at the beginning of November, and then a lot more snow during our annual Krause-side of the family get-together at Thanksgiving. We all had to end that a little early this year though because of a winter storm rolling through the Midwest that weekend. Luckily, we beat the storm on the way home.

And now here we are in December, after the semester wrapped, in that weird timeless space between Christmas and New Year’s– or maybe more accurately the weekend after whenever New Year’s Eve/Day falls every year. It’s been a little different because, for reasons, we stayed here and it was just the three of us, too.

So there you have it. I can’t deny it’s been kind of a shitty year, broadly speaking. I mean, I know several people quite well who were DOGED or NIH-ed out of a job, more or less, higher education in general is a dumpster fire, etc. But even though it’s buried under the crap of the year, I’m holding on to a tiny piece of hope that things will be better in 2026. Fingers crossed.

Advice from IHE for Teaching With and Around AI

It’s been three years since ChatGPT was released and AI exploded all over the place, causing a whole new moral panic in academia that is still going strong. And some of these panicking professors are on the verge of losing their minds! For example, consider Ronald Purser’s Current Affairs essay “AI is Destroying the University and Learning Itself,” which is easily summarized by the title. To be fair, Purser is not completely wrong. I agree that Cal State’s “partnership” with OpenAI is problematic at best. Since I also work at a regional university, the labor issues at SFSU, along with the poor decisions about how to allocate limited funds, sound very familiar to me. I completely agree that a lot of our working-class students see through the con– more on that below. But he also devotes a whole lot of time to trotting out the standard angry complaints about AI from academics: that it is a cheating machine, that it’s turning students into zombies, that the only solution is to return to paper and oral exams, blah-blah-blah.

A lot of people I know on Facebook shared and endorsed this story, and I get that it strikes a nerve– really, a lot of nerves in higher education right now, not all of which are about AI. But it’s also a bit too “my hair is on fire” panicky for my tastes, especially when it comes to his basic thesis, which is, well, “AI is Destroying the University and Learning Itself.” That is simply not true.

In contrast, I was pleased to read the Inside Higher Ed article “You Can’t AI-Proof the Classroom, Experts Say. Get Creative Instead,” by Emma Whitford. Whitford interviewed several professors and instructors from a variety of institutions who believe that yes, learning and higher education still exist in a world of AI. It’s just that faculty have to change the way they teach things, particularly changing up assignments and activities that were a bad idea long before AI demonstrated them to be bad teaching. That’s more or less what I’ve been saying about AI ever since I wrote what has been the most popular post on this site for a couple of years, “AI Can Save Writing by Killing ‘The College Essay.'” Of course, it’s always reassuring to read others saying similar things.

I think the first step for any teacher who hates all things AI and who believes AI is the end of the university and learning and anything meaningful in the world (blah-blah-blah) is to take a deep breath and play around with a few different chatbots for a day or two, trying out different prompts (enter your assignments and see what happens!), and just generally goof around so you can experience firsthand what AI actually is. This is a tough sell because a lot (most?) of my colleagues and students who passionately hate AI have also taken what they see as the principled stance of refusing to ever use any AI chatbot themselves for anything.

Well, besides the fact that I never think willful ignorance is a good idea, AI is already baked into everything we do directly and indirectly with computers. Take the “blue book” strategy designed to prevent students from using ChatGPT, an approach to teaching I know some of my EMU colleagues have tried. As Luke Hobson notes at the beginning of Whitford’s article, we already have AI-infused wearables like smartwatches, rings, and Ray-Ban Meta Glasses. “What is to stop someone from sitting in the back of a classroom and whispering into their glasses to say, ‘Hey, I need help with solving this problem.’”

AI is now embedded into browsers, it is built into Canvas and other Learning Management Systems, it’s built into word processors and email applications and search engines, and everything else to the point where refusal is not an option. At least it’s not an option if you want to stay connected to things like the internet, social media, streaming entertainment, online shopping– that is, if you want to stay connected to contemporary life in the western world.

I do not think this means giving up and ignoring AI cheating, and I certainly don’t think it means that the rest of higher ed ought to follow in the footsteps of Cal State and cut deals with OpenAI or whoever. Also, learning about AI by monkeying around with it or reading about it is not the same as “liking” AI. Rather, learning the basics of AI is important to understand what it is and to recognize that AI is not a fad, it is not going away, and it is going to shape the future for years to come, both for better and for worse. We can’t “refuse” AI, but we can learn about AI and make changes to how we teach to minimize cheating and promote learning.

Once they reach the “acceptance” phase, the next step for the AI-hating teacher is to ask themselves: what can I do about it? I think all of the ideas for dealing with AI in the classroom in Whitford’s article (and a few other ideas I’ve seen elsewhere) lean into two kinds of learning activities that AI can’t do very well:

  • demonstrate presence in the classroom and the physical world; and
  • emphasize the process of learning rather than the products students produce.

And if you are familiar with the theory and practice of teaching writing as a process (rather than assigning writing as a product or assessment), you already know what I’m talking about.

Hobson’s LinkedIn post titled “5 AI-Proof Assessment Ideas” is a good example of this, though some of these ideas are easier to implement than others. Oh, also worth noting: Hobson mostly teaches online, and his ideas reflect that “presence” does not have to be physical or synchronous. Anyway, one idea I like is self-recorded/almost TikTok-style “journal entries.” As Whitford puts it in her IHE article, recorded journals “require students to regularly record and upload five-minute videos of themselves talking about what they’ve learned in class, how it connects to their past experiences and how they might use it in the future.” I suspect students would enjoy this activity, actually.

Oral exams seem less practical to me, though asking students to conduct interviews for various reasons is another interesting idea. I’ve done this sometimes in the past, and the main problem I’ve had with these kinds of assignments is that there are always scheduling and other logistical snafus. Still, it’s something I could see working well.

In his LinkedIn post, Hobson also mentions “community-based learning,” which for me would also include the fairly common practice in technical/professional communication classes of students working with “clients” on projects for that client. But community-based learning can also mean almost anything that requires students to interact with the world around them, either as a group or individually.

I think this also includes valuing (and grading) participation, and lots of activities where students have to interact with each other. In f2f classes, I do a lot of group discussions/activities, peer review activities, all that kind of stuff, plus I keep track of attendance. In online classes, most of the participation involves discussion boards about readings and activities and the like. Basically, students are required to post to a discussion their initial thoughts/reactions to something (usually a reading assignment) before seeing anyone else’s post, and then they need to read and respond to other students’ posts. I base the grade for these discussions not so much on their quality as on whether students complete them. So, to get an “A” on a discussion, students have to post once initially on time, and then follow that up over the next couple of days by responding to at least two other students’ posts.

Note that this is a conversation and not an assignment like “write 250-500 words about ‘x,'” where “x” is some kind of reading or whatever. That’s the sort of “one-shot deal” writing assignment that AI can do really really well. Rather, it’s an interaction with other students participating in the same discussion in the same space. AI can’t do that very well.

Hobson’s last suggestion is to “critique” AI, which has been a big part of my teaching lately because I’m teaching a lot about AI in my classes. But there are all kinds of ways to do this in small ways too; for example, have students and AI both complete a short writing task and then have the class compare them.

I tend to critique AI by demonstrating writing things I think it can do fairly well, along with things it can’t do well– or can’t do, period. For example, I think AI is good at proofreading, and, with a detailed prompt, it is okay-to-pretty-good at giving feedback akin to peer review. Interestingly, when I ask my students to compare AI feedback and human feedback, they generally prefer the human feedback. AI provides more feedback than human peers, but that feedback can be misleading, in part because AI has no presence in the class (or in the online discussion) where we talked about the assignment.

But my favorite approach to critiquing AI is finding stuff it cannot do, which often serves as a not-so-subtle warning to students. For example, in my experience, when I upload a PDF of an academic article and ask AI to summarize it for me, it can do a pretty good job. Even more useful, AI does well at explaining complex passages from articles for non-experts. But if you ask AI to give you some good sentences from the article to quote in a paper, it will frequently make those quotes up. Seriously, give it a try.

The last strategy for dealing with AI that a few folks talk about in Whitford’s article might be best described as having honest and earnest discussions with students about how the whole point of college is learning something. This is kind of what I blogged about back in July 2025, and what I think of as “the AI talk.” I think the only thing worse than limiting any discussion about AI to something like “don’t use it because it’s bad” is not saying anything at all, which (according to my students) still seems to be what happens in most classes.

Whitford talked to Carlo Rotella, a professor at Boston College who doesn’t ban students from using AI because he realizes it can be impossible to detect. “I explain to my students why it’s a waste of their time and mine. I explain that they’re paying $5 a minute for classes at Boston College, and to spend that time practicing to be replaceable by AI is a complete waste of their money and time, and my time.” Later in this article, Rotella said, “The entire point of this class is the labor, so a labor-saving device would be beside the point. It’s like joining the track team and doing your laps on an electric scooter. You went around the track. Congratulations.”

Of course, one of the reasons why this works for Rotella is that he bans technology in his classes– no devices, and bring hard copies of the reading– and a lot of tests and quizzes are based only on class discussions. That’s a bridge too far for me. I also assume that Rotella is able to get away with this because (like me) he’s teaching comparably small classes.

That said, having honest and frank discussions about the whole academic enterprise– that we’re here not just to get through the class and collect the credits but to actually learn something– does help. Like Purser at San Francisco State, I teach a lot of working class, first gen, and older/returning students. When I bring up the Rotella argument, that trying to cheat with AI is really self-sabotaging and it defeats the whole purpose of college, these students (well, at least the better ones) completely get it.

Zepbound Thoughts, Almost Two Years In

It’s been a while since I’ve written much here, especially about AI things, I think mainly because of teaching. I am once again “all AI/all the time” with two sections of first year writing and an advanced “special topics” course called “Writing, Rhetoric, and AI.” It’s all going well, but because I’ve been reading and writing about AI for all of those courses all semester, I haven’t had a lot of time or energy to write more about it here.

More on that special topics class later because I’m scheduled to teach it again next term– and if you are an EMU student interested in an advanced online writing class, go sign up! Soon I am also going to write more about AI here, about how this last semester has gone and perhaps a response/alternative to the draft of some AI Guidance from the CCCCs. But in the meantime, Zepbound news.

I started Zepbound in January 2024, and by Thanksgiving 2024, I had lost a total of about 38 pounds. As of this past Thanksgiving, I have lost a total of about 43 pounds, give or take. This is the thing about Zepbound and all of these drugs, I think: I lost as much weight as I did in the first year or so because I simply was not as hungry, and when I ate a meal, I did not eat as much. This is still all true, but I’ve obviously plateaued a bit. It’s certainly better to still be losing at least a little weight rather than gaining it back.

All of which is to say that this stuff has worked great for me, but only up to a point. I’d like to lose about another 20 or so pounds, but to do that, I’m actually going to have to try.

Anyway, in Zepbound and similar drug news:

  • Since the last time I blogged about Zepbound, I’ve been to the doctor for my annual physical. All of my various numbers and measures for things continue to improve, which is good, obviously. The main reason I wanted to do these drugs is to avoid becoming diabetic, and that mission seems accomplished, at least for now. I would like to lose more weight, but I’m not sure it would make me a lot “healthier,” if that makes sense.
  • In the “what’s it like to be on these drugs” genre of essays and articles, I liked this one from Kara Baskin at the Boston Globe, “Zepbound, six months in. The good, the bad, the awkward.” (I accessed it via archive.is.) We’re quite different people– Baskin is a 5-foot-1 woman who was about 30 pounds overweight– but with similar not-great health issues related to being fat and an interest in addressing that. It’s a good read, and I can relate to a lot of it. “Dining out requires strategy,” she writes, and then tells a story about how she and her husband were celebrating something with an elaborate dinner out; she went all out, and then puked it all up an hour later. I haven’t had that experience, but if I’m going to eat out at a restaurant, I have to plan my day around it, like eating a small lunch if I’m going out for a big fancy steak dinner or something. Like Baskin, “I sure do miss that wild abandon,” because I kinda liked pigging out once in a while. Of course, that’s probably why I needed to go on these drugs in the first place.
  • One thing I have not experienced, though Baskin says she has, is any sense of guilt or judgement from others for being on these drugs. No doubt a lot of that is heavily gendered, but I also think it’s becoming more “acceptable,” for lack of a better word for it. About a year and a half ago, KFF published a poll that said (among many other things) that 12% of adults in the US have tried a GLP-1 type drug, and about 6% are on these drugs right now. Also (and not surprisingly), the percentage of people on these drugs who have been told by a doctor they have health risks because of weight is a lot higher than that.
  • Lots more famous people are on these drugs and have told their stories– Serena Williams made some news about these meds, and now she’s a spokesperson for Ro. I think that has also helped destigmatize these drugs.
  • The Trump administration rolled out a deal with Eli Lilly and Novo Nordisk to reduce prices and the like, though it’s probably still too early to tell if it’s made a difference– or, given this administration, if it’s even a real thing. Oh, BTW, do you remember that press conference in the Oval Office where the guy fainted and where there was a photo of Trump just standing there looking like a zombie? That was about this deal.
  • These companies are getting crazy rich. Novo Nordisk (the maker of Ozempic and Wegovy) has turned Denmark into what Planet Money last year called a “Pharmastate.” Among other fun facts: “Nearly 1 out of every 5 Danish jobs created last year was at Novo,” and if you include the indirect jobs (suppliers, businesses like restaurants where Novo workers eat, etc.), it’s closer to half. And Eli Lilly (the maker of Zepbound) is the first corporation in health care to have a trillion dollar market value. Obviously, the downside of that for users is these drugs are super expensive. I’m cautiously optimistic that my insurance will keep covering it (though a lot of insurers are dropping it) and/or the prices will come down soon.
  • The other thing I’m optimistic about is the new drugs that are coming along. For one thing, the pill form of these drugs is almost here– supposedly. Oprah Daily was very excited that these pills were “Almost Here” at the beginning of December 2025, but I’ve been reading about the pills arriving on the market any day now for about a year, so…. The other big news is that Eli Lilly said that retatrutide, their “next-generation (of) obesity and diabetes medication,” is even more effective than Zepbound and also reduces knee pain, including in patients who are not obese.

Anyway, still on the Zepbound, losing a little at a time….

Remembering Marcel Cornis-Pop(e)

A couple of weeks ago, I learned that Marcel Cornis-Pop passed away at the age of 79. I had heard a while before this that he had been ill for some time.

Marcel, who often spelled his last name Cornis-Pope, I think because that’s closer to how it was pronounced in Romanian, was a long-time faculty member at Virginia Commonwealth University who came to VCU in 1988, the same year I began work on my MFA in fiction writing. Back in the day, he was quite the influence and mentor.

Our paths actually overlapped before VCU, sort of. Marcel, along with his wife (I believe his children were born in the U.S.), came to America from Romania, first to the University of Northern Iowa in my hometown of Cedar Falls. I have a good friend who took a class or two from him while he was at UNI on a Fulbright Scholar appointment. At the time, Romania was a Soviet satellite and one of the most repressive and brutal regimes in the Eastern Bloc, led by Nicolae Ceaușescu and his wife Elena. I don’t know if Marcel was ever imprisoned or threatened per se, but I do remember him talking about how he was involved in the underground publishing and distribution of books by famous American authors. So I was always under the impression that, really, he had to leave Romania.

Marcel was my introduction to critical theory, I believe in my first semester at VCU. I don’t remember a lot of the details, but there are two things about that seminar that stand out for me still. First, Marcel was not all that interested in “covering” every theory and topic he had on his syllabus if the natural progression of the course took things in a different direction. Someone told me (it might have been my friend at UNI) that they had a class with him where Marcel and his students abandoned most of the planned readings and spent the entire semester analyzing the Henry James short story/novella “The Figure in the Carpet.” Second, the one school of thought/critical theory that he was not at all interested in teaching or entertaining in any serious way (at least way back when) was Marxism, probably for obvious reasons.

As an undergraduate English major at the University of Iowa in the mid-1980s, I had no direct exposure to literary/critical theory in any of my classes. I think that was fairly common then. I knew a couple of different people from Iowa who went off to PhD programs in English after undergrad and then bailed out early when they figured out that at the graduate level, it was no longer about reading and “appreciating” literature. I found the theory all quite fascinating, in no small part because of how Marcel introduced it to his students.

I took an independent study with Marcel, I think in my second year. I remember meeting with him about what this independent study would be about. I suggested a couple of different authors he rejected, and then I mentioned that I had read Thomas Pynchon’s The Crying of Lot 49 as an undergraduate, and I think I had also by that point read V. on my own. That piqued his interest. I said, “I am kind of interested in trying to read Gravity’s Rainbow, but…” and before I could even get out the rest of my sentence– that Gravity’s Rainbow might be way too much of a project to take on– Marcel said, “That, do that. I’ll do an independent study with you about Gravity’s Rainbow.”

That was the most intense self-study experience in close reading that I have ever had. For those unfamiliar: Gravity’s Rainbow is a 760-page novel that is perhaps best compared to books like James Joyce’s Finnegans Wake in that the complexity of it all is intentionally baffling. Sometimes it would take me a couple of days to read five or six pages of it, and without the help of the excellent book by Steven Weisenburger, A Gravity’s Rainbow Companion: Sources and Contexts for Pynchon’s Novel, I’m not sure I would have made it through. So an intense reading experience, and I did finish the book, though I don’t know if I could tell you now anything about what it was “about.” As I recall, I wrote an essay that focused on the trajectory of the V2 rocket; the book begins with the line “A screaming comes across the sky,” and it ends on the last page in a section called “Descent,” where “it was not a star, it was falling, a bright angel of death.”

Mostly though, I remember Marcel for various pieces of advice about academia at the time. I asked him his thoughts on whether or not I should go into a PhD program and what kind of program, something more like literary studies, or something like this new thing I was exposed to at VCU called “composition and rhetoric.” The main thing he advised, something I tell students now when they ask about graduate school, is to go as quickly as possible because there is no point in being a graduate student any longer than necessary. I perhaps took that to an extreme in my PhD (I finished in 3 years), but I still think he was right about that.

Marcel went on to a long and illustrious career at VCU: he was chair of the department in the early 2000s, was one of the founders of a PhD program in Media, Art, and Text, and I believe at one point he was a dean as well. I never thought about it when I knew him way back when (our paths crossed a couple of times after I left Richmond in 1993, at the MLA convention and only briefly), but he too was more or less at the beginning of his academic career in the US when we met.

Rest in peace, old friend.

It’s Not Brave to Piss People Off

Paul Bloom had a column in The Chronicle of Higher Education at the end of September that asked the question “Why Aren’t Professors Braver?” I was able to access it via archive.is, so if you, like me, like to read CHE once in a while but you don’t want to spend a stupid amount of money for a subscription…. This commentary is closely based on a post on Bloom’s Substack, “Why are so few professors troublemakers?”

Bloom is a psychology professor, formerly at Yale and presently at the University of Toronto, and, among other works, is the author of Against Empathy: The Case for Rational Compassion and also The Sweet Spot: The Pleasures of Suffering and the Search for Meaning. I don’t know if his books make him “brave” or a “troublemaker,” and everything I know about Bloom comes from this op-ed and whatever I could glean from a quick Google search, but I get the impression that he is perhaps best known for making controversial and counter-intuitive arguments. And I don’t know a lot about the different schools of thought within the study of psychology, but Bloom is a rational psychologist, a school of thought that “emphasizes philosophy, logic, and deductive reason as sources of insight into the principles that underlie the mind and that make experience possible.” That might explain why he’s “against” empathy.

It is an odd essay. For starters, there’s his fuzzy description of bravery. Referring to a study of faculty in psychology about taboo subjects and self-censorship, Bloom seems to be saying bravery means being “bold” enough to speak out, to be willing to discuss (in public, in classrooms, in scholarship) some of the “taboo” positions psychology professors avoid– for example, “transgender identity is sometimes the product of social influence.” The other trait of the brave professor is to be a “troublemaker,” and his only example in this essay is Noam Chomsky. That sets the bar mighty high, both in terms of academic achievements and taking bold (and sometimes taboo and occasionally kind of crazy) political stances.

Rather than being brave, Bloom believes faculty are timid and mostly go-along to get-along. Why do faculty do this? According to Chomsky (as quoted by Bloom), it is because we all have been molded into conformity by the rigorous educational and professional training that enabled us to get these positions in the first place. “Most of the people who make it through the education system and get into the elite universities are able to do it because they’ve been willing to obey a lot of stupid orders for years and years.”

Bloom disagrees. “The explanation I like better,” he writes, “has to do with the nature of academe and the importance of not pissing people off” because of the potential career costs of offending our colleagues and because none of us wants to be disliked. So instead of “pissing people off,” we do things like sign political petitions we don’t agree with, don’t express an opinion on Israel-Gaza, hide our conservative views or other non-conforming opinions. Basically, keep your head down and do your work.

Of course, neither of them considers the possibility that faculty try not to piss people off because they don’t want to be rude, and because at the end of the day, being a professor is a lot like any other white-collar job, where one of the understood but unspoken qualifications is “plays well with others.” But I digress…

I don’t think Chomsky is exactly right, but there is no question that all faculty, regardless of discipline, spend years jumping through A LOT of hoops to get one of these jobs and then more hoops to get tenure. Plus a lot of faculty (though far from all) were the kind of students who begged for the gold stars and extra homework and who loved schooling so much they never left. That does create a cult-ish, rule-following, “one of us” club feel to the profession, no question.

But Bloom is wrong, I think mainly because pissing people off is counterproductive and not brave. To me, bravery is the willingness to make a personal sacrifice for a greater good. Firefighters, police officers, military personnel are all easy and obvious examples, as are protesters who are at risk of being tear-gassed or arrested or worse. Refusing to sign a political petition I disagree with or being a troublemaker is not even close to being brave, and this is especially true for tenured faculty and even more especially true for tenured faculty at a university with a strong union located in a blueish/purple state.

I have been blogging here for decades, and while I suppose some people think I’m a troublemaking asshole, that’s not why I do it. I write here because I am looking for an audience, and also because every once in a while, a post will lead to something else, like my work a decade ago about MOOCs or some other publication. But none of this takes much bravery, especially at this stage of my career.

The same was true for Chomsky. Bloom implies Chomsky’s past arrests were a result of his outspoken politics, but as far as I can tell, he got arrested a couple of times in the late 60s/early 70s at protests against the Vietnam War. I suspect a healthy percentage of college faculty around at that time also spent a few hours in jail for protesting the war, not to mention students back then. No, Chomsky spent about 50 years as a tenured professor at one of the most prestigious universities in the world and also as one of the most cited scholars ever. Maybe that counts as “troublemaking,” but being that successful is not brave.

And why should faculty be “brave,” anyway? Bloom wonders the same thing, briefly:

I really don’t know if professors are more timid than real-estate agents, accountants, nurses, and so on. If I’m right, our timidity arises from a fact about our profession — the career cost of offending even a small proportion of the people who are in power. But maybe this is also true for other jobs. If so, it’s a more general problem. Something is lost if real-estate agents, say, feel that they will be punished if they express their views on Israel-Gaza.

I sensed that the politics of the real-estate agent my wife and I hired last year to sell our previous house were generally similar to ours, though of course he never brought up his feelings about the Israel-Gaza war. That would have been quite odd. Similarly, especially when it comes to teaching, I think my students understand I’m a liberal college professor, but my specific beliefs about the taboo topics Bloom brought up earlier, about MAGA, about Palestine, etc., rarely come up. That’s partly because I don’t think it’s good teaching to dwell too much on my own political beliefs, but mostly because of the nature of the classes I teach. It’s a lot harder to avoid politics in fields like women’s studies, African-American studies, political science, a lot of literary fields, and so forth.

Anyway, Bloom argues that professors are different from real-estate agents or whatever because we are in the “truth business” and, with tenure, “we can’t be fired, no matter what we say and who we piss off.” Well, as we’ve seen recently with faculty all over the country being fired for posting something bad about the Charlie Kirk shooting, that’s not necessarily true. In fact, according to The Guardian, somewhere around 40 academics have been dismissed or punished in the U.S. for something they said about Kirk. Most of these punishments happened to faculty in states where they were already going after academic freedom, places like Texas, Florida, Indiana, and South Dakota. The academics who got in trouble– some of them tenured, some not– were trying to piss people off with social media posts claiming Kirk was a Nazi and a racist, that they were glad he was dead, and so forth.

I certainly do not think any of these people should have been disciplined or fired, and I also suspect that once these cases get to the courts, most of those fired will get their jobs back. That said, perhaps this is a case where how and when someone says something matters just as much as what they say. I wrote a post about Kirk and his killer in which I discussed how Kirk reminded me a lot of some of the guys I met in high school and college debate who were into it just for the chance to argue with others about anything, and how his shooter, Tyler Robinson, reminded me of some of the young men I see in college classes who have been radicalized by a weird underground world of internet/game/meme culture that is neither left- nor right-wing in any conventional sense. I began that post with the completely uncontroversial opinion that no one deserves to be gunned down in cold blood on a college campus or anywhere else, including Kirk. I didn’t call Kirk a Nazi or a racist or a sexist; rather, I just shared a video clip of him doing what he did on college campuses and suggested readers draw their own conclusions.

Now, that post (like 98% of the things I post here) didn’t find much of an audience– so far, it’s received fewer than 40 views here and about that many on Substack– but I also quite purposefully wrote it in a way so as not to piss people off. Maybe Bloom thinks I should have been a lot more direct in calling out Kirk as an up-and-coming proto-fascist/Christian Nationalist leader dangerous to the future of American democracy. I didn’t do that because I was trying to make my points while still being professional and civil, and I am very aware how anything anyone posts anywhere online lives on well past the moment. Maybe that doesn’t make me brave, but it isn’t timid. Faculty who are timid don’t say anything.

Writing, Rhetoric, and AI (so far)

I meant to post about this quite a while ago, but I got busy getting ready for the beginning of the semester, and then of course I got busy actually teaching, and then whammo, we’re starting the fifth week of the semester already. Flyin’ time.

I’m teaching a “special topics” class this semester called “Writing, Rhetoric, and AI,” and it’s a 400/500 level class– that is, both undergraduate (22) and graduate (3) students. The actual course is in Canvas and thus behind a firewall, but there is a website at rwai.stevendkrause.com. The website name– Rhetoric, Writing, and AI– should be Writing, Rhetoric, and AI, but it’s too complicated to change now. It’s the type of typo/error I was always looking for when working on MOOC stuff. Massive Open Online Courses? Massive Online Open Courses? Anyway…

The website is mainly for one of the three major projects for the class, the AI News & Updates Collaborative Annotated Bibliography Website and Report. I landed on using WordPress and running it on the server space where I’ve hosted this blog forever because it seemed like the least bad option. I wanted a space/platform where students could submit entries and I could approve and organize them, and I wanted it to be public. I thought about Substack, but I think that would have required all of my students to sign up for Substack, and while I like it, I didn’t want to force it on anyone. (I suggested a link to something on Substack to a friend/colleague of mine in the field, and this person said they wouldn’t have anything to do with that platform because of the “Nazi problem.”)

So the body/”blog-like” part of that site is where students’ entries are published. I have the Syllabus and assignments on the page “Course Documents.” There are three major assignments for everyone and an additional assignment for the graduate students. The first essay project is a reflection essay based on a series of AI writing “experiments” we’re trying out. Then there’s that already mentioned annotated bibliography assignment, and finally a research essay project assignment.

We’re also doing plenty of reading and discussing of the readings, which I list here– at least so far. Because this is a class that is new and a crazily fast-moving target, I thought I’d plan the first part of the class first and then adjust for the second part of the semester, depending on what students are interested in researching/talking about. I know we’re going to talk about the environmental issues with AI, but beyond that, I’ll have to see what students think.

For the first half of the semester, we have mostly been reading/discussing AI and writing fairly directly– comp/rhet, pedagogy, creative writing, tech writing (and AI in the workplace), and so forth. It’s all been a mix of MSM, websites, along with a handful of academic articles.

The graduate students also need to complete a book review project, where they will each make a short video about a book they read about AI and then lead a discussion about their book.

I think things are going reasonably well, though one of the challenges of teaching online is sensing “the vibe,” if you will. Everyone seems friendly enough and engaged, so that is a good thing. I am surprised about two things with this group so far. First, most of these students “hate” AI– at least so far and before this course. That squares with my experiences in introducing AI things into a class called “Digital Writing” last fall, where almost all of them were “against” AI, especially when it came to writing. But I thought a course explicitly about AI’s connections to writing would attract more “pro” AI students than it (apparently) has.

Second, most of them have little experience with AI. Some have even said that this class was the first time they ever used AI at all. Now, maybe some of these students are kinda/sorta underestimating their experiences with AI; after all, there’s good evidence that most students who use AI don’t want to admit it, and also good evidence that the vast majority of students use AI at least occasionally.

Then again, these are almost all English major types. My first-year writing classes are almost always composed entirely of students from majors outside the humanities. A lot of these students are not crazy about AI either, but that is definitely less true for the ones majoring in anything STEM or business-related.

Anyway, so far/so good– and it looks like I’m on the schedule to teach this course again next term. Probably. Stay tuned….

Thoughts on the Kirk and Robinson “Types”

I don’t know if the world needs my “thoughts” about Charlie Kirk and the young man who has admitted to the murder, Tyler Robinson. But not knowing a lot has never stopped me from blogging/posting about something before, so…

Before I go any further, let me be crystal clear about two key points.

First, I am against anyone getting shot for speaking on a college campus– or just being anywhere. Cold-blooded murder is bad. I know, a bold position. Kirk didn’t deserve to be shot any more than the two high school kids in Colorado who were shot on the same day. Kirk did not deserve to die, any more than the hundreds/thousands of people who are killed every year by confused young men like Robinson and like the shooter at that high school in Colorado.

Second, I think the reason the reaction from MAGA world and conservative media is so strong and emotional is that in those worlds, Kirk was a huge presence and friend. TPUSA was instrumental in Trump winning votes among college-aged men, and Kirk raised A LOT of money for Republican causes. He seemed to know every Republican member of Congress, but beyond that, Kirk was friends with lots of people on Fox News and in the right-wing podcasting world– Don Jr., JD, and many others in that circle. By all accounts, Kirk was incredibly charismatic and personable. I’ve read or heard multiple accounts of people who knew him saying things like “I didn’t agree with him about anything, but I always felt like he listened to me and cared about me.” Obviously, I’m hoping this does not become a full-blown McCarthyist-like effort to “punish what (Trump and his advisers) alleged was a left-wing network that funds and incites violence,” and I don’t support Kirk’s politics in any possible way. But I understand why Kirk’s millions of social media followers are upset.

Obama gave a very good speech the other day where he spoke in part about the Kirk shooting and the dangers of political violence in this country, and the importance of not letting political disagreements turn into shootings. Here’s a quote from The Guardian’s story about this:

While he believed that Kirk’s ideas “were wrong”, Obama said that “doesn’t negate the fact that what happened was a tragedy and that I mourn for him and his family”. Denouncing political violence and mourning its victims “doesn’t mean we can’t have a debate about the ideas” that Kirk promoted, he added.

Exactly. So, with that out of the way:

I didn’t pay much attention to Kirk before he was gunned down, and obviously, we’re all still learning more about Robinson. But as I’m learning a lot more about both of these guys, I’m beginning to recognize both of the Kirk and Robinson “types” in other men I’ve met and known.

I have a better handle on the Kirk type because he reminds me a lot of guys I knew from debate. I was active in debate throughout high school, I dabbled in it a bit as a competitor in college, and I did a fair amount of coaching and judging of high school debate as a college student. Debate was for me (and for everyone I knew who was involved in it) my “sport,” and it was just as much about competing and winning as football or wrestling or gymnastics or any other sport you can think of. I went with my team to tournaments all over Iowa and the Midwest, where dozens of different schools competed for championships, trophies, and bragging rights. Just like football, there were some schools that had powerhouse debate programs, teams that would win most of the time. (FWIW, I did not go to such a school, and I was a pretty mediocre debater, too).

Debate teaches participants how to take any position and to “win” the argument, regardless of what that debater actually believes. In the style of debate I did, each team of two people would take the affirmative side of a resolution one round and the negative side the next. I’m simplifying this, but that meant that in one round, you might passionately argue that gun control was bad, and then, in the next round, passionately argue that gun control was good. It didn’t matter if you believed one position or another because it was all part of the game. In other words, competitive debate is not some kind of Platonic dialogue that leads to a philosophical truth any more or any less than the outcome of a football game conveys a “truth.”1

Naturally, debate attracted people interested in arguing for fun and as a thought experiment, and also people interested in public speaking, research, politics, and so forth. It is no wonder that a lot of famous people in politics and the media had experience in competitive debate. Most of the debate kids I knew had (like me) left-leaning political beliefs, but I also knew staunch Reagan conservatives as well. A lot of these folks were great guys– fun to hang around with, smart, charming, great speakers– who treated their politics as part of the sport. Kirk would have fit right in with this group.

But debate– the academic kind, but also the Platonic kind as well– has rules, and it is more than just arguing. For one thing, you needed evidence to support your points, and that required hours in the library researching.2 Being good at arguing was not enough.

There has been a lot of praise heaped on Kirk for his “debate skills” and willingness to engage with anyone anywhere and on any topic, notably on college campuses. But as far as I can tell, what Kirk was good at was not the kind of debate I did in school (because he didn’t use evidence to make his points), nor was he good at a more idealistic/truth-seeking Platonic debate/dialogue (because there was no mutual exchange trying to learn some truth). Rather, Kirk was good at arguing with people. Or maybe more accurately, at people.

YouTube is awash with videos of Kirk doing his “ask me anything” bit on college campuses and in podcasts, but here’s a simple example of what I mean:

It’s entertaining, Kirk has his moments of charm and wit (well, if you overlook his sexist ideas about dating and his berating of most of the people who step up to the microphone), and he’s very quick on his feet. But this is just a trick: arguing, being willing and able to argue about anything regardless of how you feel about it. Given that Kirk’s goal with the Professor Watchlist website was to intimidate faculty and chill academic freedom, it’s hard for me to believe that Kirk was always sincere about these performances being an “exchange of ideas.”

Now, while I feel like I knew some Kirk types in debate and also in college politics, I feel like I know less about his (alleged/presumed) killer. But I do recognize the type in some of my late teen/early 20-something male students. Like Robinson– and also the guy who shot a couple of high school kids in Colorado on the same day as Kirk’s murder, the shooter behind the killing/injuring of Minnesota legislators, the guy who fire-bombed Josh Shapiro’s house in Pennsylvania, on and on and so forth– these are men who have been sucked into a baffling mix of shady internet discussion groups, Discord/gaming communities, the “manosphere,” crypto or other get-rich-quick schemes, conspiracy theory sites, fringe political and extremist group sites, etc.

I’ve never had a student about whom I thought, “hey, this guy could be a shooter,” and I’ve never felt like I needed to refer one of these students to the support services at EMU as someone who needed “help.” But some of the young men in my classes– sitting in the back of the room in first-year composition with baseball hats pulled down over their foreheads, staring at some kind of screen– espouse some of the strange theories and confusing politics that are an emerging part of Robinson’s story, and I think they inhabit some of the same kinds of online spaces as Robinson. The Robinson type represents the most extreme version of the crisis among young men I’ve been reading about for the last year or so, and 99.99% of these confused young men are not dangerous. But the problem of troubled and struggling young men in this country is real.

Kirk’s supporters in MAGA world are convinced Robinson and similar shooters are motivated by dangerous leftist ideologies. Kirk’s critics and many on the left argue that political violence in this country is mostly coming from right wing ideologues. My gut feeling is Robinson and his type aren’t motivated by left/right Democrat/Republican politics as we commonly understand them, but more by a messy stew of contradictory political views, internet memes and popular culture, gaming, and just overall “confusion,” for lack of a better way of putting it.

I have a hard time articulating the details of why I feel this way. Fortunately, I saw on the PBS News Hour an extremely helpful interview with Ryan Broderick, the primary writer of Garbage Day, which is “a Webby Award-winning newsletter about the internet and it comes out every Monday, Wednesday, and Friday.” He writes about all kinds of online culture, and man, he goes deep in places– a really interesting Substack site/newsletter. In this interview, Broderick explains in compelling detail what he sees as the likely meanings of the engravings Robinson made on the bullets that killed Kirk and that were recovered at the scene. Here’s the interview from September 16 (the clip starts with the interview, which is about 10 minutes long, though this links to the entire episode).

If you are interested in the much longer and detailed version, I’d recommend the post on Garbage Day, “Charlie Kirk was killed by a meme.” Again, Broderick goes deep and with compelling documentation, explaining the internet/game/meme culture connections between the evidence Robinson left behind and the shooter at a New Zealand mosque in 2019, Luigi Mangione, and other similarly confused shooters. The detail defies summary, but if you want the very short/”what’s the point” argument, I’d say Broderick sums it up well in the concluding paragraph:

We have let school shootings in America persist long enough that we have created a culture where kids grow up seeing them as a path towards fame and glory. Another consequence of how thoroughly the internet has flattened pop culture, politics, and real-life violence. All of it now is just another meme you can participate in to go viral. Made even more confusing by a new nihilistic accelerationist movement that delights in muddying the waters for older people who still adhere to a traditional political spectrum. Many young extremists now believe in a much simpler binary: Order and chaos. And if you are spending any time at all trying to derive meaning from violent acts like this then you are, by definition, their enemy.

I think this is spot on: I don’t think these shooters were radicalized by leftist professors,3 and they aren’t especially motivated by right-wing politics either. I think Broderick is right that the online culture inspiring (if not creating) shooters like Robinson defies our normal polarized sense of left and right.

In that Guardian article I mentioned earlier, Obama said “we” (as in all of us, I think) want to identify a clear enemy, and “We’re going to suggest that somehow that enemy was at fault, and we are then going to use that as a rationale for trying to silence discussion around who we are as a country and what direction we should go … And that’s a mistake as well.”

Trump and the Republicans are making this mistake right now, though going after “liberal extremists” who disagree with Trump is also a move consistent with the other steps toward authoritarian rule he has taken (and with no resistance from other Republicans). But folks on the left are just as polarized. If the victim of this recent shooting had been a prominent left-wing activist, I guarantee Democrats would be sifting through clues to try to prove the shooter’s right-wing political motivations.

But make no mistake, our conventional assumptions about right/left or red/blue politics in this country are not going to answer the question of these shooters’ motivations, and they are not going to prevent the next shooting by one of these troubled young men. As a society, we should be striving for a way to save these young men from being consumed by this culture and turned into killers.

Unfortunately, there is no way Trump or anyone else in DC will do this, and as a result, more politicians, school children, and just innocent people minding their own business are going to be killed. That is a sad and frightening reality of our times.

  1. I should note that my experiences in competitive debate are almost entirely limited to the 1980s– obviously, a long time ago. I don’t follow it anymore, but as I understand it, a lot of the strategies and approaches have changed in recent years. I think it’s still seen by participants as being more about a competition deciding winners and losers and less an actual exchange between people who hold different views, but I could be wrong about that. ↩︎
  2. In fact, I think the main skill I took away from debate was actually not “public speaking” at all. Rather, it was my introduction to how to do library research, how to find quotes to support your points, and how to keep track of/cite all of that evidence. ↩︎
  3. I wish I could indoctrinate students into left-leaning politics, but are you kidding? I can barely get them to read the syllabus. ↩︎

Enough With the Blue-books Already!

I think I first read someone bring up the “blue-book solution” for AI cheating shortly after ChatGPT exploded in fall 2022, but as I recall it, it was a joke. “Ha ha, now that AI can write as well as students, we’ll have to make them write by hand and while we’re watching. Ha ha!” My standard comment on social media to posts/articles about going back to handwritten and timed writing in the name of stopping cheating was “why not make them use a stone and chisel?” Ha ha.

Well, here we are three years in, and now blue-books really are a “solution” to AI. According to the Wall Street Journal, sales are up– way up. Earlier in August, Katie Day Good had an op-ed in The Chronicle of Higher Education titled “Bring Back the Blue-Book Exam,” and then at the end of August, Clay Shirky had an op-ed in The New York Times called “Students Hate Them. Universities Need Them. The Only Real Solution to the A.I. Cheating Crisis.” Both of these pieces make (mostly) serious arguments that the only way we can deal with/fight against AI cheating– a “crisis,” apparently– is to go back to the way we used to do these things. Way back.

Jeez.

Katie Day Good teaches at Calvin University in Grand Rapids and is “a media historian and cultural scholar of emerging technologies in education and everyday life.”  A lot of her current work seems to be about “cultural movements to disconnect from digital technology and take a ‘digital sabbath,'” so maybe this return to handwriting is kind of in her research/scholarship lane.

But Shirky?!? Here’s a guy who became famous as a new media evangelist, who, in the book Here Comes Everybody, enthusiastically writes about crowd-sourcing everything and the joys of a world where content is both consumed and produced by users. His by-line describes his current job as “a vice provost at N.Y.U.” where he helps “faculty members and students adapt to digital tools.” This is the guy who is suggesting a return to blue-books and oral exams?!?

Jeez again.

Before I get more into the specifics of Good’s and Shirky’s essays, I want to bring up three “bigger picture” problems with blue-books and similar calls to return to the 19th century, problems that don’t come up in either one of these essays. First, blue-books, along with oral exams and other face-to-face assessments, obviously won’t work for an online class, especially ones that are asynchronous. And roughly speaking, a little over half of all college students take at least one class online, and about a quarter of all college students only take classes online. So what is an online teacher to do, collect blue-books by snail mail?

Second, timed writing like this is bad pedagogy, and people in writing studies have known this forever. No one is an especially good writer when they are being timed and watched, not to mention with no opportunity for things like feedback from peers or revision. I think these exercises are more like filling out a form than writing, and honestly, a better solution is some kind of short answer/multiple-choice exam.

Third, and my apologies for offending anyone who thinks that blue-books might be a good idea, this is just fucking lazy. Good and Shirky are suggesting it’s just too much work for a teacher to change the assignment in some way where it is either not effective to use AI or that leans into AI in specific and useful ways. Shirky dismisses doing this work thusly: “We cannot simply redesign our assignments to prevent lazy A.I. use. (We’ve tried.)” It’s just too hard to do anything differently! Instead, Good and Shirky are saying we should travel back in time and just keep pretending that there is no other possible way to change how we do things.

I saw a version of this same logic at the beginning of my career in the early 1990s when word processing and internet technologies were emerging. There were similar efforts then to restrict student access to things like spelling and grammar checkers, or banning students from using online sources. Teachers– especially English teachers, I think– do not like to change how they teach, even when what and how they teach is altered by technology. As a result, teachers often follow the lazier solution, which is to ban the technology. Thus blue-books.

Both Good and Shirky begin the same way all of these AI freak-out essays begin: we can’t trust students at all and every one of them cheats on everything, especially now that it is so easy with AI. Good writes that the new capabilities of AI made her rethink the “take-home essays” she used to assign in favor of blue-book exams, presumably (in part) because of the possibility of cheating. Shirky begins with a vague story about a philosophy professor he met with who said he simply could not get “several” of his students to stop cheating with AI.

“Take-home essays” (I think she means what I’d call a take-home essay exam) have always required teachers to trust that their students won’t cheat. After all, when the student is working “at home,” there is nothing to stop that student from getting help from others and the internet, or even from getting someone else to complete the assignment for them. I don’t know if Good was ever concerned about her students cheating on their take-homes before AI (she doesn’t seem to have been worried), but she started using blue-books based merely on the possibility of cheating with AI.

As for Shirky’s philosophy professor colleague: I don’t know what “several” of them cheating with AI means (are we talking half the class? three students? what?), but to me, the solution is obvious: fail them. I am going to assume (perhaps wrongly) that this hypothetical professor Shirky cites has a policy that does not allow students to use AI, and I’m also going to assume that the professor explained this policy and the consequences of using AI, which (again, just guessing) was failure. So, what exactly is the problem? If it’s that easy for the professor to catch students cheating, why not just enforce your policy and fail those students?

My own approach has been to be very up-front with students about what I think is and isn’t cheating with AI (and the short version is it is cheating if the writer directly copies/pastes AI output into something that the writer said they wrote). If I think a student is cheating with AI– which, for me, is based on my admittedly not perfect sense of what a particular student’s writing “sounds like,” and the document history of their Google Doc– I talk to them about it. In the last year and a half or so, I have had a lot more students cheating than I did before AI, meaning I’ve had to have a lot more of those uncomfortable conversations with cheating students. I give them another chance to do the assignment right, and almost all of them manage to turn things around and pass the class just fine. I had a couple of repeat cheaters last year and I failed them on the spot.

In a post on Substack where she was explaining why she’s using AI detection software, Anna Mills described a confrontation she had with a student who adamantly denied he cheated with AI even though Mills is almost certain he did. After all, students also know AI is difficult to detect. I get it, and it can be hard to prove AI cheating. I’m sure I’ve had students who have managed to get away with some AI that I would have counted as cheating had I known. But every time I have had that “I think you cheated” conversation with a student, be it with AI or old-fashioned plagiarism, that student has confessed, often in tears.

As I’ve said many MANY times before:

  • Most students do not want to cheat.
  • Students cheat when they are failing and they are desperate.
  • Students who cheat are not criminal masterminds and are easily caught.
  • All that said, it does depend on what exactly counts as cheating, and I don’t think it is cheating if students use AI as part of their process.

Good views this return to handwritten essays as a “balm for my tech-weary soul.” She goes on:

My students’ handwritten essays brim with their humanity. Each page conveys personality, craft, voice, and a “realness” that feels increasingly scarce in our screen-saturated, algorithmically-distorted information environment. As such, handwriting accomplishes something greater now than ever before in education: It restores a sense of trust to the student-teacher relationship that has been shaken by AI.

In the next paragraph, she also brings up some of the other beliefs in handwriting’s “authenticity”: that handwriting helps people make better connections in the brain than typing, that it results in better notes, etc. Well, right before Covid struck, I was researching laptop/cellphone bans in f2f classes and requiring students to take notes by hand. Long story short, the studies I’ve seen comparing laptop notes with handwritten notes in classrooms– mostly quantitative/experimental methodologies coming out of Education/Psychology– strike me as flawed for all kinds of different reasons. And the claims about handwriting as a tool for judging one’s “authenticity” and identity and the like have been debunked by many researchers– I would recommend in particular the very readable and well-researched book by Tamara Thornton, Handwriting in America: A Cultural History. I also have my own baggage as someone with terrible handwriting, who remembers failing handwriting in the fourth grade, and also as someone who has typed everything I could type since I was in high school.

So for me, the idea that handwriting is “better” and that it is both possible and reasonable to make judgements about the writer based on their handwriting, that more of one’s humanity is revealed through handwriting– that’s all bullshit.

Shirky doesn’t seem to think that handwriting has the same kind of “Magic” that Good sees in her students’ writing, and he admits that a lot of students and faculty are skeptical of this change. But in the name of rigor and a “more relational model of higher education,” we must return to the way things were done, and he then proceeds to cherry-pick different speech and writing assignments all the way back to the 1300s. In the process, I think he indirectly describes a lot of the pedagogy common to small discussion classes like first year writing: requiring students to meet during office hours, entering into “Socratic dialogue or simple Q&A” with the class, and so forth.

“There is the problem of scale,” with old techniques like oral exams, Shirky admits. “With some lecture classes in the hundreds of students, in-class conversation is a nonstarter.” Well, wait a minute: maybe the past practices we need to return to are smaller classes. Perhaps one of the reasons why I am not that worried about AI cheating is that I feel like I actually do most of these things in the classes I teach now. My students end up doing a lot of writing— discussion posts to readings, scaffolded essays part of the research project, and drafts of work in progress— along with plenty of discussing as well.

So what if every class were no more than 25 students? That wouldn’t be logistically possible, and it wouldn’t be a complete solution to AI cheating either, of course. But it’s a start, and we’ve also known for a very long time that lecturing is a terrible pedagogy.

I will say this: both essays end on a vaguely positive note, even if their optimism about the future does not strike me as particularly realistic. Good takes a lot of pleasure in this return to the past, connecting us back to Plato and education as “not a process of pouring knowledge into an empty soul, but as a ‘turning around’ of the soul in the direction of beauty and truth.” She sure seems to think that those blue-books and handwriting can accomplish a lot!

And after spending the rest of his op-ed saying there’s nothing to be done about AI except return to “technology free” classrooms, Shirky ends by predicting higher education will adapt. “Despite frequent pronouncements that college is doomed because students can now get an education from free online courses or TV or radio or the printing press, those revolutions never flattened us. Nor will A.I.” We’ll see. I want to believe Shirky is right, but….

New School Year Resolutions II

367 days ago, I posted about my new school year resolutions and plans, something I’ve been doing fairly regularly (though not every year) forever. So I see this year’s resolutions as a sequel.

This is not to say that the “vibe” this school year is similar– at least I don’t think so. For one thing, last year I had some hope that Harris was going to win. But also, it’s a year later. Last November, Annette turned 60, and I will turn 60 in March 2026. When you turn 60, TIAA, the company that handles the 401(k)-like retirement plan for folks in higher education and the like, gets very excited and insists on meeting. I don’t think either of us is planning on retiring earlier than 65 (and probably 67), but just the fact that we met with the retirement plan people means it’s getting closer.

Plus, there’s the whole shitshow of the Trumpian-fascistic government’s attack on higher education and anything involving DEI. As I wrote back in March, the nice thing about being at a place like EMU is we’re kind of “under the radar,” so to speak, and, unlike the big-time schools Trump is going after, we don’t get much money from the federal government. But the stink of it all still hangs in the air, and there are plenty of other worrying things happening at EMU. Rumor has it enrollment is down even more than the administration has admitted. Rumor has it that buy-out offers to faculty might be getting better.

These things (combined with a summer where I travelled a lot and where I didn’t do too much school work) do make one think about exit strategies. That seems at odds with a resolution to do/improve in the coming year. But here we are.

So, how do this year’s resolutions match up with last year’s?

The first item was to wade deeper into AI in My Teaching–Much Deeper, and I did that. My first-year writing classes’ research theme was “your career goals and AI,” and in fall 2024, I taught a class called “Digital Writing” where two of the assignments were all about AI. I thought it went okay to pretty good.

This semester, I’m back with the same themes in first-year writing, and I think it’s more relevant than ever. As I said to my 121 students today, when I was their age in the mid-1980s, there was this new thing called the “internet” that was starting to get some attention. But I don’t think a lot of folks my age had any sense then of how much of our lives would be altered by this weird internet thing. AI feels very much like that now, though more accelerated. I think my students got the comparison.

The other class I’m teaching is an advanced undergraduate/graduate level “special topics” course called Rhetoric, Writing, and AI. It’s an online class (taught behind a firewall in Canvas), and the website is mainly for one of the assignments where my students (and probably me too) will be collaboratively building an annotated bibliography of interesting “items” about AI to share– articles, websites, videos, podcasts, whatever. But I’ve also included copies of the assignments and links (so far) to the readings. I will probably be writing another post soon, specifically about this class.

Second was to try to be at least a little more “involved.” I think I’m going to pass on that for this coming year, though I remain the department rep on the “college advisory committee.” That group meets for 90 minutes a pop twice a month, so I think that’s enough.

The third thing was to put together my next (maybe last?) sabbatical/research release project proposal, and that was one of my bigger disappointments from last year. My proposal was about how the discourse around AI now resembles a lot of the discourse among writing instructors around the introduction of computers and the internet in the ’80s and ’90s. I thought the idea I had was a good one, and I still think that’s true. Alas, I got turned down. But this is another year, and I still think this (or something like it) would be a good project. Some rewording and rethinking, try try try again, and all that.

The fourth item/resolution was to keep figuring out Substack, and compared to where I was last year this time, I feel like I’m further along. Back then, I was trying to shift all of my blogging-type writing to Substack. The reason why I moved back (and I’m now doing both) is that I don’t think the audiences are the same. I’m still trying to figure out Substack, and I’m still trying to figure out who/what to read over there.

Last but not least, keep losing weight with Zepbound. That’s kind of a “not good/not bad” news thing. I started taking Zepbound in January 2024, and by August 2024, I had lost about 35 pounds. Since August 2024, I’ve lost around 6 or 7 more pounds. For me, that’s “not good” because I would have liked to have lost more weight by now. On the other hand, it’s “not bad” because I’ve at least lost some weight and I haven’t gained it back.

So I guess I could add to this resolution to try to mix in a lot more “diet and exercise,” along with Zepbound. My ultimate goal would be to lose another 15-20 pounds because, based on the extremely problematic BMI scale, that would give me a score that is just on the edge of being merely “overweight.” That’s “not good” because I am terrible at dieting, and I also suppose it’s not entirely “good” that I’d have to lose a lot more than 20 pounds to be in the “normal” range on the BMI scale. But it’s also “not bad” because the main reason why I went on this stuff in the first place was to be healthier, and relative to where I was, I think that’s worked out well.