Zepbound Updates: Me, and In the News

I’ve now been on Zepbound for 70 weeks, and I was just looking over my notes on my progress over that time.1 For the first 35 weeks, I was losing about a pound a week without thinking about it much. I didn’t exercise or diet more than usual (though I did and still do exercise and watch what I eat); I just wasn’t as hungry and so I didn’t eat as much.

Since week 35 (which was September 2024), I’ve lost just shy of eight more pounds– about 3 more pounds since the last time I wrote here about this– for a total of around 42-43 pounds. So on the one hand, I haven’t completely plateaued in my weight loss progress, and at least I am still heading in the right direction. Plus even with the stall, I’m still a lot less fat (and healthier) than I was before I lost the weight– and at least I haven’t gained it back (yet). On the other hand, I ain’t going to get to my goal– which means losing about another 17-20 pounds– with Zepbound alone.

Obviously, that means I need to start doing something closer to actually dieting and exercising more– or at least I need to shake up the routine/rut, and I’ll be doing that for the next month. Annette and I are going on a month-long trip through Europe starting next week as part of a celebration of our 31st wedding anniversary, a trip that was delayed by a year because we decided to buy a new house. I won’t be dieting, and because it will not be easy to refrigerate Zepbound as we go from place to place, I’ll have to skip a week of dosing at the end of the trip.2 Still, I’m not worried because on trips like this that involve a lot of walking around, I almost never gain weight.

But enough about me (or just me). There’s been some interesting Zepbound news in the last couple of months. A few highlights:

  • Access to Zepbound and similar weight loss drugs remains a significant problem. The New York Times ran an article about this back in December, describing how several states’ biggest insurers have cut back on coverage. My insurance still covers it, but they have added additional hoops I need to jump through to get the drugs.
  • One of the other ways that access has been reduced is through new restrictions on compounding pharmacies and other “knock-off” versions of these drugs. Basically, because these drugs are no longer in short supply from the manufacturers, companies that make their own versions of Zepbound (and there are a lot of companies like this) can no longer sell them. The drug manufacturers and companies like Ro have been making deals that make the drugs a little less expensive, but they’re still expensive. I’m just happy that I don’t have to decide if it would be worth the $700 or so a month it would cost me out of pocket (and honestly, it might be).
  • Still, there’s a lot of optimism about the near future of these drugs. Eli Lilly Chief Scientific Officer Dan Skovronsky gave an interview on CNBC where he said a daily pill as effective as a weekly injection will be available soon (maybe by the end of the year), and he also talked about stronger versions of these drugs and about using them for lots of things besides weight loss specifically: heart disease, sleep apnea, and maybe other conditions like addiction. Sure, this guy is trying to sell Eli Lilly drugs, but there are a lot of articles out there reporting similar things.
  • Just a few days ago, there was this article in The New York Times, “Group Dining on Ozempic? It’s Complicated,” which is about the social etiquette of being on a GLP-1 drug and out to eat with others when you don’t eat that much. I am obviously not shy about the fact that I’m taking Zepbound, so when I’m eating with others at a restaurant or at a dinner party, I just tell people it’s the drugs. Annette and I went to a breakfast diner place in Detroit in April, and I ordered what turned out to be an enormous skillet of eggs, hash browns, sausage, and peppers and onions. It was delicious, but I could barely eat half. When the waitress came over to clear our plates, she seemed concerned that I might not have liked it. “No, it’s great– it’s just I’m taking one of those weight loss drugs and I can’t eat more.” “Oh yeah?” she said. “How’s it going? I have a cousin of mine who is on one of those things and has lost 50 pounds.” So at least she understood.
  • And then there was the recent news that Weight Watchers (aka WW International) was going bankrupt, largely as a result of people shifting to drug alternatives. I saw this op-ed in The New York Times on the mixed messages of Weight Watchers, “Weight Watchers Got One Thing Very Right” by Jennifer Weiner. On the one hand, Weiner points out that a lot of the dieting culture promoted by Weight Watchers was harmful. A lot of mothers took their slightly overweight but still growing/developing daughters to Weight Watchers too early, and a lot of their customers never succeeded and yet kept coming back, “stuck in a cycle of loss, regain and shame that didn’t ultimately leave them any thinner, even as it fattened Weight Watchers’ coffers.” On the other hand, Weiner says Weight Watchers provided its customers– especially women– a sense of community at what were (pre-Covid, of course) regular meetings. They were safe “third spaces,” a gathering that was one of the “all-too-rare places in America where conservatives and progressives found themselves sitting side by side, commiserating about the same plateaus or the same frustrations or the same annoyance that the powers that be had changed the point value of avocados, again.”

I’ve written about this before, but I actually was a Weight Watchers member and attended meetings (with my wife) for about three years in, I believe, the early 2010s. The regular meeting we attended was similar to what Weiner describes. It was at a WW storefront center in a strip mall, located right next to a Chinese restaurant. Whenever I went, I always peed right before weighing in, anxious to cut every possible ounce. Then there’d be a meeting that lasted anywhere from 30 to 45 minutes where people “shared,” and where the leader (in our case, a gay man named Robert) led us through some lesson, mostly built upon stories of his own weight loss that he’d repeat over and over. I was not the only man to attend these meetings, but yes, it was mostly women. Annette and I attended regularly enough to know most of the other “regulars,” and also to spot folks who would show up once or twice and never again. I do not remember any discussions about exercise, or really any other weight loss advice that went beyond “eat less.” In those three years, my weight did not change, and I never felt the sense of belonging to a community. It felt pretty hopeless by the end. So yeah, I don’t feel too badly about the demise of WW.

  1. As part of my journaling practices, I write down my weight each morning, and I also track my weight on the days when I take Zepbound. ↩︎
  2. Zepbound needs to be stored in the fridge, but it can be kept at room temperature for up to 21 days. So I’ll have to skip one dose in the last week we’re gone and then I’ll be able to return to normal when we get back. That’s a good thing because if I miss two weeks, I need to start over on Zepbound with the lowest dose, and as far as I can tell from my limited internet research, people who restart with Zepbound often don’t have the same level of success the second time around. ↩︎

What Exactly is “Cheating”?

Here’s another freakout piece about AI, James D. Walsh’s New York Magazine piece “Everyone is Cheating Their Way Through College.”1 The TLDR version is the headline. “Everyone” is cheating. No one wants to do the assignments. Cheaters are super-duper sophisticated. Teachers are helpless. Higher education is now irrelevant. Etc., etc.

Walsh frames his piece with the story of Chungin “Roy” Lee, a student recently kicked out of Columbia for using AI to do some rather sophisticated computer programming cheating, I believe both for some of his courses and for an internship interview. He has since launched a startup called Cluely, which claims to be an undetectable AI tool to help the user, well, cheat in virtually any situation, including while on dates and in interviews. Lee sees nothing wrong with this: Walsh quotes him as saying “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating.”

Walsh is tapping into the myth of the “mastermind” cheater, the student so brilliant they could do the work if they wanted to but prefer to cheat. In the real world, mastermind cheating basically does not exist– which is exactly why Lee’s story has been retold in all kinds of places, including this New York Magazine article, and why cheaters don’t usually raise over $5 million in VC start-up money with an app they created. Rather, 99.99999% of the time (and, in my 30+ years of teaching experience, 100% of the time), students who cheat are not very smart about it,2 and the reason they cheat is that they are failing the course and desperate to try anything to pass.

The cheaters Walsh talks to for this article (though maybe they’re not all cheaters, as I will get to in a moment) all claim “everyone” is already using ChatGPT et al for all of their assignments, so what’s the big deal? I’ve seen surveys, like this one summarized by Campus Technology, that claim close to 90% of students “already use AI in their studies,” but that’s not what my students have told me, and it’s not really what the survey results say either. I think 90% of college students have tried AI, but that’s not the same as saying they regularly use AI. According to this survey, it’s more like 54% of students who said they used AI “at least on a weekly basis,” and the percentages were even lower for using AI to do things like create a first draft of an essay.3

I could go on with the ways that I think Walsh is wrong, but for me this article raises a larger question that I think is at the heart of AI hand-wringing and resistance: what, exactly, is “cheating” in a college class?

I think everyone would agree that if a student turns in work that they did not do themselves, that’s cheating. The most obvious example in a writing class is a student handing in a paper that someone else wrote. But I don’t think it is cheating for students to seek help on their writing assignments, and the line between getting help and having others do the work can be fuzzy. Here are three relatively recent non-AI-related examples I’ve had to deal with:

  • I teach a class called “Writing for the Web” in which (among other things) I require students to work through a series of still-free tutorials on HTML and CSS on Codecademy, and I also require them to use WordPress to make a basic website. A lot of my students struggle with the technical aspects of these projects, and I always tell them to seek help from me, from each other, and from friends. Occasionally, a struggling student will get help from a more techno-savvy friend, and sometimes, the line between “getting help” and “getting someone else to do the work” gets crossed. One such student perhaps welcomed and encouraged a little too much help from their friend, but the student still did most of the writing. Is this cheating?
  • I had a first-year writing student who went to see a writing tutor (although not one in the EMU writing center) about one of the assignments. I always think it is a good idea for students to seek help and advice from others outside the class— friends and family, but also tutors available on campus or even someone they might pay. I insist students do all of their writing in Google Docs for a variety of reasons— mostly as a way for me to see their writing process and to help me when grading revisions, but also because it discourages AI cheating. When I looked at the version history and the document comments, I saw that there were large chunks of the document actually written by the tutor. Is this cheating?
  • Also in first-year writing, I had a student who handed in an essay much more polished than the same student’s earlier work. I suspected the essay was written by someone else, so I called the student in for a conference. After I asked a few questions about some of the details in the essay, the student said, “Wait, you don’t think I wrote this, do you?” “No, I don’t, actually,” I said. The student said, “Well, I didn’t type it. What happened was I sat down with my mom and told her what the essay was supposed to be about, and then she wrote it all down for me.” Is this cheating?

I think the first example is kind of cheating, but because the extra help was more about coding and less about the writing, I didn’t penalize that student. The second example could count as cheating because someone other than the student did some of the work. But it’s hard to blame the student because the tutor broke one of the cardinal rules of tutoring: help, but never actually do the client’s/tutee’s work for them. The third example strikes me as clearly cheating, and every person I’ve told this story to believes that the student had to have known they were cheating. It’s probably true that the student was lying to me, but what if they really did think this was just getting help? Maybe Mom had been doing this all the way through high school.4

While I think other college writing teachers would mostly agree with the previous paragraph, there is not nearly that level of consensus about cheating and AI. Annette Vee has a good post here (in a newsletter sponsored by Norton, fwiw) about this and AI policies. Usefully, Vee shares several different policies, including language for banning AI.

My own policy is pretty much the same as Vee’s, which is also very similar to Nature’s AI policy for publications. First, you cannot use AI-generated writing verbatim because “any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.” Second, if a writer does use AI as part of the process (brainstorming, researching, summarizing, proofreading, etc.), they need to explain, in some detail, how they used it. So now, when my students turn in an essay, they also need to include an “AI Use Statement” in which they explain what AI tools they used, what kinds of prompts, how they applied the results, and so forth. I think both my students and I are still trying to figure out how much detail these AI Use Statements need, but that’s a slightly different topic.5

Anyway, while I am okay with students getting help from AI in more or less the same way they might get help from another human, I think a lot of teachers (especially AI refusers) are not.

Take this example of what Walsh sees as AI cheating:

Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copying and pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”

Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

Now, I don’t think AI advice on outlining is especially helpful, and I don’t think any teacher should be asking for “tidy five-page papers.” If AI means teachers have to stop assigning writing as a product and instead teach writing as a process, then I am all for it. But regardless of the usefulness of AI outline advice, does what Wendy did with AI count as cheating? Walsh seems to think it does, and a lot of AI refusers would see this as cheating as well.

If Wendy cut and pasted text directly from the AI and just dumped it into an essay, then yes, that’s cheating— though proving AI cheating like that isn’t easy.6 But let’s assume that she didn’t do that and she used this advice as another brainstorming technique. I do not think this counts as cheating, and the fact that Wendy probably has some professors who think this is cheating is what makes this so confusing for Wendy and every other student nowadays.

Eventually, educators will reach a consensus on what is and isn’t AI cheating, and while I’m obviously biased, I think the consensus will more or less line up with my thoughts. But because faculty can’t agree on this now, it is essential that we take the time to decide on an AI policy and to explain that policy as clearly as possible to our students. This is especially important for teachers who don’t want their students to use AI at all, which is why instead of “refusing” AI, educators ought to be “paying attention” to it.

  1. The article is behind a paywall, but I had luck accessing it via 12ft.io ↩︎
  2. Though I will admit that I may have had mastermind cheaters in the past who were so successful I never caught on…. ↩︎
  3. The other issue about when/why students cheat– with AI or anything else– is that it depends a lot on the grade level of the student. The vast majority of problems I’ve had with cheaters, generally and with AI in particular, have been with first-year students in gen ed composition and rhetoric. I rarely have cheating problems in more advanced courses and with students who are juniors and seniors. ↩︎
  4. Ultimately, I made this student rewrite their essay on their own. As I recall, the student ended up failing the course because they didn’t turn in a number of assignments and missed too many classes, which is a pretty typical profile of the kind of student who resorts to cheating. ↩︎
  5. I think that, for all of my students last year, I was the only teacher who had an AI policy like this. As a result, the genre of an “AI Use Statement” was obviously unfamiliar, and their responses were all over the map. So one of the things on my “to do” list for preparing to teach in the fall is to develop some better models and better language about how much detail to include. ↩︎
  6. As I’ve already mentioned, this is one of the reasons why I use Google Docs: I can look at a document’s “Version History” and see how students put their essays together. Between looking at that and just reading the essay, I can usually spot something suspicious. When I think a student is cheating with AI (and even though I spend a lot of time explaining to students what I think is acceptable and unacceptable AI use, this still happened several times last school year in first-year writing), I talk to the student and tell them why I think it’s AI. So far, they’ve all confessed. I let them redo the assignment without AI, and I tell them if they do it again, they’ll fail the class. That too happened last school year, but only once. ↩︎

Why Universities are “Special” for Everyone and Not Just Universities

I just read a piece Lee C. Bollinger wrote for The Atlantic titled “Universities Deserve Special Standing” (free link article!). It’s long, and it gets into the weeds about universities and the First Amendment. Bollinger was President of Columbia University from 2002 to 2023, but I’ve also heard of him because he was once a local, serving as President of the University of Michigan from 1996 to 2002.

Bollinger, who is a lawyer and a First Amendment scholar, argues that universities, similar to “the press,” depend upon and are protected by the First Amendment to do their work, and the work of both universities and the press is what makes democracy possible in the first place. Here’s a long quote that I think gets Bollinger’s main point across:

So, here is my thesis: American universities are rooted in the bedrock of human nature and the foundations of our constitutional democracy. They are every bit as vital to our society as the political branches of government or quasi-official institutions such as the press (often even referred to as the “fourth branch” of government). Universities, as institutions, are the embodiment of the basic rationale of the First Amendment, which affirms our nation’s commitment to a never-ending search for truth.

In some ways, universities are a version of the press: They make a deep inquiry into public issues and are always on call to serve as a check on the government. But if their deadlines are far longer, the scope of their work and remit in pursuing truth reach to everywhere that knowledge is or may yet be. Their role in society touches the full panoply of human discovery, never limited by what may be newsworthy at a given moment. And, as many have noted in today’s debate over federal funding, the results of academic research and discovery have benefited society in more obviously utilitarian ways, including curing disease, cracking the atom, and creating the technologies that have powered our economic dynamism and enhanced our quality of life.

I agree with this. Certainly, there have been a lot of times when universities have failed at embodying the values of free speech and the search for truth or enhancing everyone’s quality of life– and the press has failed in its “fourth estate” role as a check on the government often enough as well. But the principle Bollinger is articulating here is completely true.

The problem, though, is that the “they” Bollinger is talking about is people like him: university faculty and administrators, and particularly those who are tenured. At best, he’s only indirectly talking about everyone else on university campuses– students, and also the staff and the legions of non-tenure-track instructors who make these places run. He’s talking about academic elites.

I suppose I’m one of the “theys” Bollinger is describing because I am a tenured professor at a university. But besides being at a “third tier” university, I have always felt that what best protects my rights to teach, to write, and to say what I want without fear of losing my job is not tenure or “the university” as an institution. Rather, it’s the union and the faculty contract.

In any event, arguing to anyone outside of the professoriate that universities (or university professors) are “special” and should be able to say or do anything without ever having to worry about losing their jobs in the name of the “search for truth” does not go over well. Believe me, I’ve offered a version of Bollinger’s argument to my extended family at Thanksgiving and Christmas gatherings over the years, and they are skeptical at best. And these people are not unfamiliar with higher education: everyone in my family has some kind of college degree, and Annette and I are not the only ones who went to graduate school.

Besides, if you want to convince normal people that universities deserve a special place in our society, making the comparison to “the press,” which the general public also distrusts nowadays, might not be the best strategy.

Like Bollinger, I have spent my professional life in academia, so I’m biased. But I do think universities as institutions are important to everyone, including those who never step on campus. For starters, there is all that scientific research: the federal government pays research universities (via grants) to study things that will eventually lead to new cures and discoveries. That accounts for almost all of the money Trump (really, Musk) is taking away from universities.

More directly, large research universities (which usually have medical schools) also run large hospitals and health care systems, and these are the institutions that often treat the most complicated and expensive problems– organ transplants, the most aggressive forms of cancer, and so forth. University-run healthcare systems are the largest employer in several states, and universities themselves are the largest employer in several more, including Hawaii, California, New York, and Maryland. (By the way, Walmart is the largest employer in the U.S.). And of course, just about every employer I can think of around here is indirectly dependent on universities. I mean, without the University of Michigan, Ann Arbor would not exist.

There’s also the indirect community-building function of universities that goes beyond the “college town.” Take sports, for example. I’m reluctant to bring this up because I think EMU would be better off if we didn’t waste as much money as we do trying to compete in the top division of football. Plus college sports have gotten very weird in the age of Name, Image, and Likeness deals and the transfer portal system. But it’s hard to deny the fandom around college sports, especially living in the shadow of U of M.

And of course, the main way that everyone benefits from universities is we offer college degrees. Elite universities (like the ones that have been in the news and/or the target of Trump’s revenge) don’t really do this that well because they are so selective– and they need to be selective because so many people apply. This year, 115,000 first-year and transfer students applied to Michigan, and obviously, they can only admit a small percentage of those folks.

But the reality is that only the famous universities that everyone has heard of are this difficult to get into. Most universities, including the one where I work, admit almost everyone who applies. We give everyone who otherwise couldn’t get into an elite university the chance to earn a college degree. That doesn’t always work out because a lot of the students we admit don’t finish. But I also know the degrees our graduates earn ultimately improve their lives and futures.

I could go on, but you get the idea. I understand Bollinger’s point, and he’s not wrong. But academics like us need to try to convince everyone else that they have something to gain from universities as well.

What I Learned About AI From My First Year Writing Students

I turned in grades Friday and thus wrapped up the 2024-25 school year. I have a few miscellaneous things I’ll have to do in the next few months, but I’m not planning on doing too much work stuff (other than posts like this) until late July/early August when I’ll have to get busy prepping for the fall. Of course, it’s difficult for me to just turn off the work part of my brain, and I’ve been reflecting on teaching the last couple of days: what I’ll do differently next time, what worked well, which assignments/readings need to be altered or retired, and also what I learned from my students. That was especially true with my sections of first-year writing this year.

This past year, the topic in my first-year writing courses was “Your Future Career and AI.” It was part of a lot of “leaning in” to AI for me this year. As I wrote back in December, we read and talked about how AI might be useful in some parts of the writing process, but also about AI’s limitations, especially when it comes to the key goals of the class. AI is not good at research (especially research on anything academic/behind a library’s paywall), it cannot effectively or correctly quote/paraphrase/cite that research in an essay in MLA style, and it cannot tell students what to think.

In other words, by paying attention to AI (rather than resisting, refusing, or wishing AI away), I think my students learned that ChatGPT is more than just a cheating device, and I think I learned a lot about how to tweak/redesign my first-year writing class to make AI cheating less of a problem. Again, more details are in my post “Six Things I Learned After a Semester of Lots of AI,” but I think what it boils down to is: teach writing as a process.

But I also learned a lot from my students’ research about the impact of AI on all sorts of careers and industries beyond my own. So the other day, when I read this fuzzy little article by Jack Kelly on the Forbes website, “The Jobs That Will Fall First as AI Takes Over The Workplace,” I thought it seemed about right, at least based on what my students were telling me with their research.

Now, two caveats on what I’ve learned from my freshmen: first, they’re freshmen (mostly– I had a few stray sophomores and juniors in there too), and thus they are inexperienced and incomplete researchers. Second, one of the many interesting (and stressful and fun) things about short- and long-term projections of the future of Artificial Intelligence (both generative AI, which is basically where we are now, and artificial general intelligence or artificial superintelligence, where the AI is as “smart” as or “smarter” than humans) is that no one knows.

That said, I learned a lot. In a nutshell: while it’s likely that everything will eventually be impacted by AI (just as everything was affected by one of the more recent general-purpose technologies, the internet), I don’t think AI will transform education as much as a lot of educators fear. Though like I just said, every prediction about the future of AI has about the same chance of being right as being wrong.

For starters, all of my students were able to find plenty of research about “x” career and AI. No one came up empty. Predictably, my students interested in fields like engineering, accounting, finance, business, law, logistics, computer science, and so on had no problems finding articles in both MSM and academic publications. But I was surprised to see the success everyone had, including students with career ambitions in nursing, physical therapy, sports training, interior design, criminology, gaming, graphics, aviation, elementary school teaching, fine art, music, and social work. I worried about the students who wanted to research AI and careers in hotel and restaurant management, theatre, dance, and dermatology, but they all found plenty of resources. The one student who came closest to coming up empty was a young man researching AI and professional baseball pitching. But yeah, there were some articles about that too.

Second, the fields/careers that will probably be impacted by AI the most (and this is already happening) involve working with A LOT of complex data, or involve a lot of repetitive tasks that nonetheless take expertise. Think of fields like accounting, finance, and basic data analysis. None of my students researched them, but as that Forbes article mentioned, AI is also already reshaping careers like customer service, data processing, and simple bookkeeping.

None of my students wrote much about how AI will replace humans in “X” careers, though some of them did include some research on that for careers like nursing and hospitality. Perhaps my students were researching too selectively or too optimistically; after all, they were projecting their futures with their research and none of them wanted AI to put them out of a career before they even finished college. But most of what my students wrote about was how AI will assist but not replace professionals in careers like engineering and aviation. And as one of my aviation students pointed out, AI in various forms has been a part of being a pilot for a long time now. (I was tempted to include here a link to the autopilot scene from the movie Airplane!). Something similar was true in a lot of fields, including graphic design and journalism.

For a lot of careers, AI’s impact is likely to be more indirect. I heard this analogy while listening to this six-part podcast from The Atlantic: AI is probably not going to have a lot of impact on how a toothpaste factory makes and puts toothpaste into tubes, but it will change the way that company handles accounting, human resources, maybe distribution and advertising, and so forth. I think there are a lot of careers like that.

I only had a few students researching careers in education– which is surprising because EMU comes out of the Normal School tradition, and we certainly used to have a lot more K-12 education majors than we do now. The two students who come to mind right now were researching elementary education and art education, and both of them argued AI can help but not replace teachers or the curriculum for lots of different reasons. This squares with what I’ve read elsewhere and in this short Forbes article as well: jobs in “teaching, especially in nuanced fields like philosophy or early education” and other jobs that “rely on emotional intelligence and adaptability, which AI struggles to replicate,” are less likely to be replaced by AI anytime soon.

Don’t get me wrong: besides the fact that no one knows what is going to happen with AI in the next few years (that’s what makes predicting the future of AI so much fun– quite literally anything might be true!), AI has already impacted and altered how we teach and learn things. As I discussed in my CCCCs talk, the introduction of personal computers and the internet also changed how we practice and teach writing. As I’ve written about a lot here lately, if the goal of a writing class is to have students use AI as an aid (or not at all) in their learning and process, then teachers need to teach differently than they did before the rise of AI. And of course teachers (and everyone else) are going to have to keep adapting as AI keeps evolving.

But when I wonder about the current and near-future threats to my career of choice, higher education, I think about falling enrollments, declining funding from the state, the insane Trump/Musk cuts to research institutions, deporting international students, axing DEI initiatives and other programs meant to help at-risk students, and the growing distrust of expertise and science. I don’t think about professors being replaced or made irrelevant because of AI.

4C25: My Talk in Two Parts, and “Thoughts”

I am home from the 2025 Conference for College Composition and Communication, having left directly after my 9:30 am one-man-show panel for an uneventful drive home. I actually had a good time, but it will still probably be the last CCCCs for me. Probably.

Click this link if you want to just skip to my overall conference thoughts, but here’s the whole talk script with slides:

The first part of the original title, “Echoes of the Past,” was just my lame effort at having something to do with the conference theme, so disregard that entirely. This has nothing to do with sound. The first part of my talk is the part after the colon, “Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction,” which I will get to in a moment, and it does connect to the second title, “The Importance of Paying Attention To, Rather Than Resisting, AI.” It isn’t exactly what I had proposed to talk about, but I hope it’ll make sense.

So, the first part: I have always been interested in the history of emerging technologies, especially technologies that were once new and disruptive but became naturalized and are now seen not as technology at all but just as standard practice. There are lots of reasons why I think this is interesting, one of which is what these once-new and disruptive technologies can tell us now about emerging writing technologies. History doesn’t repeat, but it does rhyme, and history prepares us for whatever is coming next.

For example, I published an essay a long time ago about the impact of chalkboards in 19th-century education, and I’ve presented at the CCCCs about how changes in pens were disruptive and changed teaching practices.  I wrote a book about MOOCs where I argued they were not new but a continuation of the long history of distance education. As a part of that project, I wrote about the history of correspondence courses in higher education, which emerged in the late 19th century. Correspondence courses led to radio and television courses, which led to the first generation of online courses, MOOCs, and online courses as we know them now and post-Covid. Though sometimes emerging and disruptive technologies are not adopted. Experiments in teaching by radio and television didn’t continue, and while there are still a lot of MOOCs, they don’t have much to do with higher education anymore.

The same dynamic happened with the emergence of computer technology in the teaching of writing beginning in the late ’70s and early ’80s, and that even included a discussion of Artificial Intelligence– sort of. In the course of poking around and doing some lazy database searches, I stumbled across the first article in the first issue– a newsletter at the time– of what would become the journal Computers and Composition, a short piece by Hugh Burns called “A Note on Composition and Artificial Intelligence.”

Incidentally, this is what it looks like. I have not seen the actual physical print version of this article, but the PDF looks like it might have been typed and photocopied. Anyway, this was published in 1983, a time when AI researchers were interested in the development of “expert systems,” which worked with various programming rules and logic to simulate the way humans tend to think, at least in a rudimentary way. 

Incidentally, and just in case we don’t all know this, AI is not remotely new: there was a lot of enthusiasm and progress from the late 1950s through the 1970s, and then a resurgence in the 1980s with expert systems.

In this article, Burns, who wrote one of the first dissertations about the use of computers to teach writing, discusses the relevance of the research in the field of artificial intelligence and natural language processing in the development of Computer Aided Instruction, or CAI, which is an example of the kind of “expert system” applications of the time. “I, for one,” Burns wrote, “believe composition teachers can use the emerging research in artificial intelligence to define the best features of a writer’s consciousness and to design quality computer-assisted instruction – and other writing instruction – accordingly” (4). 

If folks nowadays remember anything at all about CAI, it’s probably “drill and kill” programs for practicing things like sentence combining, grammar skills, spelling, quizzes, and so forth. But what Burns was talking about was a program called TOPOI, which walked users through a series of invention questions based on Tagmemic and Aristotelian rhetoric.

Here’s what the interface looked like from a conference presentation Burns gave in 1980. As you can see, the program basically simulates the kind of conversation a student might have with a not-very-convincing human. 

There were several similar prompting, editing, and revision tools at the time. One was Writer’s Workbench, an editing program developed by Bell Labs and initially meant as a tool for technical writers at the company. It was adopted for writing instruction at a few colleges and universities, and John T. Day wrote about St. Olaf College’s use of Writer’s Workbench in Computers and Composition in 1988 in his article “Writer’s Workbench: A Useful Aid, but not a Cure-All.” As the title of Day’s article suggests, reviews of Writer’s Workbench were mixed. But I don’t want to get into all the details Day discusses here. Instead, what I wanted to share is Day’s faux epigraph.

I think this kind of sums up a lot of the profession’s feelings about the writing technologies that started appearing in classrooms– both K-12 and in higher education– as a result of the introduction of personal computers in the early 1980s. CAI tools never really caught on, but plenty of other software did, most notably word processing, and then networked computers, this new thing “the internet,” and then the World Wide Web. All of these technologies were surprisingly polarizing among English teachers at the time. And as an English major in the mid-1980s who also became interested in personal computers and then the internet and then the web, I was “an enthusiast.”

From around the late 1970s and continuing well into the mid-1990s, there were hundreds of articles and presentations in major publications in composition and English studies like Burns’ and Day’s pieces, about the enthusiasms and skepticisms of using computers for teaching and practicing writing. Because it was all so new and most folks in English studies knew even less about computers than they do now, a lot of that scholarship strikes me now as simplistic. Much of what appeared in Computers and Composition in its first few years was teaching anecdotes, as in “I had students use word processing in my class and this is what happened.” Many articles were trying to compare writing with and without computers, writing with a word processor or by hand, how students of different types (elementary/secondary, basic writers, writers with physical disabilities, skilled writers, etc.) were harmed or helped with computers, and so forth.  

But along with this kind of “should you/shouldn’t you write with computers” theme, a lot of the scholarship in this era raised questions that have continued with every other emerging and contentious technology associated with writing, including, of course, AI: questions about authorship, costs (because personal computers were expensive), the difficulty of learning and also teaching the software, cheating, originality, “humanness,” and so on. This scholarship was happening at a time when using computers to practice or teach writing was still perceived as a choice– that is, it was possible to refuse and reject computers. I am assuming that the comparison I’m making between this scholarship and the discussions now about AI is obvious.

So I think it’s worth re-examining some of this work where writers were expressing enthusiasms, skepticisms, and concerns about word processing software and personal computers and comparing it to the moment we are in with AI in the form of ChatGPT, Gemini, Claude, and so forth. What will scholars 30 years from now think about the scholarship and discourse around Artificial Intelligence that is in the air currently? 

Anyway, that was going to be the whole talk from me and with a lot more detail, but that project for me is on hold, at least for now. Instead, I want to pivot to the second part of my talk, “The Importance of Paying Attention To, Rather Than Resisting, AI.” 

I say “Rather Than Resisting” or refusing AI in reference to Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes’ website “Refusing Generative AI in Writing Studies,” but also in reference to articles such as Melanie Dusseau’s “Burn It Down: A License for AI Resistance,” which was a column in Inside Higher Ed in November 2024, and other calls to refuse/resist using AI. “The Importance of Paying Attention To” is my reference to Cynthia Selfe’s “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which was first presented as her CCCC chair’s address in 1998 (published in 1999) and which was later expanded into the book Technology and Literacy in the Twenty-first Century.

If Hugh Burns’ 1983 commentary in the first issue of Computers and Composition serves for me as the beginning of this not-so-long-ago history, when personal computers were not something everyone had or used and when they were still contentious and emerging tools for writing instruction and practice, then Selfe’s CCCCs address/article/book represents the point where computers (along with all things internet) were no longer optional for writing instruction and practice. And it was time for English teachers to wake up and pay attention to that.

And before I get too far, I agree with eight out of the ten points on the “Refusing Generative AI in Writing Studies” website, broadly speaking. I think these are points that most people in the field nowadays would agree with, actually. 

But here’s where I disagree. I don’t want to go into this today, but the environmental impact of the proliferation of data centers is not limited to AI. And when it comes to this last bullet point, no, I don’t think “refusal” or resistance are principled or pragmatic responses to AI. Instead, I think our field needs to engage with and pay attention to AI.

Now, some might argue that I’m taking the call to refuse/resist AI too literally and that the kind of engagement I’m advocating is not at odds with refusal.

I disagree. Word choices and their definitions matter. Refusing means being unwilling to do something. Paying attention means to listen to and to think about something. For much the same reasons Selfe spoke about 27 years ago, there are perils to not paying attention to technology in writing classrooms. I believe our field needs to pay attention to AI by researching it, teaching with it, using it in our own writing, goofing around with it, and encouraging our students to do the same. And to be clear: studying AI is not the same as endorsing AI.

Selfe’s opening paragraph is a kidding/not kidding assessment of the CCCCs community’s feelings about technology and the community’s refusal to engage with it. She says many members of the CCCCs over the years have shared some of the best ideas we have from any discipline about teaching writing, but it’s a community that has also been largely uninterested in the focus of Selfe’s work, the use of computers to teach composition. She said she knew bringing up the topic in a keynote at the CCCCs was “guaranteed to inspire glazed eyes and complete indifference in that portion of the CCCC membership which does not immediately sink into snooze mode.” She said people in the CCCCs community saw computers as disconnected from their humanitarian concerns and as a distraction from the real work of teaching literacy.

It was still possible in a lot of English teachers’ minds to separate computers from the teaching of writing– at least in the sense that most CCCCs members did not think about the implications of computers in their classrooms. Selfe says “I think [this belief] informs our actions within our home departments, where we generally continue to allocate the responsibility of technology decisions … to a single faculty or staff member who doesn’t mind wrestling with computers or the thorny, unpleasant issues that can be associated with their use.”

Let me stop for a moment to note that in 1998, I was there. I attended and presented at that CCCCs in Chicago, and while I can’t recall if I saw Selfe’s address in person (I think I did), I definitely remember the times.

After finishing my PhD in 1996, I was hired by Southern Oregon University as their English department’s first “computers and writing” specialist. At the 1998 convention, I met up with my future colleagues at EMU because I had recently accepted the position I currently have, where I was once again hired as a computers and writing specialist. At both SOU and EMU, I had colleagues– you will not be surprised to learn these tended to be senior colleagues– who questioned why there was any need to add someone like me to the faculty. In some ways, it was similar to the complaints I’ve seen on social media about faculty searches involving AI specialists in writing studies and related fields.

Anyway, Selfe argues that in hiring specialists, English departments relieved the rest of the faculty of any responsibility to engage with computer technology. It enabled a continued belief that computers are simply “tool[s] that individual faculty members can use or ignore in their classrooms as they choose, but also one that the profession, as a collective whole–and with just a few notable exceptions–need not address too systematically.” Instead, she argued that what people in our profession needed to do was pay attention to these issues, even if we would really rather refuse to do so: “I believe composition studies faculty have a much larger and more complicated obligation to fulfill–that of trying to understand and make sense of, to pay attention to, how technology is now inextricably linked to literacy and literacy education in this country. As a part of this obligation, I suggest that we have some rather unpleasant facts to face about our own professional behavior and involvement.” She goes on a couple of paragraphs later to say, in all italics, “As composition teachers, deciding whether or not to use technology in our classes is simply not the point–we have to pay attention to technology.”

Again, I’m guessing the connection to Selfe’s call then to pay attention to computer technology and my call now to pay attention to AI is pretty obvious.

The specific case example Selfe discusses in detail in her address is a Clinton-Gore era report called Getting America’s Children Ready for the Twenty-First Century, which was about that administration’s efforts to promote technological literacy in education, particularly in K-12 schools. The initiative spent millions on computer equipment, an amount of money that dwarfed the spending on literacy programs. As I recall those times, the main problem with this initiative was that lots of money was spent to put personal computers into schools, but very little was spent on how to use those computers in classrooms. Selfe said, “Moreover, in a curious way, neither the CCCC, nor the NCTE, the MLA, nor the IRA–as far as I can tell–have ever published a single word about our own professional stance on this particular nationwide technology project: not one statement about how we think such literacy monies should be spent in English composition programs; not one statement about what kinds of literacy and technology efforts should be funded in connection with this project or how excellence should be gauged in these efforts; not one statement about the serious need for professional development and support for teachers that must be addressed within context of this particular national literacy project.”

Selfe closes with a call for action and a need for our field and profession to recognize technology as important to the work we all do around literacy. I’ve cherry-picked a couple of quotes here to share at the end. Again, by “technology,” Selfe more or less meant PCs, networked computers, and the web– tools we all now take for granted. But also again, every single one of these calls applies to AI as well.

Now, I think the CCCCs community and the discipline as a whole have moved in the direction Selfe was urging in her CCCCs address. Unlike the way things were in the 1990s, I think there is widespread interest in the CCCC community in studying the connections between technologies and literacy. Unlike then, both MLA and CCCCs (and presumably other parts of NCTE) have been engaged and paying attention. There is a joint CCCC-MLA task force that has issued statements and guidance on AI literacy, along with a series of working papers, all things Selfe was calling for back then. Judging from this year’s program and the few presentations I have been able to attend, it seems like a lot more of us are interested in engaging and paying attention to AI rather than refusing it. 

At the same time, there is an echo–okay, one sound reference– of the scholarship in the early era of personal computers. A lot of the scholarship about AI now is based on teachers’ experiences of experimenting with it in their own classes. And we’re still revisiting a lot of the same questions regarding the extent to which we should be teaching students how to use AI, the issues of authenticity and humanness, of cheating, and so forth. History doesn’t repeat, but it does rhyme.

Let me close by saying I have no idea where we’re going to end up with AI. This fall, I’m planning on teaching a special topics course called Writing, Rhetoric, and AI, and while I have some ideas about what we’re going to do, I’m hesitant about committing too much to a plan now since all of this could be entirely different in a few months. There’s still the possibility of generative AI becoming artificial general intelligence and that might have a dramatic impact on all of our careers and beyond. Trump and shadow president Elon Musk would like nothing better than to replace most people who work for the federal government with this sort of AI. And of course, there is also the existential albeit science fiction-esque possibility of an AI more intelligent than humans enslaving us.

But at least I think that we’re doing a much better job of paying attention to technology nowadays.


“Thoughts”

The first time I attended and presented at the CCCCs was in 1995. It was in Washington, D.C., and I gave a talk that was about my dissertation proposal. I don’t remember all the details, but I probably drove with other grad students from Bowling Green and split a hotel room, maybe with Bill Hart-Davidson or Mick Doherty or someone like that. I remember going to the big publisher party sponsored by Bedford-St. Martin’s (or whatever they were called then) which was held that year at the National Press Club, where they filled us with free cocktails and enough heavy hors d’oeuvres to serve as a meal.

For me, the event has been going downhill for a while. The last time I went to the CCCCs in person was in 2019– pre-Covid, of course– in Pittsburgh. I was on a panel of three scheduled for 8:30 am Friday morning. One of the people on the panel was a no-show, and the other panelist was Alex Reid; one person showed up to see what we had to say– though at least that one person was John Gallagher. Alex and I went out to breakfast, and I kind of wandered around the conference after that, uninterested in anything on the program. I was bored and bummed out. I had driven, so I packed up and left Friday night, a day earlier than I planned.

And don’t even get me started on how badly the CCCCs did at holding online versions of the conference during Covid.

So I was feeling pretty “done” with the whole thing. But I decided to put in an individual proposal this year because I was hoping it would be the beginning of another project to justify a sabbatical next year, and I thought going to one more CCCCs 30 years after my first one rounded things out well. Plus it was a chance to visit Baltimore and to take a solo road trip.

This year, the CCCCs/NCTE leadership changed the format for individual proposals, something I didn’t figure out until after I was accepted. Instead of creating panels made up of three or four individual proposals– which is what the CCCCs had always done before, and what every other academic conference I have ever attended does with individual proposals– they decided that individuals would get a 30-minute solo session. To make matters even worse, my time slot was 9:30 am on Saturday, which is the day most people are traveling back home.

Oh, also: my sabbatical/research release time proposal got turned down, meaning my motivation for doing this work at all dropped off considerably. I thought about bailing out right up to the morning I left. But I decided to go through with it because I was also going to Richmond to visit my friend Dennis, I still wanted to see Baltimore, and I still liked the idea of going one more time, 30 years later.

Remarkably, I had a very good time.

It wasn’t like what I think of as “the good old days,” of course.  I guess there were some publisher parties, but I missed out on those. I did run into people who I know and had some nice chats in the hallways of the enormous Baltimore convention center, but I mostly kept to myself, which was actually kind of nice. My “conference day” was Friday and I saw a couple of okay to pretty good panels about AI things– everything seemed to be about AI this year. I got a chance to look around the Inner Harbor on a cold and rainy day, and I got in half-price to the National Aquarium. And amazingly, I actually had a pretty decent-sized crowd (for me) at my Saturday morning talk. Honestly, I haven’t had as good of a CCCCs experience in years.

But now I’m done– probably.

I’m still annoyed with the (IMO) many, many failings of the organization, and while I did have a good solo presenting experience, I still would have preferred being on a panel with others. But honestly, the main reason I’m done with the CCCCs (and other conferences) is not the conference itself but me. This conference made it very clear: essentially, I’ve aged out.

When I was a grad student/early career professor, conferences were a big deal. I learned a lot, I was able to do a lot of professional/social networking, and I got my start as a scholar. But at this point, where I am as promoted and as tenured as I’m ever going to be and where I’m not nearly as interested in furthering my career as I am retiring from it, I don’t get much out of all that anymore. And all of the people I used to meet up with and/or room with 10 or so years ago have quit going to the CCCCs because they became administrators, because they retired or died, or because they too just decided it was no longer necessary or worth it.

So that’s it. Probably. I have been saying for a while now that I want to shift from writing/reading/thinking about academic things to other non-academic things. I started my academic career as a fiction writer in an MFA program, and I’ve thought for a while now about returning to that. I’ve had a bit of luck publishing commentaries, and of course, I’ll keep blogging.

Then again, I feel like I got a good response to my presentation, so maybe I will stay with that project and try to apply for a sabbatical again. And after all, the CCCCs is going to be in Cleveland next year and Milwaukee the year after that….

Teaching this Fall (TBA): Writing, Rhetoric, and AI

The two big things on my mind right now are finishing this semester (I am well into the major grading portion of the term in all three of my classes) and preparing for the CCCCs road trip that will begin next week. I’m sure I’ll write more on the CCCCs/road trip after I’m back.

But this morning, I thought I’d write a post about a course I’m hoping to teach this fall, “Writing, Rhetoric, and AI.” I’ve set up that page on my site with a brief description of the course– at least as I’m imagining it now. “Topics in” courses like this always begin with just a sketch of a plan, but given the twists and turns and speed of developments in AI, I’ve learned not to commit to a plan too early.

For example: the first time I tried to teach anything about AI was in a 300-level digital writing course I taught in fall 2022. I came up with an AI assignment based in part on an online presentation by Christine Photinos and Julie Wilhelm at the Computers and Writing Conference, and also on Paul Fyfe’s article “How to Cheat on Your Final Paper: Assigning AI for Student Writing.” My plan at the beginning of that semester was to have students use the same AI tool these writers were talking about, which was OpenAI’s GPT-2. By the time we were starting to work on the AI writing assignment for that class, ChatGPT had been released. So plans changed, English teachers started freaking out, etc.

Anyway, the first thing that needs to happen is the class needs to “make”– that is, get enough students to justify it running at all. But right now, I’m cautiously optimistic that it is going to happen. The course will be on Canvas and behind a firewall, but my plan for now is to eventually post assignments and readings lists and the like here. Once I figure out what we’re going to do.

Now is a Good Time to be at a “Third Tier” University

The New York Times ran an editorial a couple of weekends ago called “The Authoritarian Endgame on Higher Education,” whose first sentence was “When a political leader wants to move a democracy toward a more authoritarian form of government, he often sets out to undermine independent sources of information and accountability.” The editorial goes on to describe the hundreds of millions of dollars of cuts in grants, and while the cuts are especially large and newsworthy at Johns Hopkins ($800 million) and Columbia ($400 million), they’re happening in lots of smaller amounts at lots of research universities. Full disclosure: my son is a post-doc at Yale, and while his lab has not been severely impacted by these cuts (yet), it remains a looming problem for him and his colleagues.

The NYT’s editorial board is correct: Trump is following the playbook of other modern authoritarian leaders (Putin, Orban in Hungary, Modi in India, Erdogan in Turkey, etc.) and is trying to weaken universities. Trump and shadow president Musk are cutting off funding from the National Institutes of Health (and other similar federal agencies) to research universities not so much because of waste and fraud or a desire to end DEI initiatives, and they’re destroying the rest of the federal government not because they want to save money. They’re doing it to consolidate power. They are trying to revamp the U.S. into an authoritarian system run by big tech and billionaires. I wish MSM would remind people more often that this is what is going on right now.

Then last week, Princeton President Christopher Eisgruber wrote a piece published in The Atlantic in which he insisted that now was the time for universities like Columbia to stand up to the Trump administration in the name of academic freedom. He quotes Joan Scott, a longtime leader in the American Association of University Professors, who said “Even during the McCarthy period in the United States, this was not done.” The day after The Atlantic ran Eisgruber’s column, Columbia more or less caved and appeared ready to give Trump what he wanted.

And of course, Trump signed an executive order to close down the Department of Education– which is not something that Trump can do without Congress, but never mind the details of the law.

This is all very bad for all kinds of reasons that go well beyond the impact on these institutions. The money at stake is grant funding from agencies like the National Institutes of Health, typically for the kind of basic research that the private sector doesn’t do– but, of course, the kind the private sector profits from greatly. Just about every medical breakthrough you can think of over the last 75 years has been a result of this partnership between the feds and research universities, but to use one example close to my own heart (and the rest of my body) right now: take Zepbound. One of the origins of these current weight loss drugs was basic research the NIH and other federal agencies supported back in the 80s and 90s on the venom of Gila monsters– the kind of research MSM and politicians frequently mock: “why are we spending so much money to research lizards?” Because that’s where the discoveries are made that eventually lead to all sorts of surprising benefits.

But there is one detail about the way this story is being reported that bothers me. MSM puts all universities into the same bucket when the reality is much more complicated than that. The universities most impacted by Trump’s actions are very different kinds of institutions than the ones where I’ve spent my career.

In my book about MOOCs (More Than A Moment), I wrote a bit about the disparity between different tiers of universities, and how MOOCs (potentially) made the distance between higher ed’s haves and have-nots even greater. I frequently referenced the book A Perfect Mess: The Unlikely Ascendancy of American Higher Education by David F. Labaree. If you too are interested in the history of higher education (and who isn’t?), I’d highly recommend it. Among other things, Labaree describes the unofficial but well-understood hierarchy of different institutions. The bottom, fourth tier of this pyramid consists of community colleges, to which I would also add proprietary schools and largely online universities. Roughly speaking, there are about 1,000 schools in this category. Labaree says that the third tier consists of universities that mostly began as “normal schools” in the 19th century, though I would add to that tier lots of small/private/often religious/not elite colleges, along with most other regional institutions. There are probably close to 1,500 institutions in this category, and I think it’s fair to say most four-year colleges and universities in the US are in this group. EMU, which began as the Michigan State Normal School, is smack-dab in the middle of this tier.

The second tier and the top tier are probably the easiest for most non-academic types to understand because these are the only kinds of places that MSM routinely reports on as being “higher education.” Roughly speaking, these two tiers are made up of the top 150 or so national universities in the US News and World Report rankings, with the top fifty or so being the tippy-top first tier. By the way, EMU is “tied” as the 377th school on the list.

Now, those universities at the tippy-top that receive a lot of NIH and other federal grants– Columbia, Johns Hopkins, Michigan, Yale, etc.– have a serious problem because those grants are a major revenue stream. But for the rest of us in higher ed, especially on the third tier? Well, I was in a meeting just the other day where one of my colleagues asked an administrator when EMU could expect to see a cut in federal funding. This administrator, who seemed a little surprised at the question, pointed out that about 25% of our funding comes from state appropriations, and the rest of it comes from tuition. The amount of direct federal funding we receive is negligible.

And herein lies the Trump administration’s challenge in taking over education in this country, thankfully. Unlike in most other countries, where schooling is more centralized, public education in the United States is quite decentralized and is mostly controlled by states and localities. As this piece from Inside Higher Ed reminds us, the main role of the federal government in higher education (besides collecting data about higher education nationwide, working with accreditors, and overseeing students’ civil rights) is to run the student loan and Pell Grant programs. The Trump administration has repeatedly said it wants these programs to continue even if it succeeds in eliminating the Department of Education. Not that I completely believe that– Trump/Musk might want to cut Pell Grants, and they are trying to roll back Biden’s moves on loan forgiveness. But given how many students (and their parents) depend on these programs, including MAGA voters, I don’t see these programs going away.

In other words, now is a good time to be at a third-tier university.

Now, that New York Times editorial does have one paragraph where they acknowledge this difference between the haves and have-nots:

We understand why many Americans don’t trust higher education and feel they have little stake in it. Elite universities can come off as privileged playgrounds for young people seeking advantages only for themselves. Less elite schools, including community colleges, often have high dropout rates, leaving their students with the onerous combination of debt and no degree. Throughout higher education, faculty members can seem out of touch, with political views that skew far to the left.

I don’t know how much Americans do or don’t “trust” higher education, but the main reason why EMU and similar universities have a much higher dropout rate is that we admit students more selective universities don’t. I don’t remember the details, but I heard a story years ago about an administrator in charge of admissions at EMU. When he was asked why our graduation rate is around 50% while the University of Michigan’s rate is more like 93%, he responded, “Why isn’t U of M’s graduation rate 100%? They only admit students they know will graduate.” In contrast, EMU (and most other universities in the third tier) takes a lot of chances and admits almost everyone who applies.

I’m biased of course, but I think a more accurate way to frame the role of third-tier/regional universities is as institutions of opportunity. We give folks a chance at a college degree who otherwise would have few options. We aren’t a school that helps upper-middle-class kids stay that way. We’re a school that helps working class/working poor students improve their lives, to be one of the first (if not the first) people in their families to graduate from college. Sure, a lot of the students we admit don’t make it for all kinds of different reasons. But I think the benefits we provide to the ones who succeed in graduating outweigh the problems of admitting students who are just not prepared to go to college. Though I’ll admit it’s a close call.

Anyway, I don’t know what those of us working on the lower levels of the pyramid can do to help those at the top, if there’s anything we can do. That’s the frustration of everyone against Trump right now, right? What can we do?

Cancún, Winter Break 2025

A few months ago, we had no plans for Winter (aka Spring) Break. I had suggested to Annette (who is the one who manages the finances in our household, and for good reason) that maybe it’d be nice to at least get out of town for a long weekend to someplace warmer. Wisely, Annette pointed out that we just bought a new house and we are going on a big trip to Europe this summer, so no, we don’t have the money. Okay, fine.

Then we got a check from the IRS for $2,500 because (we think) it turns out we were eligible for COVID relief money from the feds we never claimed. Thanks, Biden. “C’mon, found money!” I said, and Annette could not disagree.

We considered a couple options, but we landed on Cancún for two reasons. First, we’ve talked for years about checking out an “all-inclusive” resort option. We’ve been on four cruises now, and I for one am undecided about them: there’s stuff I like, there’s stuff I don’t like. But we talked about how an all-inclusive resort might be interesting to try because we imagined it to be like a cruise that didn’t go anywhere. Second, while Annette visited Cancún a couple of times in the late 80s and early 90s, I’ve never been anywhere in Mexico, so what the heck?

Would I do it again? Well, like cruises, there are good things and not good things, so I don’t know.

My Peter Elbow Story

Peter Elbow died earlier this month at the age of 89. The New York Times ran an obituary on February 27 (a gift article) that did a reasonably good job of capturing his importance in the field of composition and rhetoric. I would not agree with the Times that Elbow’s signature innovation, freewriting, is a “touchy-feely” technique, but other than that, I think they get it about right. I can think of plenty of other key scholars and forces in the field, but I can’t think of anyone more important than Elbow.

Elbow was an active scholar and regular presence at the Conference on College Composition and Communication well into the 2000s. I remember seeing him in the halls going from event to event, and I saw him speak several times, including at a huge event where he and Wayne Booth presented and then discussed their talks with each other.

A lot of people in the field have one story or another about meeting Peter Elbow; here’s my story (which I shared on Facebook earlier this month when I first learned of his passing):

When I was a junior in high school, in 1982-83 and in Cedar Falls, Iowa, I participated in some kind of state-wide or county-wide writing event/contest. This was a long time ago and I don’t remember any of the details about how it worked or what I wrote to participate in it, but I’m pretty sure it was an essay event/contest of some sort– as opposed to a fiction/poetry contest. It was held on the campus of the University of Northern Iowa, which is in Cedar Falls, so because it was local, a bunch of people from my high school and other local schools and beyond showed up. My recollection is that students participated in a version of a peer review workshop.

This event was also a contest, and there was a banquet everyone went to where there were “winners” of some sort. I definitely remember I was not one of them. The banquet was a buffet, and as I went through the line, there was this old guy (well, he would have been not quite 50 at this point)– perfectly polite and nice, with a wandering eye– getting something out of a chafing dish right next to me. I don’t remember the details, but I think he asked me what I thought of this whole peer review thing we did, and I’m sure I told him it was fun, because it was.

So then it turns out that this guy was there to give some kind of speech to all of the kids and teachers and other adults at this thing– well, really, it was a speech for the teachers and adults, and the kids were just there. I don’t remember how many people were there, but I’m guessing maybe 100-200. I don’t remember anything Elbow talked about, and I didn’t think a lot about it afterwards. But a few years later, when I was first introduced to Elbow’s work in the comp/rhet theory class I took in my MFA program, I somehow figured out that I had met that guy years before and didn’t realize it at the time.

I can’t say I’ve read a ton of his writing, but what I have read I have found both smart and inspirational. It’s hard for me to think of anyone else who has had as much of an influence on shaping the field and the kind of work I do. May his memory be a blessing to his friends and family.

I’m Still Not Using AI Detection Software; However….

Back in mid-February, Anna Mills wrote a Substack post called “Why I’m using AI detection after all, alongside many other strategies.” Mills, who teaches at Cañada College in Silicon Valley, has written a lot about teaching and AI, and she was a member of the MLA-CCCC Joint Task Force on Writing and AI. That group recommended that teachers use AI detection tools with extreme caution, if at all.

What changed her mind? Well, it sounds like she had had enough:

I argued against use of AI detection in college classrooms for two years, but my perspective has shifted. I ran into the limits of my current approaches last semester, when a first-year writing student persisted in submitting work that was clearly not his own, presenting document history that showed him typing the work (maybe he typed it and maybe he used an autotyper). He only admitted to the AI use and apologized for wasting my time when he realized that I was not going to give him credit and that if he initiated an appeals process, the college would run his writing through detection software.

I haven’t had this kind of student encounter over AI cheating, but it’s not hard for me to imagine this scenario. It might be the last straw for me too. And as I suspect is the case with Mills, I’m getting sick of seeing this kind of dumb AI cheating.

Last November, I wrote here about a “teachable moment” I had when an unusually high number of freshman comp students dumbly cheated with AI. The short version: for the first short assignment (2 or 3 pages), students are supposed to explain why they are interested in the topic they’ve selected for their research, and to describe what prewriting and brainstorming activities they did to come up with their working thesis. It’s not supposed to be about why they think their thesis is right; it’s supposed to be a reflection on the process they used to come up with a thesis that they know will change with research. It’s a “pass/revise” assignment I’ve given for years, and I always have a few students who misunderstand and end up writing something kind of like a research paper with no research. I make them revise. But last fall, a lot more of my students did the assignment wrong because they blindly trusted what ChatGPT told them. I met with these students, reminded them what the assignment actually was, and told them to remember that AI cannot write an essay that explains what they think.

I’m teaching another couple of sections of freshman composition this semester, and students just finished that first assignment. I warned them about the AI mistakes students made last semester, and I repeated more often than before that the assignment is about their process and is not a research paper. The result? Well, I had fewer students trying to pass off something written by AI, but I still had a few.

My approach to dealing with AI cheating is the same as it has been ever since ChatGPT appeared: I focus on teaching writing as a process, and I require students to use Google Docs so I can use the version history to see how they put together their essays. I still don’t want to use Turnitin, and to be fair, Mills has not completely gone all-in with AI detection. Far from it. She sees Turnitin as an additional tool to use along with solid process writing pedagogy. Mills also shares some interesting resources about research into AI detection software and the difficulty of accurately spotting AI writing. Totally worth checking her post out.
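A side note for the technically curious: the version history I skim in the Google Docs interface can also be pulled programmatically through Google’s Drive API. Here’s a rough sketch in Python– not my actual workflow, just an illustration, and it assumes you’ve already set up OAuth credentials and saved a token file (the file name and document ID below are placeholders)– that lists when a document was revised and by whom:

```python
# Rough sketch: list a Google Doc's revision history with the Drive API (v3).
# Assumes OAuth credentials with a Drive read scope have already been set up
# and saved to token.json (a placeholder name); see Google's Python quickstart.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file("token.json")
drive = build("drive", "v3", credentials=creds)

DOC_ID = "YOUR_DOC_ID"  # placeholder: the long ID in the document's URL

# Ask only for the fields we care about: when each revision was saved, by whom.
response = drive.revisions().list(
    fileId=DOC_ID,
    fields="revisions(id,modifiedTime,lastModifyingUser/displayName)",
).execute()

for rev in response.get("revisions", []):
    who = rev.get("lastModifyingUser", {}).get("displayName", "unknown")
    print(rev["modifiedTime"], who)
```

The pattern is the point: an essay drafted over a few weeks leaves a long trail of timestamps, while one pasted in wholesale shows up as a single burst of “edits” right before the deadline.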

I do disagree with her about how difficult it is to spot AI writing. Sure, it’s hard to figure out if a chunk of writing came from a human or an AI if there’s no context. But in writing classes like freshman composition, I see A LOT of my students’ writing (not just in final drafts), and because these are classes of 25 or so students, I get to know them as writers and people fairly well. So when a struggling student suddenly produces a piece of writing that is perfect grammatically and that sounds like a robot, I get suspicious and I meet with the student. So far, they have all confessed, more or less, and I’ve given them a second chance. In the fall, I had a student who cheated a second time; I failed them on the spot. If I had a student who persisted like the one Mills describes, I’m not quite sure what I would do.

But like I said, I too am starting to get annoyed that students keep using AI like this.

When ChatGPT first became a thing in late 2022 and everyone was all freaked out about everyone cheating, I wrote about and gave a couple of talks on how plagiarism has been a problem in writing classes literally forever. The vast majority of examples of plagiarism I see are still a result of students not knowing how to cite sources (or just being too lazy to do it), and it’s clear that most students don’t want to cheat and that they see the point of needing to do the work themselves so they might learn something.

But it is different. Before ChatGPT, I had to deal with a blatant and intentional case of plagiarism once every couple of years. For the last year or so, I’ve had to deal with some examples of blatant AI plagiarism in pretty much every section of first-year writing I teach. It’s frustrating, especially since I like to think that one of the benefits of teaching students how to use AI is to discourage them from cheating with it.