4C25: My Talk in Two Parts, and “Thoughts”

I am home from the 2025 Conference on College Composition and Communication, after leaving directly after my 9:30 am one-man-show panel and an uneventful drive home. I actually had a good time, but it will still probably be the last CCCCs for me. Probably.

Click this link if you want to just skip to my overall conference thoughts, but here’s the whole talk script with slides:

The first part of the original title, “Echoes of the Past,” was just my lame effort at having something to do with the conference theme, so disregard that entirely. This has nothing to do with sound. The first part of my talk is the part after the colon, “Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction,” and that is something I will get to in a moment, and that does connect to the second title, 

“The Importance of Paying Attention To, Rather Than Resisting, AI.” It isn’t exactly what I had proposed to talk about, but I hope it’ll make sense.

So, the first part: I have always been interested in the history of emerging technologies, especially technologies that were once new and disruptive but became naturalized and are now seen not as technology at all but just as standard practice. There are lots of reasons why I think this is interesting, one of which is what these once-new and disruptive technologies can tell us now about emerging writing technologies. History doesn't repeat, but it does rhyme, and studying that history prepares us for whatever is coming next.

For example, I published an essay a long time ago about the impact of chalkboards in 19th-century education, and I've presented at the CCCCs about how changes in pens were disruptive and changed teaching practices. I wrote a book about MOOCs where I argued they were not new but a continuation of the long history of distance education. As a part of that project, I wrote about the history of correspondence courses in higher education, which emerged in the late 19th century. Correspondence courses led to radio and television courses, which led to the first generation of online courses, MOOCs, and online courses as we know them now, post-Covid. Though sometimes emerging and disruptive technologies are not adopted: experiments in teaching by radio and television didn't continue, and while there are still a lot of MOOCs, they don't have much to do with higher education anymore.

The same dynamic happened with the emergence of computer technology in the teaching of writing beginning in the late ’70s and early ’80s, and that even included a discussion of Artificial Intelligence– sort of. In the course of poking around and doing some lazy database searches, I stumbled across the first article in the first issue– a newsletter at the time– of what would become the journal Computers and Composition, a short piece by Hugh Burns called “A Note on Composition and Artificial Intelligence.”

Incidentally, this is what it looks like. I have not seen the actual physical print version of this article, but the PDF looks like it might have been typed and photocopied. Anyway, this was published in 1983, a time when AI researchers were interested in the development of “expert systems,” which worked with various programming rules and logic to simulate the way humans tend to think, at least in a rudimentary way. 

Incidentally, and just in case we don't all know this: AI is not remotely new. There was a lot of enthusiasm and progress in the field from the late 1950s through the 1970s, and then a resurgence in the 1980s with expert systems.

In this article, Burns, who wrote one of the first dissertations about the use of computers to teach writing, discusses the relevance of research in artificial intelligence and natural language processing to the development of Computer-Aided Instruction, or CAI, which was an example of the kind of "expert system" application of the time. "I, for one," Burns wrote, "believe composition teachers can use the emerging research in artificial intelligence to define the best features of a writer's consciousness and to design quality computer-assisted instruction – and other writing instruction – accordingly" (4).

If folks nowadays remember anything at all about CAI, it's probably "drill and kill" programs for practicing things like sentence combining, grammar skills, spelling, quizzes, and so forth. But what Burns was talking about was a program called Topoi, which walked users through a series of invention questions based on Tagmemic and Aristotelian rhetoric.

Here’s what the interface looked like from a conference presentation Burns gave in 1980. As you can see, the program basically simulates the kind of conversation a student might have with a not-very-convincing human. 

There were several similar prompting, editing, and revision tools at the time. One was Writer’s Workbench, which was an editing program developed by Bell Labs and initially meant as a tool for technical writers at the company. It was adopted for writing instruction at a few colleges and universities, and 

John T. Day wrote about St. Olaf College's use of Writer's Workbench in Computers and Composition in 1988 in his article "Writer's Workbench: A Useful Aid, but not a Cure-All." As the title of Day's article suggests, the reviews of Writer's Workbench were mixed. But I don't want to get into all the details Day discusses here. Instead, what I wanted to share is Day's faux epigraph.

I think this kind of sums up a lot of the profession’s feelings about the writing technologies that started appearing in classrooms– both K-12 and in higher education– as a result of the introduction of personal computers in the early 1980s. CAI tools never really caught on, but plenty of other software did, most notably word processing, and then networked computers, this new thing “the internet,” and then the World Wide Web. All of these technologies were surprisingly polarizing among English teachers at the time. And as an English major in the mid-1980s who also became interested in personal computers and then the internet and then the web, I was “an enthusiast.”

From around the late 1970s and continuing well into the mid-1990s, there were hundreds of articles and presentations in major publications in composition and English studies, like Burns' and Day's pieces, about the enthusiasms and skepticisms of using computers for teaching and practicing writing. Because it was all so new and most folks in English studies knew even less about computers than they do now, a lot of that scholarship strikes me now as simplistic. Much of what appeared in Computers and Composition in its first few years consisted of teaching anecdotes, as in "I had students use word processing in my class and this is what happened." Many articles tried to compare writing with and without computers: writing with a word processor versus by hand, how students of different types (elementary/secondary, basic writers, writers with physical disabilities, skilled writers, etc.) were helped or harmed by computers, and so forth.

But along with this kind of "should you/shouldn't you write with computers" theme, a lot of the scholarship in this era raised questions that have continued with every other emerging and contentious technology associated with writing, including, of course, AI: questions about authorship, the costs (because personal computers were expensive), the difficulty of learning and also teaching the software, cheating, originality, "humanness," and so on. This scholarship was happening at a time when using computers to practice or teach writing was still perceived as a choice– that is, it was possible to refuse and reject computers. I am assuming the comparison between this scholarship and the discussions now about AI is obvious.

So I think it’s worth re-examining some of this work where writers were expressing enthusiasms, skepticisms, and concerns about word processing software and personal computers and comparing it to the moment we are in with AI in the form of ChatGPT, Gemini, Claude, and so forth. What will scholars 30 years from now think about the scholarship and discourse around Artificial Intelligence that is in the air currently? 

Anyway, that was going to be the whole talk from me and with a lot more detail, but that project for me is on hold, at least for now. Instead, I want to pivot to the second part of my talk, “The Importance of Paying Attention To, Rather Than Resisting, AI.” 

I say "Rather Than Resisting" or Refusing AI in reference to Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes' website "Refusing Generative AI in Writing Studies," but also in reference to articles such as Melanie Dusseau's "Burn It Down: A License for AI Resistance," which was a column in Inside Higher Ed in November 2024, and other calls to refuse/resist using AI. "The Importance of Paying Attention To" is my reference to Cynthia Selfe's "Technology and Literacy: A Story about the Perils of Not Paying Attention," which was first presented as her CCCC chair's address in 1998 (published in 1999) and which was later expanded into a book called Technology and Literacy in the Twenty-first Century.

If Hugh Burns’ 1983 commentary in the first issue of Computers and Composition serves for me as the beginning of this not-so-long-ago history, when personal computers were not something everyone had or used and when they were still contentious and emerging tools for writing instruction and practice, then Selfe’s CCCCs address/article/book represents the point where computers (along with all things internet) were no longer optional for writing instruction and practice. And it was time for English teachers to wake up and pay attention to that.

And before I get too far, I agree with eight out of the ten points on the “Refusing Generative AI in Writing Studies” website, broadly speaking. I think these are points that most people in the field nowadays would agree with, actually. 

But here’s where I disagree. I don’t want to go into this today, but the environmental impact of the proliferation of data centers is not limited to AI. And when it comes to this last bullet point, no, I don’t think “refusal” or resistance are principled or pragmatic responses to AI. Instead, I think our field needs to engage with and pay attention to AI.

Now, some might argue that I’m taking the call to refuse/resist AI too literally and that the kind of engagement I’m advocating is not at odds with refusal.

I disagree. Word choices and their definitions matter. Refusing means being unwilling to do something. Paying attention means listening to and thinking about something. For much the same reasons Selfe spoke about 27 years ago, there are perils to not paying attention to technology in writing classrooms. I believe our field needs to pay attention to AI by researching it, teaching with it, using it in our own writing, goofing around with it, and encouraging our students to do the same. And to be clear: studying AI is not the same as endorsing AI.

Selfe's opening paragraph is a kidding/not kidding assessment of the CCCCs community's feelings about technology and the community's refusal to engage with it. She says many members of the CCCCs over the years have shared some of the best ideas we have from any discipline about teaching writing, but it's a community that has also been largely uninterested in the focus of Selfe's work, the use of computers to teach composition. She said she knew bringing up the topic in a keynote at the CCCCs was "guaranteed to inspire glazed eyes and complete indifference in that portion of the CCCC membership which does not immediately sink into snooze mode." She said people in the CCCCs community saw computer technology as disconnected from their humanitarian concerns and as a distraction from the real work of teaching literacy.

It was still possible in a lot of English teachers' minds to separate computers from the teaching of writing– at least in the sense that most CCCCs members did not think about the implications of computers in their classrooms. Selfe says "I think [this belief] informs our actions within our home departments, where we generally continue to allocate the responsibility of technology decisions … to a single faculty or staff member who doesn't mind wrestling with computers or the thorny, unpleasant issues that can be associated with their use."

Let me stop for a moment to note that in 1998, I was there. I attended and presented at that CCCCs in Chicago, and while I can’t recall if I saw Selfe’s address in person (I think I did), I definitely remember the times.

After finishing my PhD in 1996, I was hired by Southern Oregon University as their English department’s first “computers and writing” specialist. At the 1998 convention, I met up with my future colleagues at EMU because I had recently accepted the position I currently have, where I was once again hired as a computer and writing specialist. At both SOU and EMU, I had colleagues– you will not be surprised to learn these tended to be senior colleagues– who questioned why there was any need to add someone like me to the faculty. In some ways, it was similar to the complaints I’ve seen on social media about faculty searches involving AI specialists in writing studies and related fields.

Anyway, Selfe argues that in hiring specialists, English departments relieved the rest of the faculty of any responsibility to deal with computer technology. It enabled a continued belief that computers are simply "tool[s] that individual faculty members can use or ignore in their classrooms as they choose, but also one that the profession, as a collective whole–and with just a few notable exceptions–need not address too systematically." Instead, she argued that what people in our profession needed to do was to pay attention to these issues, even if we really would rather refuse to do so: "I believe composition studies faculty have a much larger and more complicated obligation to fulfill–that of trying to understand and make sense of, to pay attention to, how technology is now inextricably linked to literacy and literacy education in this country. As a part of this obligation, I suggest that we have some rather unpleasant facts to face about our own professional behavior and involvement." She goes on a couple of paragraphs later to say in all italics "As composition teachers, deciding whether or not to use technology in our classes is simply not the point–we have to pay attention to technology."

Again, I’m guessing the connection to Selfe’s call then to pay attention to computer technology and my call now to pay attention to AI is pretty obvious.

The specific case example Selfe discusses in detail in her address is a Clinton-Gore era report called Getting America's Children Ready for the Twenty-First Century, which was about that administration's efforts to promote technological literacy in education, particularly in K-12 schools. The initiative spent millions on computer equipment, an amount of money that dwarfed the spending on literacy programs. As I recall those times, the main problem with this initiative was that lots of money was spent to put personal computers into schools, but very little was spent on helping teachers figure out how to use those computers in their classrooms. Selfe said, "Moreover, in a curious way, neither the CCCC, nor the NCTE, the MLA, nor the IRA–as far as I can tell–have ever published a single word about our own professional stance on this particular nationwide technology project: not one statement about how we think such literacy monies should be spent in English composition programs; not one statement about what kinds of literacy and technology efforts should be funded in connection with this project or how excellence should be gauged in these efforts; not one statement about the serious need for professional development and support for teachers that must be addressed within context of this particular national literacy project."

Selfe closes with a call for action and a need for our field and profession to recognize technology as part of the important work we all do around literacy. I've cherry-picked a couple of quotes here to share at the end. Again, by "technology," Selfe more or less meant PCs, networked computers, and the web– tools we now take for granted. But also again, every single one of these calls applies to AI as well.

Now, I think the CCCCs community and the discipline as a whole have moved in the direction Selfe was urging in her CCCCs address. Unlike the way things were in the 1990s, I think there is widespread interest in the CCCC community in studying the connections between technologies and literacy. Unlike then, both MLA and CCCCs (and presumably other parts of NCTE) have been engaged and paying attention. There is a joint CCCC-MLA task force that has issued statements and guidance on AI literacy, along with a series of working papers, all things Selfe was calling for back then. Judging from this year’s program and the few presentations I have been able to attend, it seems like a lot more of us are interested in engaging and paying attention to AI rather than refusing it. 

At the same time, there is an echo–okay, one sound reference– of the scholarship in the early era of personal computers. A lot of the scholarship about AI now is based on teachers’ experiences of experimenting with it in their own classes. And we’re still revisiting a lot of the same questions regarding the extent to which we should be teaching students how to use AI, the issues of authenticity and humanness, of cheating, and so forth. History doesn’t repeat, but it does rhyme.

Let me close by saying I have no idea where we’re going to end up with AI. This fall, I’m planning on teaching a special topics course called Writing, Rhetoric, and AI, and while I have some ideas about what we’re going to do, I’m hesitant about committing too much to a plan now since all of this could be entirely different in a few months. There’s still the possibility of generative AI becoming artificial general intelligence and that might have a dramatic impact on all of our careers and beyond. Trump and shadow president Elon Musk would like nothing better than to replace most people who work for the federal government with this sort of AI. And of course, there is also the existential albeit science fiction-esque possibility of an AI more intelligent than humans enslaving us.

But at least I think that we’re doing a much better job of paying attention to technology nowadays.


“Thoughts”

The first time I attended and presented at the CCCCs was in 1995. It was in Washington, D.C., and I gave a talk that was about my dissertation proposal. I don’t remember all the details, but I probably drove with other grad students from Bowling Green and split a hotel room, maybe with Bill Hart-Davidson or Mick Doherty or someone like that. I remember going to the big publisher party sponsored by Bedford-St. Martin’s (or whatever they were called then) which was held that year at the National Press Club, where they filled us with free cocktails and enough heavy hors d’oeuvres to serve as a meal.

For me, the event has been going downhill for a while. The last time I went to the CCCCs in person was in 2019– pre-Covid, of course– in Pittsburgh. I was on a panel of three scheduled for 8:30 am Friday morning. One of the people on the panel was a no-show, and the other panelist was Alex Reid; one person showed up to see what we had to say– though at least that one person was John Gallagher. Alex and I went out to breakfast, and I kind of wandered around the conference after that, uninterested in anything on the program. I was bored and bummed out. I had driven, so I packed up and left Friday night, a day earlier than I planned.

And don’t even get me started on how badly the CCCCs did at holding online versions of the conference during Covid.

So I was feeling pretty “done” with the whole thing. But I decided to put in an individual proposal this year because I was hoping it would be the beginning of another project to justify a sabbatical next year, and I thought going to one more CCCCs 30 years after my first one rounded things out well. Plus it was a chance to visit Baltimore and to take a solo road trip.

This year, the CCCCs/NCTE leadership changed the format for individual proposals, something I didn't figure out until after I was accepted. Instead of creating panels made up of three or four individual proposals, which is what the CCCCs had always done before– and what every other academic conference I have ever attended does with individual proposals– they decided that individuals would get a 30-minute solo session. To make matters even worse, my time slot was 9:30 am on Saturday, which is the day most people are traveling back home.

Oh, also: my sabbatical/research release time proposal got turned down, meaning my motivation for doing this work at all has dropped off considerably. I thought about bailing out right up to the morning I left. But I decided to go through with it because I was also going to Richmond to visit my friend Dennis, I still wanted to see Baltimore, and I still liked the idea of going one more time, 30 years later.

Remarkably, I had a very good time.

It wasn’t like what I think of as “the good old days,” of course.  I guess there were some publisher parties, but I missed out on those. I did run into people who I know and had some nice chats in the hallways of the enormous Baltimore convention center, but I mostly kept to myself, which was actually kind of nice. My “conference day” was Friday and I saw a couple of okay to pretty good panels about AI things– everything seemed to be about AI this year. I got a chance to look around the Inner Harbor on a cold and rainy day, and I got in half-price to the National Aquarium. And amazingly, I actually had a pretty decent-sized crowd (for me) at my Saturday morning talk. Honestly, I haven’t had as good of a CCCCs experience in years.

But now I’m done– probably.

I’m still annoyed with (IMO) the many many failings of the organization, and while I did have a good solo presenting experience, I still would have preferred being on a panel with others. But honestly, the main reason I’m done with the CCCCs (and other conferences) is not because of the conference but because of me. This conference made it very clear: essentially, I’ve aged out.

When I was a grad student/early career professor, conferences were a big deal. I learned a lot, I was able to do a lot of professional/social networking, and I got my start as a scholar. But at this point, where I am as promoted and as tenured as I’m ever going to be and where I’m not nearly as interested in furthering my career as I am retiring from it, I don’t get much out of all that anymore. And all of the people I used to meet up with and/or room with 10 or so years ago have quit going to the CCCCs because they became administrators, because they retired or died, or because they too just decided it was no longer necessary or worth it.

So that’s it. Probably. I have been saying for a while now that I want to shift from writing/reading/thinking about academic things to other non-academic things. I started my academic career as a fiction writer in an MFA program, and I’ve thought for a while now about returning to that. I’ve had a bit of luck publishing commentaries, and of course, I’ll keep blogging.

Then again, I feel like I got a good response to my presentation, so maybe I will stay with that project and try to apply for a sabbatical again. And after all, the CCCCs is going to be in Cleveland next year and Milwaukee the year after that….

Now is a Good Time to be at a “Third Tier” University

The New York Times ran an editorial a couple of weekends ago called "The Authoritarian Endgame on Higher Education," whose first sentence was "When a political leader wants to move a democracy toward a more authoritarian form of government, he often sets out to undermine independent sources of information and accountability." The editorial goes on to describe the hundreds of millions of dollars of cuts in grants, and while the cuts are especially large and newsworthy at Johns Hopkins ($800 million) and Columbia ($400 million), they're happening in lots of smaller amounts at lots of research universities. Full disclosure: my son is a post-doc at Yale, and while his lab has not been severely impacted by these cuts (yet), it continues to be a looming problem for him and his colleagues.

The NYT's editorial board is correct: Trump is following the playbook of other modern authoritarian leaders (Putin, Orban in Hungary, Modi in India, Erdogan in Turkey, etc.) and is trying to weaken universities. Trump and shadow president Musk are cutting off the funding from the National Institutes of Health (and other similar federal agencies) to research universities not so much because of waste, fraud, or a desire to end DEI initiatives, and they're destroying the rest of the federal government not because they want to save money. They're doing it to consolidate power. They are trying to revamp the U.S. into an authoritarian system run by big tech and billionaires. I wish the MSM would remind people more often that this is what is going on right now.

Then last week, Princeton President Christopher Eisgruber wrote a piece published in The Atlantic in which he insisted that now was the time for universities like Columbia to stand up to the Trump administration in the name of academic freedom. He quotes Joan Scott of the American Association of University Professors, who said "Even during the McCarthy period in the United States, this was not done." The day after The Atlantic ran Eisgruber's column, Columbia more or less caved in and appeared to be ready to give Trump what he wanted.

And of course, Trump signed an executive order to close down the Department of Education– which is not something that Trump can do without Congress, but never mind the details of the law.

This is all very bad for all kinds of reasons that go well beyond the impact on these institutions. The money at stake is grant money from agencies like the National Institutes of Health that funds research, typically the kind of basic research that the private sector doesn't do– but of course, research that the private sector profits from greatly. Just about every medical breakthrough you can think of over the last 75 years has been a result of this partnership between the feds and research universities, but to use one example close to my own heart (and the rest of my body) right now: take Zepbound. One of the origins of these current weight loss drugs was basic research the NIH and other federal government agencies did back in the '80s and '90s on the venom of Gila monsters, the kind of research the MSM and politicians frequently mock– "why are we spending so much money to research lizards?" Because that's where discoveries are made that eventually lead to all sorts of surprising benefits.

But there is one detail about the way this story is being reported that bothers me. MSM puts all universities into the same bucket when the reality is much more complicated than that. The universities most impacted by Trump’s actions are very different kinds of institutions than the ones where I’ve spent my career.

In my book about MOOCs (More Than A Moment), I wrote a bit about the disparity between different tiers of universities, and how MOOCs (potentially) made the distance between higher ed’s haves and have-nots even greater. I frequently referenced the book A Perfect Mess: The Unlikely Ascendancy of American Higher Education by David F. Labaree. If you too are interested in the history of higher education (and who isn’t?), I’d highly recommend it. Among other things, Labaree describes the unofficial but well-understood hierarchy of different institutions. At the bottom fourth tier of this pyramid are community colleges, and I would also add proprietary schools and largely online universities. Roughly speaking, there are about 1,000 schools in this category. Labaree says that the third tier consists of universities that mostly began as “normal schools” in the 19th century, though I would add into that tier lots of small/private/often religious/not elite colleges, along with most other regional institutions. There are probably close to 1500 institutions in this category, and I think it’s fair to say most four-year colleges and universities in the US are in this group. EMU, which began as the Michigan State Normal School, is smack-dab in the middle of this tier.

The second tier and top tier are probably easiest for most non-academic types to understand because these are the only kinds of places that the MSM routinely reports on as being "higher education." Roughly speaking, these two tiers comprise about the top 150 or so national universities in the US News and World Report rankings, with the top fifty or so in those rankings being the tippy-top first tier. By the way, EMU is "tied" as the 377th school on the list.

Now, those universities at the tippy-top that receive a lot of NIH and other federal grants– Columbia, Johns Hopkins, Michigan, Yale, etc.– have a serious problem because those grants are a major revenue stream. But for the rest of us in higher ed, especially on the third tier? Well, I was in a meeting just the other day where one of my colleagues asked an administrator when EMU could expect to see a cut in federal funding. This administrator, who seemed a little surprised at the question, pointed out that about 25% of our funding comes from state appropriations, and the rest of it comes from tuition. The amount of direct federal funding we receive is negligible.

And herein lies the Trump administration's challenge in taking over education in this country, thankfully. Unlike most other countries in the world, where schooling is more centralized, public education in the United States is quite decentralized and is mostly controlled by states and localities. As this piece from Inside Higher Ed reminds us, the main role of the federal government in higher education (besides collecting data about higher education nationwide, working with accreditors, and overseeing students' civil rights) is to run the student loan and Pell Grant programs. The Trump administration has repeatedly said they want these programs to continue even if they are successful at eliminating the Department of Education. Not that I completely believe that– Trump/Musk might want to cut Pell Grants, and they are trying to roll back Biden's moves on loan forgiveness. But given how many students (and their parents) depend on these programs, including MAGA voters, I don't see these programs going away.

In other words, now is a good time to be at a third-tier university.

Now, that New York Times editorial does have one paragraph where they acknowledge this difference between the haves and have-nots:

We understand why many Americans don’t trust higher education and feel they have little stake in it. Elite universities can come off as privileged playgrounds for young people seeking advantages only for themselves. Less elite schools, including community colleges, often have high dropout rates, leaving their students with the onerous combination of debt and no degree. Throughout higher education, faculty members can seem out of touch, with political views that skew far to the left.

I don't know how much Americans do or don't "trust" higher education, but the main reason why EMU and similar universities have a much higher dropout rate is that we admit students more selective universities don't. I don't remember the details, but I heard a story years ago about an administrator in charge of admissions at EMU. When he was asked why our graduation rate is around 50% while the University of Michigan's rate is more like 93%, he responded, "Why isn't U of M's graduation rate 100%? They only admit students they know will graduate." In contrast, EMU (and most other universities in the third tier) takes a lot of chances and admits almost everyone who applies.

I’m biased of course, but I think a more accurate way to frame the role of third-tier/regional universities is as institutions of opportunity. We give folks who would otherwise have few options a chance at a college degree. We aren’t a school that helps upper-middle-class kids stay that way. We’re a school that helps working class/working poor students improve their lives, to be one of the first (if not the first) people in their families to graduate from college. Sure, a lot of the students we admit don’t make it for all kinds of different reasons. But I think the benefits we provide to the ones who succeed in graduating outweigh the problems of admitting students who are just not prepared to go to college. Though I’ll admit it’s a close call.

Anyway, I don’t know what those of us working on the lower levels of the pyramid can do to help those at the top, if there’s anything we can do. That’s the frustration of everyone against Trump right now, right? What can we do?

My Peter Elbow Story

Peter Elbow died earlier this month at the age of 89. The New York Times had an obituary February 27 (a gift article) that did a reasonably good job of capturing his importance in the field of composition and rhetoric. I would not agree with the Times that Elbow’s signature innovation, freewriting, is a “touchy-feely” technique, but other than that, I think they get it about right. I can think of plenty of other key scholars and forces in the field, but I can’t think of anyone more important than Elbow.

Elbow was an active scholar and regular presence at the Conference for College Composition and Communication well into the 2000s. I remember seeing him in the halls going from event to event, and I saw him speak several times, including a huge event where he and Wayne Booth presented and then discussed their talks with each other.

A lot of people in the field had one story or another about meeting Peter Elbow; here’s my story (which I shared on Facebook earlier this month when I first learned of his passing):

When I was a junior in high school, in 1982-83 and in Cedar Falls, Iowa, I participated in some kind of state-wide or county-wide writing event/contest. This was a long time ago and I don’t remember any of the details about how it worked or what I wrote to participate in it, but I’m pretty sure it was an essay event/contest of some sort– as opposed to a fiction/poetry contest. It was held on the campus of the University of Northern Iowa, which is in Cedar Falls. So because it was local, a bunch of people from my high school and other local schools and beyond showed up. My recollection is that students participated in a version of a peer review workshop.

This event was also a contest of some sort, and there was a banquet everyone went to where there were “winners” of some sort. I definitely remember I was not one of them. The banquet was a buffet, and I remember going through the line and there was this old guy (well, he would have been not quite 50 at this point) who was perfectly polite and nice and with a wandering eye, getting something out of a chafing dish right next to me. I don’t remember the details, but I think he asked me what I thought of this whole peer review thing we did, and I’m sure I told him it was fun because it was.

So then it turns out that this guy was there to give some kind of speech to all of the kids and all of the teachers and other adults who were at this thing. Well, really this was a speech for the teachers and adults, and the kids were just there. I don’t remember how many were there, but I’m guessing maybe 100-200 people. I don’t remember anything Elbow talked about and I didn’t think a lot about it afterwards. But then a few years later, when I was first introduced to Elbow’s work in the comp/rhet theory class I took in my MFA program, I somehow figured out that I had met that guy years before and didn’t realize it at the time.

I can’t say I’ve read a ton of his writing, but what I have read I have found both smart and inspirational. It’s hard for me to think of anyone else who has had as much of an influence on shaping the field and the kind of work I do. May his memory be a blessing to his friends and family.

A New Substack About My AI Research: “Paying Attention to AI”

As I wrote about earlier in December, I am “Back to Blogging Again” after experimenting with shifting everything to Substack. I switched back to blogging because I still get a lot more traffic on this site than on Substack, and because my blogging habits are too eclectic and random to be what I think of as a Newsletter. I realize this isn’t true for lots of Substackers, but to me, a Newsletter should be about a more specific “topic” than a blog, and it should be published on a more regular schedule.

So that’s my goal with “Paying Attention to AI.” We’ll see how it works out. I still want to post those Substack things here too– because this is a platform I control, unlike any of the other ones owned by tech oligarchs or whatever, and because while I do like Substack, there is still the “Nazi problem” they are trying to work out. Besides, while Substack could be bought out and turned into a dumpster fire (lookin’ at you, X), no one is going to buy stevendkrause.com, and that’s even if I were selling.

Anyway, here’s the first post on that new Substack space.

Welcome to (working title) Paying Attention to AI

More Notes on Late 20th Century Composition, CAI, Word Processing, the Internet, and AI

My goal for this Substack site/newsletter/etc. is to write (mostly to myself) about what will probably be the last big research/scholarly project of my academic career, but I still don’t have a good title. I’m currently thinking “Paying Attention to AI,” a reference to Cynthia Selfe’s “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which was her chair’s address at the 1997 Conference for College Composition and Communication before it was republished in the journal College Composition and Communication in 1999 and also expanded into the book Technology and Literacy in the Twenty-First Century.

But I also thought something mentioning AI, Composition, and “More Notes” would be good. That’s a reference to “A Note on Composition and Artificial Intelligence,” a brief 1983 article by Hugh Burns in the first newsletter issue of what would become the journal Computers and Composition. AI meant something quite different in the late 1970s/early 1980s, of course. Burns was writing then about how research in natural language processing and AI could help improve Computer Assisted Instruction (CAI) programs, which were then seen as one of the primary uses of computer technology in the teaching of writing— along with the new and increasingly popular word processing programs that ran on the then newly emerging personal computers.

Maybe I’ll figure out a way to combine the two into one title…

This project is based on a proposal that’s been accepted for the 2025 CCCCs in Baltimore, and also a proposal I have submitted at EMU for a research leave or a sabbatical for the 2025-26 school year. 1 I’m interested in looking back at the (relatively) recent history of the beginnings of the widespread use of “computers” (CAI, personal computers, word processors and spell/grammar checkers, local area networks, and the beginnings of “the internet”).

Burns’ and Selfe’s articles make nice bookends for this era for me because from the late 1970s until about the mid-1990s, there were hundreds of presentations and articles in major publications in writing studies and English about the role of personal computers and (later) the internet and the teaching of writing. Burns was enthusiastic about the potential of AI research and writing instruction, calling for teachers to use emerging CAI and other tools. It was still largely theoretical, though: in 1983, fewer than 8% of households had a personal computer. By the time Selfe was speaking and then writing 13 or so years later, over 36% of households had at least one computer, and the internet and the “World Wide Web” were rapidly taking their place as a general purpose technology altering the ways we do nearly everything, including how we teach and practice writing.

These are also good bookends for my own history as a student, a teacher, and a scholar, not to mention as a writer who dabbled a lot with computers for a long time. I first wrote with computers in the early 1980s while in high school. I started college in 1984 with a typewriter, and I got a Macintosh 512KE by about 1986. I was introduced to the idea of teaching writing in a lab of terminals— not PCs— connected to a mainframe Unix computer when I started my MFA program at Virginia Commonwealth University in fiction writing in 1988. (I never taught in that lab, fwiw). In the mid-90s, while in my PhD program at Bowling Green State University, the internet and “the web” came along, first as text (remember Gopher? Lynx?) and then as GUI interfaces like Netscape. By the time Selfe was urging the English teachers attending the CCCCs to, well, pay attention to technology, I had started my first tenure-track job.

A lot of what I read about AI right now (mostly on social media and MSM, but also in more scholarly work) has a tinge of the exuberant enthusiasm and/or the moral panic about the encroachment of computer technology back then, and that interests me a great deal. But at the same time, this is a different moment in lots of small and large ways. For one thing, while CAI applications never really caught on for teaching writing (at least beyond middle school), AI shows some real promise in making similar tutoring tools actually work. Of course, there were also a lot of other technologies and tools way back when that had their moments but then faded away. Remember MOOs/MUDs? Listservs? Blogs? And more recently, MOOCs?

So we’ll see where this goes.

1 FWIW: in an effort to make it kinda/sorta fit the conference theme, this presentation is awkwardly titled “Echoes of the Past: Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction.” This will almost certainly be the last time I attend the CCCCs, my field’s annual flagship conference, because, as I am sure I will write about eventually, I think it has become a shit show. And whether or not this project continues much past the April 2025 conference will depend heavily on the research release time from EMU. Fingers crossed on that.

Is Apple Intelligence (and AI) For Dumb and Lazy People?

And the challenges of an AI world where everyone is above average

I’ve been an Apple fanboy since the early 1980s. I owned one Windoze computer years ago that was mostly for games my kid wanted to play. Otherwise, I’ve been all Apple for around 40 years. But what the heck is the deal with these ads for Apple Intelligence?

In this ad (the most annoying of the group, IMO), we see a schlub of a guy, Warren, emailing his boss in idiotic/bro-based prose. He pushes the Apple Intelligence feature and boom, his email is transformed into appropriate office prose. The boss reads the prose, is obviously impressed, and the tagline at the end is “write smarter.” Ugh.

Then there’s this one:

This guy, Lance, is in a board meeting and he’s selected to present about “the Prospectus,” which he obviously has not read. He slowly wheels his office chair and his laptop into the hallway, asks Apple’s AI to summarize the key points in this long thing he didn’t read. Then he slowly wheels back into the conference room and delivers a successful presentation. The tagline on this one? “Catch up quick.” Ugh again.

But in a way, these ads might not be too far off. These probably are the kind of “less than average” office workers who could benefit the most from AI— well, up to a point, in theory.

Among many other things, my advanced writing students and I read Ethan Mollick’s Co-Intelligence, and in several different places in that book, he argues that in experiments when knowledge workers (consultants, people completing a writing task, programmers) use AI to complete tasks, they are much more productive. Further, while AI does not make already excellent workers that much better, it does help less than excellent workers improve. There’s S. Noy and W. Zhang’s Science paper “Experimental evidence on the productivity effects of generative artificial intelligence”; here’s a quote from the editor’s summary:

Will generative artificial intelligence (AI) tools such as ChatGPT disrupt the labor market by making educated professionals obsolete, or will these tools complement their skills and enhance productivity? Noy and Zhang examined this issue in an experiment that recruited college-educated professionals to complete incentivized writing tasks. Participants assigned to use ChatGPT were more productive, efficient, and enjoyed the tasks more. Participants with weaker skills benefited the most from ChatGPT, which carries policy implications for efforts to reduce productivity inequality through AI.

Then there’s S. Peng et al and their paper “The Impact of AI on Developer Productivity: Evidence from GitHub Copilot.” This was an experiment with a programming AI on GitHub, and the programmers who used AI completed tasks 55.8% faster. And Mollick talks a fair amount about a project he was a co-author on, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” which found that consultants in an experiment were more productive when allowed to use AI— except when faced with a “jagged technology frontier” problem, which in the study was a technical problem beyond the AI’s abilities. However, one of the problems Mollick and his colleagues observed is that a lot of the subjects in their study often copied and pasted content from the AI with minimal editing, and the AI-using subjects had a much harder time with that jagged frontier problem. I’ll come back to this in a couple more paragraphs.

Now, Mollick is looking at AI as a business professor, so he sees this as a good thing because it improves the quality of the workforce, and maybe it’ll enable employers to hire fewer people to complete the same tasks. More productivity with less labor equals more money, capitalism for the win. But my English major students and I all see ourselves (accurately or not) as well-above-average writers, and we all take pride in that. We like the fact we’re better at writing than most other people. Many of my students are aspiring novelists, poets, English teachers, or some other career where they make money from their abilities to write and read, and they all know that publishing writing that other people read is not something that everyone can do. So the last thing any of us who are good at something want is a technology that diminishes the value of that expertise.

This is part of what is behind various declarations of late for refusing or resisting AI, of course. Part of what is motivating someone like Ted Chiang to write about how AI can’t make art is that making art is what he is good at. The last thing he wants is a world where any schmuck (like those dudes in the Apple AI ads) can click a button and be as good as he is at making art. I completely understand this reason for fearing and resisting AI, and I too hope that AI doesn’t someday in the future become humanity’s default story teller.

Fortunately for writers like Chiang and me and my students, the AI hype does not square with reality. I haven’t played around with Apple AI yet, but the reviews I’ve seen are underwhelming. I stumbled across a YouTube review by Marques Brownlee about the new AI that is quite thorough. I don’t know much about Brownlee, but he has over 19 million subscribers so he probably knows what he is talking about. If you’re curious, he talks about the writing feature in the first few minutes of this video, but the short version is he says that as a professional writer, he finds it useless.

The other issue I think my students and I are noticing is that the jagged frontier Mollick and his colleagues talk about— that is, the line/divide between tasks the AI can accomplish reasonably well and what it can’t— is actually quite large. In describing the study he and his colleagues did, which included a jagged frontier problem deliberately designed to be beyond the AI’s abilities, I think Mollick implies that this frontier is small. But Mollick and his colleagues— and the same is true of the other studies he cites on this— are not studying AI in real settings. These are controlled experiments, and the researchers are trying to do all they can to eliminate other variables.

But in the more real world with lots of variables, there are jagged frontiers everywhere. The last assignment I gave in the advanced writing class asked students to attempt to “compose” or “make” something with the help of AI (a poem, a play, a song, a movie, a website, etc. etc.) that they could not do on their own. The reflection essays are not due until the last week of class, but we have had some “show and tell” exchanges about these projects. Some students were reasonably successful with making or doing something thanks to AI— and as a slight tangent: some students are better than others at prompting the AI and making it work for them. It’s not just a matter of clicking a button. But they all ran into that frontier, and for a lot of students, that was essentially how their experiment ended. For example, one student was successful at getting AI to generate the code for a website, but this student didn’t know how to turn the code the AI made into an actual website. A couple of students tried to use AI to write music, but since they didn’t know much about music, their results were limited. One student tried to get AI to teach them how to play the card game Euchre, but the AI kept on doing things like playing cards in the student’s hand.

This brings me back to these Apple ads: I wish they both went on just another minute or so. Right after Warren and Lance confidently look directly at the camera with a smug look that says to viewers “Do you see what I just got away with there,” they would have to follow through with what they supposedly accomplished, and I have a feeling that would go poorly. Right after Warren’s boss talks with him about that email and right after Lance starts his summary, I am pretty sure they’re gonna get busted. Sort of like what has happened when I have correctly suspected that a student used too much AI and that student can’t answer basic questions about what it is they (supposedly) wrote.

New School Year Resolutions

Well, sort of….

The 2024-25 school year is my 36th year teaching college (counting my time as a grad student and a part-timer), my 26th year as a tenure-track professor at EMU, and my 17th as a full professor. So it’s probably no wonder that when I think of the “new year,” I think of the new school year at least as much as I think of January. On the old blog, I usually wrote a post around this time of year, reflecting on the school year that was and the year that was likely ahead of me. No reason to stop doing that now, right?

So, kind of in the form of resolutions, here’s what I’m hoping to accomplish this school year— mostly with work stuff, with a few life things on the list too.

Wade Deeper into AI in My Teaching— Much Deeper

This fall, I’m going to be teaching two sections of the required first year writing course (aka “freshman comp”), and a junior/senior level course called “Digital Writing.”

For first year writing, I have never let students do research on whatever they wanted. Instead, I have always had a common research theme; for example, a few years ago, the theme was “social media,” meaning students’ semester-long research project had to have something to do with social media. This semester, the theme for my sections of first year writing is going to be “AI and your future career goals.”

The Digital Writing course is one I helped develop quite a while ago and it has gone through various evolutions. It’s a course that explores literacy as a technology, and it is also about the relationships between “words in a row” writing and multimedia writing. I have always started the course with readings from Walter Ong, Dennis Baron, a selection from Plato’s Phaedrus (where Socrates talks about the nature of writing), and similar kinds of texts, and also with an assignment where students have to “invent” a way of writing without any of the conventional tools. Maybe I’ll post more about that later here. In previous versions of the course, the next two projects were something more multimedia-ish: podcast-like audio presentations, short videos, comics, memes, mashups, etc. But this semester, the second two projects are both going to be deep dives into AI— and I’m still trying to figure out what that means. In that class (and among other readings), I’m assigning Ethan Mollick’s Co-Intelligence: Living and Working with AI. I’m sure I’ll write more about all of that later too.

I don’t know how this is going to go, and I think it is quite possible that it will turn out poorly. I think it’ll be interesting though.

Try to be at least a little more “involved”

Being in my 36th year of teaching at the college level means that I’m getting closer to retiring— or at least officially retiring. I don’t think I can afford to retire for another seven years (when I’ll be 65), and I don’t think I’ll want to work much past 70 (12 years from now). Unofficially though, as the joke goes, I retired from service work six years ago.

Just service, mind you: I’m not “deadwood” because I’m still publishing and presenting (at least some), and I’m still trying to innovate with my teaching. But I’ve been unofficially retired from service and committee work in my department since about 2018, mainly because I spent 13 of my first 20 years here doing A LOT of service. I had a couple of different coordinator positions, I chaired a number of searches, and I had been on just about every elected committee at one time or another. I was burnt out, I wanted to get out of the way for younger faculty to step up, and I think my colleagues were tired of me being involved in everything. So for the last six years, I’ve been a lot more checked out. I meet with my fellow writing faculty about things, and I’ll go to a department meeting if there’s something important on the agenda, but that’s about it.

This year, I think I want to make more of an effort to be a little more involved with happenings on campus, I guess for two reasons. First, after six years away, I’m just ready to be back, at least a bit. After all, I did a lot of service stuff for my first 20 years because I liked it and I was good at it. Second, EMU is going through some interestingly difficult times as an institution. Like most of the other regional universities in the state and a lot of similar places in the upper midwest and northeast, we’ve had falling enrollments for a while, and it seems to have gotten worse in the last two years. Falling enrollments have resulted in dramatic budget cuts and declining numbers of faculty and staff. At the same time, the administration has tried to save some money with some dubious outsourcing decisions.

Just to add to the drama a bit: we’re going to have to have some serious conversations this year about the future of most of my department’s graduate programs; the dean has announced that she is taking an early buyout and is leaving at the end of the school year; and the president announced a while ago that he will be retiring at the end of his contract in 2026. Which, when I think about it, might be when the faculty union will be negotiating a new contract.

I could go on, but you get the idea. There’s too much going on around here now to be checked out.

I’m not quite sure what “trying to be at least a little more involved” means, and I’m not interested in taking on any huge service jobs. I’m not planning on running to be on the executive committee of the faculty union, for example. But I suppose it means at least going to more informational meetings about things on campus.

(I should note that I have already failed on this resolution: I attended a kicking-off-the-semester department meeting this morning, but then decided to blow off the College of Arts and Sciences meeting in the afternoon).

Put together my next (maybe last?) sabbatical/research release project proposal

I have a few ideas, mostly about AI and teaching (not surprisingly). As was the case with my work on MOOCs and, before that, the emergence of different writing technologies and pedagogy, I’m interested in seeing which tools and technologies from the past were disruptive in ways similar to AI. That’s kind of vague, both on purpose and because that’s where I’m at in the process.

Anyway, sabbaticals and semester long research releases are competitive, and I’m eligible to submit a proposal in January 2025 for a semester off from teaching to research in the 2025-26 school year.

Keep figuring out Substack

The look and feel of this interface versus WordPress is intriguing, and while there are features I wish this had, there’s something to be said for the simplicity and uniformity of Substack— at least I think so far. I don’t think I’ll be able to rely on revenue from newsletter subscriptions anytime soon, and that’s not really my goal. On the other hand, if I could convince 1000 people to give me $100 a year for stuff I write here…

Keep losing weight with Zepbound

I started Zepbound in the first week of January 2024 and, as of today, I’ve lost about 35 pounds. It’s not all the result of the drugs, but it’s— well, yes, it is all the result of the drugs. Anyway, my resolution here is to keep doing what I’m doing and (ideally) lose another 25-30 pounds before the end of the semester.

TALIA? This is Not the AI Grading App I Was Searching For

(My friend Bill Hart-Davidson unexpectedly died last week. At some point, I’ll write more about Bill here, probably. In the meantime, I thought I’d finish this post I started a while ago about the webinar about Instructify’s AI grading app. Bill and I had been texting/talking more about AI lately, and I wish I would have had a chance to text/talk more about this. Or anything else).

In March 2023, I wrote a blog post titled “What Would an AI Grading App Look Like?” I was inspired by what I still think is one of the best episodes of South Park I have seen in years, “Deep Learning.” Follow this link for a detailed summary or look at my post from last year, but in a nutshell, the kids start using ChatGPT to write a paper assignment and Mr. Garrison figures out how to use ChatGPT to grade those papers. Hijinks ensue.

Well, about a month ago and at a time when I was up to my eyeballs in grading, I saw a webinar presentation from Instructify about their AI product called TALIA. The title of the webinar was “How To Save Dozens of Hours Grading Essays Using AI.” I missed the live event, but I watched the recording– and you can too, if you want– or at least you could when I started writing this. Much more about it after the break, but the tl;dr version is this AI grading tool is not the one I am looking for (not surprisingly), and I think it would be a good idea for these tech startups to include people with actual experience with teaching writing on their development teams.


Bomb Threat

It is “that time” of the semester, made all the worse by it also being “that time” of the school year, mid-April. Everyone on campus– students of course, but also faculty and staff and administrators and everyone else– is peak stressed out because this is the time when everything is due. We’re all late. My students are late finishing stuff they were supposed to finish a couple of weeks ago, and for me that means I’m late finishing reading/commenting on/grading all of those things they haven’t quite finished. We are mutually late. And just to make the vibe around it all that much more spooky, there’s the remaining mojo of Monday’s eclipse.

So sure, let’s throw a stupid bomb threat into the mix.

This entry from “EMU Today” (EMU’s public relations site) provides a timeline, and this story from The Detroit News is a good summary of the event. I was in my office during all this, slowly climbing Grading Mountain (the summit is visible, the end is near, and yet it is further away than I had hoped) and responding to earlier student emails about missing class because of “stress” and such. Then I started getting messages from EMU’s emergency alert system. “Emergency reported in Wise, Buell, Putnam (these are dorms). Please evacuate the building immediately.” This was followed a few minutes later by a similar message about clearing several other dorms and an update that said it was a bomb threat.

EMU set up an emergency alert system a few years ago as part of a response to the rise in school and college campus shootings and violence happening around the country. They rolled this out at about the same time the campus security folks started holding workshops about how to properly shelter in place. I believe yesterday’s bomb threat was the first time this system was used for a threat like this. Previously, the only alerts I think I have received from this system (besides regular system tests) had to do with the weather– messages about campus being closed because of a snowstorm. It is also worth mentioning that this time, the alert system didn’t just send everyone a text. It also sent emails and robocalls, which meant all my devices lit up in a few different ways.

Our son Will (who lives in Connecticut) texted me and Annette because, for whatever reason, he’s signed up to get these EMU emergency messages and he was concerned. Annette, who wasn’t on campus, wasn’t sure what was going on. When EMU announced, a few minutes after the evacuation messages, that it was a bomb threat, I knew it had to be a hoax. I knew (well, I assumed) this in part because I have a good view of several of these dorms from my office, and it wasn’t like I was seeing cops and firefighters rushing into those buildings. Mostly what I saw were students hanging around outside the dorms looking at their phones.

I also thought immediately it was a hoax because 99.9999% of the time, bomb threats are hoaxes. One of the few colleagues of mine who was around the offices at the same time as me poked his head in my door and asked if I was going to still have class. “Well, yeah,” I said, “no one has said classes are cancelled.”

Rather than spending another hour or so prepping for my two afternoon classes and at least making a tiny bit more of a dent in all the grading as I had planned, I instead spent the time responding to student emails and then sending out group emails to my afternoon classes that yes indeed, we were meeting because EMU had not cancelled classes. Some students were genuinely confused, wondering if we were still having class because the alerts did not make that clear. Some emailed me about the logistics of it all, basically “I don’t know if I can make it because I need to get back into my dorm room to get my stuff first,” or whatever. Some were freaked out about the whole thing: they didn’t feel safe on campus, there was no way they were coming to class, etc. “Well, EMU has not cancelled classes, so we will be meeting,” I wrote back. And a couple of students seemed to sense this might be the excuse to skip they were hoping for.

About an hour after it all started and before my 2 pm class, we got another alert (or rather, three more alerts simultaneously) that the three dorms that had been named in the initial bomb threat had been inspected and declared clear. The other dorms had been evacuated as a precaution. At about 2:15, I got an email from the dean (forwarded to faculty by the department head) that no, classes were not cancelled.

Before my 2 pm class was over, EMU alerts sent a final message (again, three ways) to announce all was clear. But of course a lot of students were still freaked out– and for good reason, I guess. After my last class was over, I talked with one student who said he was nervous about spending the night in his dorm room, and I kind of understand that. But at the same time, maybe there was never anything to be afraid of?

I’m not saying that EMU overreacted because, obviously, all it takes is that 0.0001% chance where bombs go off simultaneously in the dorms like at the end of Fight Club. It’s not unlike a fire alarm going off in the dorms in the middle of the night (a regular occurrence, I’m told): everyone knows (or at least assumes) it’s because of some jackass, but you still have to evacuate, you still have to call the fire department, etc.

The whole thing pisses me off. At least it was a hoax and not a shooter, something that is always, always somewhere on everyone’s minds nowadays. At least no one was hurt beyond being freaked out for a while. And at least there are only about two weeks before the end of the semester.

Once Again, the Problem is Not AI (a Response to Justus’ and Janos’ “Assessment of Student Learning is Broken”)

I most certainly do not have the time to be writing this because it’s the height of the “assessment season” (e.g., grading) for several different assignments my students have been working on for a while now. That’s why posting this took me a while– I wrote it during breaks in a week-long grading marathon. In other words, I have better things to do right now. But I find myself needing to write a bit in response to Zach Justus and Nik Janos’ Inside Higher Ed piece “Assessment of Student Learning is Broken,” and I figured I might as well make it into a blog entry. I don’t want to be a jerk about any of this, and I’m sure Justus and Janos are swell guys and everything, but this op-ed bothered me a lot.

Justus and Janos are both professors at Chico State in California; Justus is a professor in Communications and is the director of the faculty development program there, and Janos is in sociology. They begin their op-ed about AI “breaking” assessment quite briskly:

Generative artificial intelligence (AI) has broken higher education assessment. This has implications from the classroom to institutional accreditation. We are advocating for a one-year pause on assessment requirements from institutions and accreditation bodies. We should divert the time we would normally spend on assessment toward a reevaluation of how to measure student learning. This could also be the start of a conversation about what students need to learn in this new age.

I hadn’t thought a lot about how AI might figure into institutional accreditation, so I kept reading. And that’s where I first began to wonder about the argument they’re making, because very quickly, they seem to equate institutional assessment with assessment in individual classes (grading). Specifically, most of this piece is about the problems (supposedly) caused by AI in a very specific assignment in a very specific sociology class.

I have no direct experience with institutional assessment, but as part of the Writing Program Administration work I’ve dipped into a few times over the years, I have some experience with program assessment. In those kinds of assessments, we’re looking at the forest rather than the individual trees. For example, as part of a program assessment, the WPAs might want to consider the average grades of all sections of first year writing. That sort of measure could tell us stuff about the overall pass rate and grade distribution across sections, and so on. But that data can’t tell you much about grades for specific students or the practices of a specific instructor. As far as I can tell, institutional assessments are similar “big picture” evaluations.

Justus and Janos see it differently, I guess:

“Take an introductory writing class as an example. One instructor may not have an AI policy, another may have a “ban” in place and be using AI detection software, a third may love the technology and be requiring students to use it. These varied policies make the aggregated data as evidence of student learning worthless.”

Yes, different teachers across many different sections of the same introductory writing class take different approaches to teaching writing, including with (or without) AI. That’s because individual instructors are, well, individuals– plus each group of students is different as well. Some of Justus and Janos’ reaction to these differences probably has to do with their disciplinary presumptions about “data”: if it’s not uniform and if it’s not something that can be quantified, then it is, as they say, “worthless.” Of course, in writing studies, we have no problem with much fuzzier and more qualitative data. So from my point of view, as long as the instructors are more or less following the same outcomes/curriculum, I don’t see the problem.

But like I said, Justus and Janos aren’t talking about institutional assessment. Rather, they devote most of this piece to a very specific assignment. Janos teaches a sociology class that has an institutional writing competency requirement for the major. The class has students “writing frequently” with a variety of assignments for “nonacademic audiences,” like “letters-to-the-editor, … encyclopedia articles, and mock speeches to a city council” meeting. Justus and Janos say “Many of these assignments help students practice writing to show general proficiency in grammar, syntax and style.” That may or may not be true, but it’s not at all clear how this was assigned or what sort of feedback students received.

Anyway, one of the key parts of this class is a series of assignments about:

“a foundational concept in sociology called the sociological imagination (SI), developed by C. Wright Mills. The concept helps people think sociologically by recognizing that what we think of as personal troubles, say being homeless, are really social problems, i.e., homelessness.”

It’s not clear to me what students read and study to learn about SI, but it’s a concept that’s been around for a long time– Mills wrote about it in a book in the 1950s. So not surprisingly, there is A LOT of information about this available online, and presumably that has been the case for years.

Students read about SI, and as part of their study, they “are asked to provide, in their own words and without quotes, a definition of the SI.” To help do this, students do activities like “role play,” where they pretend they are talking to friends or family about a social problem such as homelessness. “Lastly,” (to quote at length one last time):

…students must craft a script of 75 words or fewer that defines the SI and uses it to shed light on the social problem. The script has to be written in everyday language, be set in a gathering of friends or family, use and define the concept, and make one point about the topic.

Generative AI, like ChatGPT, has broken assessment of student learning in an assignment like this. ChatGPT can meet or exceed students’ outcomes in mere seconds. Before fall 2022 and the release of ChatGPT, students struggled to define the sociological imagination, so a key response was to copy and paste boilerplate feedback to a majority of the students with further discussion in class. This spring, in a section of 27 students, 26 nailed the definition perfectly. There is no way to know whether students used ChatGPT, but the outcomes were strikingly different between the pre- and post-AI era.

Hmm. Okay, I have questions.

  • You mean to tell me that the key deliverable/artifact that students produce in this class to demonstrate that they’ve met a university-mandated gen ed writing requirement is a passage of 75 words or fewer? That’s it? Really. Really? I am certainly not saying that being able to produce a lot of text should be the main factor for demonstrating “writing competency,” but this seems more than weird and hard to believe.
  • Is there any instructional apparatus for this assignment at all? In other words, do students have to produce drafts of this script? Is there any sort of in-class work with the role-play that’s documented in some way? Any reflection on the process? Anything?
  • I have no idea what the reading assignments and lectures were for this assignment, so I could very well be missing a key concept with SI. But I feel like I could have copied and pasted together a pretty good script just based on some Google searching around– if I was inclined to cheat in the first place. So given that, why are Justus and Janos confident that students hadn’t been cheating before Fall 2022?
  • The passage about the “before Fall 2022” approach to teaching this writing assignment says a lot. It sounds like there’s no actual discussion of what students wrote, and the main response to student writing back then was copied-and-pasted “boilerplate feedback.” So, in assessing this assignment, was Janos evaluating the unique choices students made in crafting their SI scripts? Or rather, was he evaluating these SI scripts for the “right answer” he provided in the readings or lectures?
  • And as Justus and Janos note, there is no good way to know for certain if a student handed in something made in part or in whole by AI, so why are they assuming that all of those students who got the “right answer” with their SI scripts were cheating?

So, Justus and Janos conclude, because now instructors are evaluating “some combination of student/AI work,” it is simply impossible to make any assessment for institutional accreditation. Their solution is “we should have a one-year pause wherein no assessment is expected or will be received.” What kinds of assessments are they talking about? Why only a year pause? None of this is clear.

Clearly, the problem here is not institutional assessment or the role of AI; the problem is the writing assignment. The solutions are also obvious.

First, there’s the difference between teaching writing and merely assigning it. I have blogged a lot about this in the last couple of years (notably here), but teaching writing means a series of assignments where students need to “show their work.” That seems extremely doable with this particular assignment, too. Sure, it would require more actual instruction and evaluation than “boilerplate feedback,” but this seems like a small class (27 students), so that doesn’t seem like that big of a deal.

Second, if you have an assignment in anything that can successfully be completed with a simple prompt into ChatGPT (as in “write a 75 word script explaining SI in everyday language”), then that’s definitely now a bad assignment. That’s the real “garbage in, garbage out” issue here.

And third, one of the things that AI has made me realize is that if an instructor has an assignment in a class– and I mean any assignment in any class– which can be successfully completed without having any experience with or connection to that instructor or the class, then that’s a bad assignment. Again, that seems extremely easy to address with the assignment that Justus and Janos describe. They’d have to make changes to the assignment and assessment, of course, but doesn’t that make more sense than trying to argue that we should completely revamp the institutional accreditation process?

Starting 2024 With All First Year Writing/All the Time!

This coming winter term (what every other university calls spring term), I’m going to be doing something I have never done in my career as a tenure-track professor. I’m going to be teaching first year composition and only first year composition.  It’ll be quite a change.

When I came to EMU in 1998, my office was right next to a very senior colleague, Bob Kraft. Bob, who retired from EMU in 2004 and who passed away in December 2022, had come back to the department to teach after having been in administrative positions for quite a while. We chatted with each other often about teaching, EMU politics, and other regular faculty chit-chat. He was a good guy; he used to call me “Steve-O!”

Bob taught the same three courses every semester: three sections of a 300-level course called Professional Writing. It was a class he was involved in developing back in the early 1980s and I believe he assigned a course pack that had the complete course in it– and I mean everything: all the readings, in-class worksheets, the assignments, rubrics, you name it. Back in those days and before a university shift to “Writing Intensive” courses within majors, this was a class that was a “restricted elective” in lots of different majors, and we offered plenty of sections of it and similar classes. (In retrospect, the shift away from courses like this one to a “writing in the disciplines” approach/philosophy was perhaps a mistake both because of the way these classes have subsequently been taught in different disciplines and because it dramatically reduced the credit hour production in the English department– but all this is a different topic).

Anyway, Bob essentially did exactly the same thing three times a semester every semester, the same discussions, the same assignments, and the same kinds of papers to grade. Nothing– or almost nothing– changed. I’m pretty sure the only prep Bob had to do was change the dates on the course schedule.

I thought “Jesus, that’d be so boring! I’d go crazy with that schedule.” I mean, he obviously liked the arrangement and I have every reason to believe it was a good class and all, but the idea of teaching the same class the same way every semester for years just gave me hives. Of course, I was quite literally in the opposite place in my career: rather than trying to make the transition into retirement, I was an almost freshly-minted PhD who was more than eager to develop and teach new classes and do new things.

For my first 20 years at EMU (give or take), my workload was a mix of advanced undergraduate writing classes, a graduate course almost every semester, and various quasi-administrative duties. I occasionally have had semesters where I taught two sections of the same course, but most semesters, I taught three different courses– or two different ones plus quasi-admin stuff. I rarely taught first year composition during the regular school year (though I taught it in the summer for extra money while our son Will was still at home) because I was needed to teach the advanced undergrad and MA-level writing classes we had. And this was all a good thing: I got to teach a lot of different courses, I got a chance to do things like help direct the first year writing program or to coordinate our major and grad program, and I had the opportunity to work closely with a lot of MA students who have gone on to successful careers of their own.

But around six or seven years ago, the department (the entire university, actually) started to change and I started to change as well. Our enrollments have fallen across the board, but especially for upper-level undergraduate and MA level courses, which means instead of a grad course every semester, I tend to teach one a school year, along with fewer advanced undergrad writing classes, and now I teach first year writing every semester. One of the things I’ve come to appreciate about this arrangement is the students I work with in first year composition are different from the students I work with on their MA projects– but they’re really not that different, in the big picture of things.

And of course, as I move closer to thinking about retirement myself, Bob’s teaching arrangement seems like a better and better idea. So, scheduling circumstances being what they are, when it became clear I’d have a chance to just teach three sections of first year comp this coming winter, I took it.

We’ll see what happens. I’m looking forward to greatly reducing my prep time, both because this is the only course I’m teaching this semester (just three sections of it) and because first year writing is something I’ve taught and thought about A LOT. I’m also looking forward to experimenting with requiring students to use ChatGPT and other AI tools to at least brainstorm and copy-edit– maybe more. What I’m not looking forward to is kind of just repeating the same thing three times in a row each day I teach. Along these lines, I am not looking forward to teaching three classes all on the same days (Tuesdays and Thursdays) and all face to face. I haven’t done that in a long time (possibly never) because I’ve either taught two and been on reassigned time, or I have taught at least a third of my load online. And I’m also worried about keeping all three of these classes in sync. If one group falls behind for some reason, it’ll mess up my plans (this is perhaps inevitable).

What I’m not as worried about is all the essays I’ll have to read and grade. I’m well aware that the biggest part of the work for anyone teaching first year writing is all the reading, commenting on, and grading of student work, and I’ve figured out a lot over the years about how to do it. Of course, I might be kidding myself with this one….