4C25: My Talk in Two Parts, and “Thoughts”

I am home from the 2025 Conference for College Composition and Communication, having left directly after my 9:30 am one-man-show panel for an uneventful drive home. I actually had a good time, but it will still probably be the last CCCCs for me. Probably.

Click this link if you want to just skip to my overall conference thoughts, but here’s the whole talk script with slides:

The first part of the original title, “Echoes of the Past,” was just my lame effort at having something to do with the conference theme, so disregard that entirely. This has nothing to do with sound. The first part of my talk is the part after the colon, “Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction,” and that is something I will get to in a moment, and that does connect to the second title, “The Importance of Paying Attention To, Rather Than Resisting, AI.” It isn’t exactly what I had proposed to talk about, but I hope it’ll make sense.

So, the first part: I have always been interested in the history of emerging technologies, especially technologies that were once new and disruptive but became naturalized and are now seen not as technology at all but just as standard practice. There are lots of reasons why I think this is interesting, one of which is what these once-new and disruptive technologies can tell us now about emerging writing technologies. History doesn’t repeat, but it does rhyme, and the past prepares us for whatever is coming next.

For example, I published an essay a long time ago about the impact of chalkboards in 19th-century education, and I’ve presented at the CCCCs about how changes in pens were disruptive and changed teaching practices. I wrote a book about MOOCs where I argued they were not new but a continuation of the long history of distance education. As a part of that project, I wrote about the history of correspondence courses in higher education, which emerged in the late 19th century. Correspondence courses led to radio and television courses, which led to the first generation of online courses, MOOCs, and online courses as we know them now, post-Covid. Sometimes, though, emerging and disruptive technologies are not adopted. Experiments in teaching by radio and television didn’t continue, and while there are still a lot of MOOCs, they don’t have much to do with higher education anymore.

The same dynamic happened with the emergence of computer technology in the teaching of writing beginning in the late ’70s and early ’80s, and that even included a discussion of Artificial Intelligence– sort of. In the course of poking around and doing some lazy database searches, I stumbled across the first article in the first issue– a newsletter at the time– of what would become the journal Computers and Composition, a short piece by Hugh Burns called “A Note on Composition and Artificial Intelligence.”

Incidentally, this is what it looks like. I have not seen the actual physical print version of this article, but the PDF looks like it might have been typed and photocopied. Anyway, this was published in 1983, a time when AI researchers were interested in the development of “expert systems,” which worked with various programming rules and logic to simulate the way humans tend to think, at least in a rudimentary way. 

Incidentally and just in case we don’t all know this, AI is not remotely new, with a lot of enthusiasm and progress in the late 1950s through the 1970s, and then with a resurgence in the 1980s with expert systems. 

In this article, Burns, who wrote one of the first dissertations about the use of computers to teach writing, discusses the relevance of the research in the field of artificial intelligence and natural language processing in the development of Computer Aided Instruction, or CAI, which is an example of the kind of “expert system” applications of the time. “I, for one,” Burns wrote, “believe composition teachers can use the emerging research in artificial intelligence to define the best features of a writer’s consciousness and to design quality computer-assisted instruction – and other writing instruction – accordingly” (4). 

If folks nowadays remember anything at all about CAI, it’s probably “drill and kill” programs for practicing things like sentence combining, grammar skills, spelling, quizzes, and so forth. But what Burns was talking about was a program called Topoi, which walked users through a series of invention questions based on Tagmemic and Aristotelian rhetoric.

Here’s what the interface looked like from a conference presentation Burns gave in 1980. As you can see, the program basically simulates the kind of conversation a student might have with a not-very-convincing human. 

There were several similar prompting, editing, and revision tools at the time. One was Writer’s Workbench, an editing program developed by Bell Labs and initially meant as a tool for technical writers at the company. It was adopted for writing instruction at a few colleges and universities, and John T. Day wrote about St. Olaf College’s use of Writer’s Workbench in Computers and Composition in 1988 in his article “Writer’s Workbench: A Useful Aid, but not a Cure-All.” As the title of Day’s article suggests, the reviews of Writer’s Workbench were mixed. But I don’t want to get into all the details Day discusses here. Instead, what I wanted to share is Day’s faux epigraph.

I think this kind of sums up a lot of the profession’s feelings about the writing technologies that started appearing in classrooms– both K-12 and in higher education– as a result of the introduction of personal computers in the early 1980s. CAI tools never really caught on, but plenty of other software did, most notably word processing, and then networked computers, this new thing “the internet,” and then the World Wide Web. All of these technologies were surprisingly polarizing among English teachers at the time. And as an English major in the mid-1980s who also became interested in personal computers and then the internet and then the web, I was “an enthusiast.”

From around the late 1970s and continuing well into the mid-1990s, there were hundreds of articles and presentations– pieces like Burns’ and Day’s– in major publications in composition and English studies about the enthusiasms and skepticisms of using computers for teaching and practicing writing. Because it was all so new and most folks in English studies knew even less about computers than they do now, a lot of that scholarship strikes me now as simplistic. Much of what appeared in Computers and Composition in its first few years was teaching anecdotes, as in “I had students use word processing in my class and this is what happened.” Many articles were trying to compare writing with and without computers, writing with a word processor or by hand, how students of different types (elementary/secondary, basic writers, writers with physical disabilities, skilled writers, etc.) were harmed or helped by computers, and so forth.

But along with this kind of “should you/shouldn’t you write with computers” theme, a lot of the scholarship in this era raised questions that have continued with every other emerging and contentious technology associated with writing, including, of course, AI: questions about authorship, the costs (because personal computers were expensive), the difficulty of learning and also teaching the software, cheating, originality, “humanness” and so on. This scholarship was happening at a time when using computers to practice or teach writing was still perceived as a choice– that is, it was possible to refuse and reject computers. I am assuming that the comparison I’m making here between this scholarship and the discussions now about AI is obvious.

So I think it’s worth re-examining some of this work where writers were expressing enthusiasms, skepticisms, and concerns about word processing software and personal computers and comparing it to the moment we are in with AI in the form of ChatGPT, Gemini, Claude, and so forth. What will scholars 30 years from now think about the scholarship and discourse around Artificial Intelligence that is in the air currently? 

Anyway, that was going to be the whole talk from me and with a lot more detail, but that project for me is on hold, at least for now. Instead, I want to pivot to the second part of my talk, “The Importance of Paying Attention To, Rather Than Resisting, AI.” 

I say “Rather Than Resisting” or Refusing AI in reference to Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes’ website “Refusing Generative AI in Writing Studies,” but also in reference to articles such as Melanie Dusseau’s “Burn It Down: A License for AI Resistance,” which was a column in Inside Higher Ed in November 2024, and other calls to refuse/resist using AI. “The Importance of Paying Attention To” is my reference to Cynthia Selfe’s “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which was first presented as her CCCC chair’s address in 1998 (published in 1999) and which was also expanded as a book called Technology and Literacy in the Twenty-first Century.

If Hugh Burns’ 1983 commentary in the first issue of Computers and Composition serves for me as the beginning of this not-so-long-ago history, when personal computers were not something everyone had or used and when they were still contentious and emerging tools for writing instruction and practice, then Selfe’s CCCCs address/article/book represents the point where computers (along with all things internet) were no longer optional for writing instruction and practice. And it was time for English teachers to wake up and pay attention to that.

And before I get too far, I agree with eight out of the ten points on the “Refusing Generative AI in Writing Studies” website, broadly speaking. I think these are points that most people in the field nowadays would agree with, actually. 

But here’s where I disagree. I don’t want to go into this today, but the environmental impact of the proliferation of data centers is not limited to AI. And when it comes to this last bullet point, no, I don’t think “refusal” or resistance are principled or pragmatic responses to AI. Instead, I think our field needs to engage with and pay attention to AI.

Now, some might argue that I’m taking the call to refuse/resist AI too literally and that the kind of engagement I’m advocating is not at odds with refusal.

I disagree. Word choices and their definitions matter. Refusing means being unwilling to do something. Paying attention means to listen to and to think about something. For much the same reasons Selfe spoke about 27 years ago, there are perils to not paying attention to technology in writing classrooms. I believe our field needs to pay attention to AI by researching it, teaching with it, using it in our own writing, goofing around with it, and encouraging our students to do the same. And to be clear: studying AI is not the same as endorsing AI.

Selfe’s opening paragraph is a kidding/not kidding assessment of the CCCCs community’s feelings about technology and the community’s refusal to engage with it. She says many members of the CCCCs over the years have shared some of the best ideas we have from any discipline about teaching writing, but it’s a community that has also been largely uninterested in the focus of Selfe’s work, the use of computers to teach composition. She said she knew bringing up the topic in a keynote at the CCCCs was “guaranteed to inspire glazed eyes and complete indifference in that portion of the CCCC membership which does not immediately sink into snooze mode.” She said people in the CCCCs community saw computers as disconnected from their humanitarian concerns and as a distraction from the real work of teaching literacy.

It was still possible in a lot of English teachers’ minds to separate computers from the teaching of writing– at least in the sense that most CCCCs members did not think about the implications of computers in their classrooms. Selfe says “I think [this belief] informs our actions within our home departments, where we generally continue to allocate the responsibility of technology decisions … to a single faculty or staff member who doesn’t mind wrestling with computers or the thorny, unpleasant issues that can be associated with their use.”

Let me stop for a moment to note that in 1998, I was there. I attended and presented at that CCCCs in Chicago, and while I can’t recall if I saw Selfe’s address in person (I think I did), I definitely remember the times.

After finishing my PhD in 1996, I was hired by Southern Oregon University as their English department’s first “computers and writing” specialist. At the 1998 convention, I met up with my future colleagues at EMU because I had recently accepted the position I currently have, where I was once again hired as a computer and writing specialist. At both SOU and EMU, I had colleagues– you will not be surprised to learn these tended to be senior colleagues– who questioned why there was any need to add someone like me to the faculty. In some ways, it was similar to the complaints I’ve seen on social media about faculty searches involving AI specialists in writing studies and related fields.

Anyway, Selfe argues that in hiring specialists, English departments outsourced responsibility for computer technology, relieving the rest of the faculty of having anything to do with it. It enabled a continued belief that computers are simply “tool[s] that individual faculty members can use or ignore in their classrooms as they choose, but also one that the profession, as a collective whole–and with just a few notable exceptions–need not address too systematically.” Instead, she argued that what people in our profession needed to do was to pay attention to these issues, even if we really would rather refuse to do so: “I believe composition studies faculty have a much larger and more complicated obligation to fulfill–that of trying to understand and make sense of, to pay attention to, how technology is now inextricably linked to literacy and literacy education in this country. As a part of this obligation, I suggest that we have some rather unpleasant facts to face about our own professional behavior and involvement.” She goes on a couple of paragraphs later to say in all italics “As composition teachers, deciding whether or not to use technology in our classes is simply not the point–we have to pay attention to technology.”

Again, I’m guessing the connection to Selfe’s call then to pay attention to computer technology and my call now to pay attention to AI is pretty obvious.

The specific case example Selfe discusses in detail in her address is a Clinton-Gore era report called Getting America’s Children Ready for the Twenty-First Century, which was about that administration’s efforts to promote technological literacy in education, particularly in K-12 schools. The initiative spent millions on computer equipment, an amount of money that dwarfed the spending on literacy programs. As I recall those times, the main problem with this initiative was there was lots of money spent to put personal computers into schools, but very little money was spent on how to use the computers in classrooms. Selfe said, “Moreover, in a curious way, neither the CCCC, nor the NCTE, the MLA, nor the IRA–as far as I can tell–have ever published a single word about our own professional stance on this particular nationwide technology project: not one statement about how we think such literacy monies should be spent in English composition programs; not one statement about what kinds of literacy and technology efforts should be funded in connection with this project or how excellence should be gauged in these efforts; not one statement about the serious need for professional development and support for teachers that must be addressed within context of this particular national literacy project.”

Selfe closes with a call for action and a need for our field and profession to recognize technology as part of the important work we all do around literacy. I’ve cherry-picked a couple of quotes here to share at the end. Again, by “technology,” Selfe more or less meant PCs, networked computers, and the web– tools we all now take for granted. But also again, every single one of these calls applies to AI as well.

Now, I think the CCCCs community and the discipline as a whole have moved in the direction Selfe was urging in her CCCCs address. Unlike the way things were in the 1990s, I think there is widespread interest in the CCCC community in studying the connections between technologies and literacy. Unlike then, both MLA and CCCCs (and presumably other parts of NCTE) have been engaged and paying attention. There is a joint CCCC-MLA task force that has issued statements and guidance on AI literacy, along with a series of working papers, all things Selfe was calling for back then. Judging from this year’s program and the few presentations I have been able to attend, it seems like a lot more of us are interested in engaging and paying attention to AI rather than refusing it. 

At the same time, there is an echo–okay, one sound reference– of the scholarship in the early era of personal computers. A lot of the scholarship about AI now is based on teachers’ experiences of experimenting with it in their own classes. And we’re still revisiting a lot of the same questions regarding the extent to which we should be teaching students how to use AI, the issues of authenticity and humanness, of cheating, and so forth. History doesn’t repeat, but it does rhyme.

Let me close by saying I have no idea where we’re going to end up with AI. This fall, I’m planning on teaching a special topics course called Writing, Rhetoric, and AI, and while I have some ideas about what we’re going to do, I’m hesitant about committing too much to a plan now since all of this could be entirely different in a few months. There’s still the possibility of generative AI becoming artificial general intelligence and that might have a dramatic impact on all of our careers and beyond. Trump and shadow president Elon Musk would like nothing better than to replace most people who work for the federal government with this sort of AI. And of course, there is also the existential albeit science fiction-esque possibility of an AI more intelligent than humans enslaving us.

But at least I think that we’re doing a much better job of paying attention to technology nowadays.


“Thoughts”

The first time I attended and presented at the CCCCs was in 1995. It was in Washington, D.C., and I gave a talk that was about my dissertation proposal. I don’t remember all the details, but I probably drove with other grad students from Bowling Green and split a hotel room, maybe with Bill Hart-Davidson or Mick Doherty or someone like that. I remember going to the big publisher party sponsored by Bedford-St. Martin’s (or whatever they were called then) which was held that year at the National Press Club, where they filled us with free cocktails and enough heavy hors d’oeuvres to serve as a meal.

For me, the event has been going downhill for a while. The last time I went to the CCCCs in person was in 2019– pre-Covid, of course– in Pittsburgh. I was on a panel of three scheduled for 8:30 am Friday morning. One of the people on the panel was a no-show, and the other panelist was Alex Reid; one person showed up to see what we had to say– though at least that one person was John Gallagher. Alex and I went out to breakfast, and I kind of wandered around the conference after that, uninterested in anything on the program. I was bored and bummed out. I had driven, so I packed up and left Friday night, a day earlier than I planned.

And don’t even get me started on how badly the CCCCs did at holding online versions of the conference during Covid.

So I was feeling pretty “done” with the whole thing. But I decided to put in an individual proposal this year because I was hoping it would be the beginning of another project to justify a sabbatical next year, and I thought going to one more CCCCs 30 years after my first one rounded things out well. Plus it was a chance to visit Baltimore and to take a solo road trip.

This year, the CCCCs/NCTE leadership changed the format for individual proposals, something I didn’t figure out until after I was accepted. Instead of creating panels made up of three or four individual proposals– which is what the CCCCs had always done before, and what every other academic conference I have ever attended does with individual proposals– they decided that individuals would get a 30-minute solo session. To make matters even worse, my time slot was 9:30 am on Saturday, which is the day most people are traveling back home.

Oh, also: my sabbatical/research release time proposal got turned down, meaning my motivation for doing this work at all has dropped off considerably. I thought about bailing out right up to the morning I left. But I decided to go through with it because I was also going to Richmond to visit my friend Dennis, I still wanted to see Baltimore, and I still liked the idea of going one more time and 30 years later.

Remarkably, I had a very good time.

It wasn’t like what I think of as “the good old days,” of course.  I guess there were some publisher parties, but I missed out on those. I did run into people who I know and had some nice chats in the hallways of the enormous Baltimore convention center, but I mostly kept to myself, which was actually kind of nice. My “conference day” was Friday and I saw a couple of okay to pretty good panels about AI things– everything seemed to be about AI this year. I got a chance to look around the Inner Harbor on a cold and rainy day, and I got in half-price to the National Aquarium. And amazingly, I actually had a pretty decent-sized crowd (for me) at my Saturday morning talk. Honestly, I haven’t had as good of a CCCCs experience in years.

But now I’m done– probably.

I’m still annoyed with (IMO) the many, many failings of the organization, and while I did have a good solo presenting experience, I still would have preferred being on a panel with others. But honestly, the main reason I’m done with the CCCCs (and other conferences) is not because of the conference but because of me. This conference made it very clear: essentially, I’ve aged out.

When I was a grad student/early career professor, conferences were a big deal. I learned a lot, I was able to do a lot of professional/social networking, and I got my start as a scholar. But at this point, where I am as promoted and as tenured as I’m ever going to be and where I’m not nearly as interested in furthering my career as I am retiring from it, I don’t get much out of all that anymore. And all of the people I used to meet up with and/or room with 10 or so years ago have quit going to the CCCCs because they became administrators, because they retired or died, or because they too just decided it was no longer necessary or worth it.

So that’s it. Probably. I have been saying for a while now that I want to shift from writing/reading/thinking about academic things to other non-academic things. I started my academic career as a fiction writer in an MFA program, and I’ve thought for a while now about returning to that. I’ve had a bit of luck publishing commentaries, and of course, I’ll keep blogging.

Then again, I feel like I got a good response to my presentation, so maybe I will stay with that project and try to apply for a sabbatical again. And after all, the CCCCs is going to be in Cleveland next year and Milwaukee the year after that….

New School Year Resolutions

So, kind of in the form of resolutions, here’s what I’m hoping to accomplish this school year— mostly with work stuff, with a few life things on the list too.

Wade Deeper into AI in My Teaching— Much Deeper

This fall, I’m going to be teaching two sections of the required first year writing course (aka “freshman comp”), and a junior/senior level course called “Digital Writing.”

For first year writing, I have never let students do research on whatever they wanted. Instead, I have always had a common research theme; for example, a few years ago, the theme was “social media,” meaning students’ semester-long research project had to have something to do with social media. This semester, the theme for my sections of first year writing is going to be “AI and your future career goals.”

The Digital Writing course is one I helped develop quite a while ago and it has gone through various evolutions. It’s a course that explores literacy as a technology, and it is also about the relationships between “words in a row” writing and multimedia writing. I have always started the course with readings from Walter Ong, Dennis Baron, a selection from Plato’s Phaedrus (where Socrates talks about the nature of writing), and similar kinds of texts, and also with an assignment where students have to “invent” a way of writing without any of the conventional tools. Maybe I’ll post more about that later here. In previous versions of the course, the next two projects were something more multimedia-ish: podcast-like audio presentations, short videos, comics, memes, mashups, etc. But this semester, the second two projects are both going to be deep dives into AI— and I’m still trying to figure out what that means. In that class (and among other readings), I’m assigning Ethan Mollick’s Co-Intelligence: Living and Working with AI. I’m sure I’ll write more about all of that later too.

I don’t know how this is going to go, and I think it is quite possible that it will turn out poorly. I think it’ll be interesting though.

Try to be at least a little more “involved”

Being in my 36th year of teaching at the college level means that I’m getting closer to retiring— or at least officially retiring. I don’t think I can afford to retire for another seven years (when I’ll be 65), and I don’t think I’ll want to work much past 70 (12 years from now). Unofficially though, as the joke goes, I retired from service work six years ago.

Just service, mind you: I’m not “deadwood” because I’m still publishing and presenting (at least some), and I’m still trying to innovate with my teaching. But I’ve been unofficially retired from service and committee work in my department since about 2018, mainly because I spent 13 of my first 20 years here doing A LOT of service. I had a couple of different coordinator positions, I chaired a number of searches, and I had been on just about every elected committee at one time or another. I was burnt out, I wanted to get out of the way for younger faculty to step up, and I think my colleagues were tired of me being involved in everything. So for the last six years, I’ve been a lot more checked out. I meet with my fellow writing faculty about things, and I’ll go to a department meeting if there’s something important on the agenda, but that’s about it.

This year, I think I want to make more of an effort to be a little more involved with happenings on campus, I guess for two reasons. First, after six years away, I’m just ready to be back, at least a bit. After all, I did a lot of service stuff for my first 20 years because I liked it and I was good at it. Second, EMU is going through some interestingly difficult times as an institution. Like most of the other regional universities in the state and a lot of similar places in the upper midwest and northeast, we’ve had falling enrollments for a while, and it seems to have gotten worse in the last two years. Falling enrollments have resulted in dramatic budget cuts and declining numbers of faculty and staff. At the same time, the administration is trying to save some money with some dubious outsourcing decisions.

Just to add to the drama a bit: we’re going to have to have some serious conversations this year about the future of most of my department’s graduate programs; the dean has announced that she is taking an early buyout and is leaving at the end of the school year; and the president announced a while ago that he will be retiring at the end of his contract in 2026. Which, when I think about it, might be when the faculty union will be negotiating a new contract.

I could go on, but you get the idea. There’s too much going on around here now to be checked out.

I’m not quite sure what “trying to be at least a little more involved” means, and I’m not interested in taking on any huge service jobs. I’m not planning on running to be on the executive committee of the faculty union, for example. But I suppose it means at least going to more informational meetings about things on campus.

(I should note that I have already failed on this resolution: I attended a kicking-off-the-semester department meeting this morning, but then decided to blow off the College of Arts and Sciences meeting in the afternoon.)

Put together my next (maybe last?) sabbatical/research release project proposal

I have a few ideas, mostly about AI and teaching (not surprisingly). As was the case with my work on MOOCs and before that the emergence of different writing technologies and pedagogy, I’m interested to see what kinds of tools and technologies from the past were disruptive in ways similar to AI. That’s kind of vague, both on purpose and because that’s where I’m at in the process.

Anyway, sabbaticals and semester long research releases are competitive, and I’m eligible to submit a proposal in January 2025 for a semester off from teaching to research in the 2025-26 school year.

Keep figuring out Substack

The look and feel of this interface versus WordPress is intriguing, and while there are features I wish this had, there’s something to be said for the simplicity and uniformity of Substack— at least I think so far. I don’t think I’ll be able to rely on revenue from newsletter subscriptions anytime soon, and that’s not really my goal. On the other hand, if I could convince 1000 people to give me $100 a year for stuff I write here…

Keep losing weight with Zepbound

I started Zepbound in the first week of January 2024 and, as of today, I’ve lost about 35 pounds. It’s not all the result of the drugs, but it’s— well, yes, it is all the result of the drugs. Anyway, my resolution here is to keep doing what I’m doing and (ideally) lose another 25-30 pounds before the end of the semester.

Well, sort of….

The 2024-25 school year is my 36th year of teaching college (counting my time as a grad student and a part-timer), my 26th year as a tenure-track professor at EMU, and my 17th as a full professor. So it’s probably no wonder that when I think of the “new year,” I think of the new school year at least as much as I think of January. On the old blog, I usually wrote a post around this time of year, reflecting on the school year that was and the year that was likely ahead of me. No reason to stop doing that now, right?

So, kind of in the form of resolutions, here’s what I’m hoping to accomplish this school year— mostly with work stuff, with a few life things on the list too.

Wade Deeper into AI in My Teaching— Much Deeper

This fall, I’m going to be teaching two sections of the required first year writing course (aka “freshman comp”), and a junior/senior level course called “Digital Writing.”

For first year writing, I have never let students do research on whatever they wanted. Instead, I have always had a common research theme; for example, a few years ago, the theme was “social media,” meaning students’ semester-long research project had to have something to do with social media. This semester, the theme for my sections of first year writing is going to be “AI and your future career goals.”

The Digital Writing course is one I helped develop quite a while ago and it has gone through various evolutions. It’s a course that explores literacy as a technology, and it is also about the relationships between “words in a row” writing and multimedia writing. I have always started the course with readings from Walter Ong, Dennis Baron, a selection from Plato’s Phaedrus (where Socrates talks about the nature of writing), and similar kinds of texts, and also with an assignment where students have to “invent” a way of writing without any of the conventional tools. Maybe I’ll post more about that later here. In previous versions the course, the next two projects were something more multimedia-ish: podcast-like audio presentations, short videos, comics, memes, mashups, etc. But this semester, the second two projects are both going to be deep dives into AI— and I’m still trying to figure out what that means. In that class (and among other readings), I’m assigning Ethan Mollick’s Co-Intelligence: Living and Working with AI. I’m sure I’ll write more about all of that later too.

I don’t know how this is going to go, and I think it is quite possible that it will turn out poorly. I think it’ll be interesting though.

Try to be at least a little more “involved”

Being in my 36th year of teaching at the college level means that I’m getting closer to retiring— or at least officially retiring. I don’t think I can afford to retire for another seven years (when I’ll be 65), and I don’t think I’ll want to work much past 70 (12 years from now). Unofficially though, as the joke goes, I retired from service work six years ago.

Just service, mind you: I’m not “deadwood” because I’m still publishing and presenting (at least some), and I’m still trying to innovate with my teaching. But I’ve been unofficially retired from service and committee work in my department since about 2018, mainly because I spent 13 of my first 20 years here doing A LOT of service. I had a couple of different coordinator positions, I chaired a number of searches, and I had been on just about every elected committee at one time or another. I was burnt out, I wanted to get out of the way for younger faculty to step up, and I think my colleagues were tired of me being involved in everything. So for the last six years, I’ve been a lot more checked out. I meet with my fellow writing faculty about things, and I’ll go to a department meeting if there’s something important on the agenda, but that’s about it.

This year, I think I want to make more of an effort to be a little more involved with happenings on campus, I guess for two reasons. First, after six years away, I’m just ready to be back, at least a bit. After all, I did a lot of service stuff for my first 20 years because I liked it and I was good at it. Second, EMU is going through some interestingly difficult times as an institution. Like most of the other regional universities in the state and a lot of similar places in the upper midwest and northeast, we’ve had falling enrollments for a while, and it seems to have gotten worse in the last two years. Falling enrollments have resulted in dramatic budget cuts and shrinking numbers of faculty and staff. At the same time, the administration has been trying to save money with some dubious outsourcing decisions.

Just to add to the drama a bit: we’re going to have to have some serious conversations this year about the future of most of my department’s graduate programs; the dean has announced that she is taking an early buyout and is leaving at the end of the school year; and the president announced a while ago that he will be retiring at the end of his contract in 2026. Which, when I think about it, might be when the faculty union will be negotiating a new contract.

I could go on, but you get the idea. There’s too much going on around here now to be checked out.

I’m not quite sure what “trying to be at least a little more involved” means, and I’m not interested in taking on any huge service jobs. I’m not planning on running to be on the executive committee of the faculty union, for example. But I suppose it means at least going to more informational meetings about things on campus.

(I should note that I have already failed on this resolution: I attended a kicking-off-the-semester department meeting this morning, but then decided to blow off the College of Arts and Sciences meeting in the afternoon).

Put together my next (maybe last?) sabbatical/research release project proposal

I have a few ideas, mostly about AI and teaching (not surprisingly). As was the case with my work on MOOCs and, before that, the emergence of different writing technologies and pedagogy, I’m interested in what kinds of tools and technologies from the past were disruptive in ways similar to AI. That’s kind of vague, both on purpose and because that’s where I’m at in the process.

Anyway, sabbaticals and semester-long research releases are competitive, and I’m eligible to submit a proposal in January 2025 for a semester off from teaching to do research in the 2025-26 school year.

Keep figuring out Substack

The look and feel of this interface versus WordPress is intriguing, and while there are features I wish this had, there’s something to be said for the simplicity and uniformity of Substack— at least I think so far. I don’t think I’ll be able to rely on revenue from newsletter subscriptions anytime soon, and that’s not really my goal. On the other hand, if I could convince 1000 people to give me $100 a year for stuff I write here…

Keep losing weight with Zepbound

I started Zepbound in the first week of January 2024 and, as of today, I’ve lost about 35 pounds. It’s not all the result of the drugs, but it’s— well, yes, it is all the result of the drugs. Anyway, my resolution here is to keep doing what I’m doing and (ideally) lose another 25-30 pounds before the end of the semester.

My Talk About AI at Hope College (or why I still post things on a blog)

I gave a talk at Hope College last week about AI. Here’s a link to my slides, which also has all my notes and links. Right after I got invited to do this in January, I made it clear that I am far from an expert with AI. I’m just someone who had an AI writing assignment last fall (which was mostly based on previous teaching experiments by others), who has done a lot of reading and talking about it on Facebook/Twitter, and who blogged about it in December. So as I promised then, my angle was to stay in my lane and focus on how AI might impact the teaching of writing.

I think the talk went reasonably well. Over the last few months, I’ve watched parts of a couple of different ChatGPT/AI presentations via Zoom or as previously recorded, and my own take-away from them all has been a mix of “yep, I know that and I agree with you” and “oh, I didn’t know that, that’s cool.” That’s what this felt like to me: I talked about a lot of things that most of the folks attending knew about and agreed with, along with a few things that were new to them. And vice versa: I learned a lot too. It probably would have been a little more contentious had this taken place back when the freakout over ChatGPT was in full force. Maybe there still are some folks there who are freaked out by AI and cheating who didn’t show up. Instead, most of the people there had played around with the software and realized that it’s not quite the “cheating machine” being overhyped in the media. So it was a good conversation.

But that’s not really what I wanted to write about right now. Rather, I just wanted to point out that this is why I continue to post here, on a blog/this site, which I have maintained now for almost 20 years. Every once in a while, something I post “lands,” so to speak.

So for example: I posted about teaching a writing assignment involving AI at about the same time the MSM was freaking out about ChatGPT. Some folks at Hope read that post (which has now been viewed over 3000 times), and they invited me to give this talk. Back in fall 2020, I blogged about how weird I thought it was that all of these people were going to teach online synchronously over Zoom. Someone involved with the Media & Learning Association, which is a European/Belgian organization, read it and invited me to write a short article based on that post, and they also invited me to be on a Zoom panel that was part of a conference they were having. And of course all of this was the beginning of the research and writing I’ve been doing about teaching online during Covid.

Back in April 2020, I wrote a post “No One Should Fail a Class Because of a Fucking Pandemic;” so far, it’s gotten over 10,000 views, it’s been quoted in a variety of places, and it was why I was interviewed by someone at CHE in the fall. (BTW, I think I’m going to write an update to that post, which will be about why it’s time to return to some pre-Covid requirements). I started blogging about MOOCs in 2012, which led to a short article in College Composition and Communication and numerous other articles and presentations, a few invited speaking gigs (including TWO conferences sponsored by the University of Naples on the Isle of Capri), an edited collection, and a book.

Now, most of the people I know in the field who once blogged have stopped (or mostly stopped) for one reason or another. I certainly do not post here nearly as often as I did before the arrival of Facebook and Twitter, and it makes sense for people to move on to other things. I’ve thought about giving it up, and there have been times where I didn’t post anything for months. Even the extremely prolific and smart local blogger Mark Maynard gave it all up, I suspect because of a combination of burn-out, Trump being voted out, and the additional work/responsibility of the excellent restaurant he co-owns/operates, Bellflower.

Plus if you do a search for “academic blogging is bad,” you’ll find all sorts of warnings about the dangers of it– all back in the day, of course. Deborah Brandt seemed to think it was mostly a bad idea (2014), and The Guardian suggested it was too risky (2013), especially for grad students posting work in progress. There were lots of warnings like this back then. None of them ever made any sense to me, though I didn’t start blogging until after I was on the tenure-track here. And no one at EMU has ever said anything negative to me about doing this, and that includes administrators even back in the old days of EMUTalk.

Anyway, I guess I’m just reflecting/musing now about why this very old-timey practice from the olde days of the Intertubes still matters, at least to me. About 95% of the posts I’ve written are barely read or noticed at all, and that’s fine. But every once in a while, I’ll post something, promote it a bit on social media, and it catches on. And then sometimes, a post becomes something else– an invited talk, a conference presentation, an article. So yeah, it’s still worth it.

Is AI Going to be “Something” or “Everything?”

Way back in January, I applied for release time from teaching for one semester next year– either a sabbatical or what’s called here a “faculty research fellowship” (FRF)– in order to continue the research I’ve been doing about teaching online during Covid. This is work I’ve been doing since fall 2020, including a Zoom talk at a conference in Europe and a survey I ran for about six months, and from that survey, I was able to recruit and interview a bunch of faculty about their experiences. I’ve gotten a lot out of this work already: a couple of conference presentations (albeit in the kind of useless “online/on-demand” format), a website (which I had to code myself!), an article, and, just last year, I was on one of those FRFs.

Well, a couple weeks ago, I found out that I will not be on sabbatical or FRF next year. My proposal, which was about seeking time to code and analyze all of the interview transcripts I collected last year, got turned down. I am not complaining about that: these awards are competitive, and I’ve been fortunate enough to receive several of these before, including one for this research. But not getting release time is making me rethink how much I want to continue this work, or if it is time for something else.

I think studying how Covid impacted faculty attitudes about online courses is definitely worth doing. But it is also looking backwards, and it feels a bit like an autopsy or one of those commissioned reports. And let’s be honest: how many of us want to think deeply about what happened during the pandemic, recalling the mistakes that everyone already knows they made? A couple of years after the worst of it, I think we all have a better understanding now of why people wanted to forget the 1918 pandemic.

It’s 20/20 hindsight, but I should have put together a sabbatical/research leave proposal about AI. With good reason, the committee that decides on these release time awards tends to favor proposals that are for things that are “cutting edge.” They also like to fund releases for faculty who have book contracts who are finishing things up, which is why I have been lucky enough to secure these awards both at the beginning and end of my MOOC research.

I’ve obviously been blogging about AI a lot lately, and I have casually started amassing quite a number of links to news stories and other resources related to Artificial Intelligence in general, ChatGPT and OpenAI in particular. As I type this entry in April 2023, I already have over 150 different links to things without even trying– I mean, this is all stuff that just shows up in my regular diet of social media and news. I even have a small invited speaking gig about writing and AI, which came about because of a blog post I wrote back in December— more on that in a future post, I’m sure.

But when it comes to me pursuing AI as my next “something” to research, I feel like I have two problems. First, it might already be too late for me to catch up. Sure, I’ve been getting some attention by blogging about it, and I had a “writing with GPT-3” assignment in a class I taught last fall, which I guess kind of puts me at least closer to being current with this stuff in terms of writing studies. But I also know there are already folks in the field (and I know some of these people quite well) who have been working on this for years longer than me.

Plus a ton of folks are clearly rushing into AI research at full speed. Just the other day, the CWCON at Davis organizers sent around a draft of the program for the conference in June. The Call For Proposals they released last summer describes the theme of this year’s event, “hybrid practices of engagement and equity.” I skimmed the program to get an idea of the overall schedule and some of what people were going to talk about, and there were a lot of mentions of ChatGPT and AI, which makes me think a lot of people are not going to be talking about the CFP theme at all.

This brings me to the bigger problem I see with researching and writing about AI: it looks to me like this stuff is moving very quickly from being “something” to “everything.” Here’s what I mean:

A research agenda/focus needs to be “something” that has some boundaries. MOOCs were a good example of this. MOOCs were definitely “hot” from around 2012 to 2015 or so, and there was a moment back then when folks in comp/rhet thought we were all going to be dealing with MOOCs for first year writing. But even then, MOOCs were just a “something”  in the sense that you could be a perfectly successful writing studies scholar (even someone specializing in writing and technology) and completely ignore MOOCs.

Right now, AI is a myriad of “somethings,” but this is moving very quickly toward “everything.” It feels to me like very soon (five years, tops), anyone who wants to do scholarship in writing studies is going to have to engage with AI. Successful (and even mediocre) scholars in writing studies (especially those specializing in writing and technology) are not going to be able to ignore AI.

This all reminds me a bit of what happened with word processing technology. Yes, this really was something people studied and debated way back when. In the 1980s and early 1990s, there were hundreds of articles and presentations about whether or not to use word processing to teach writing— for example, “The Word Processor as an Instructional Tool: A Meta-Analysis of Word Processing in Writing Instruction” by Robert L. Bangert-Drowns, or “The Effects of Word Processing on Students’ Writing Quality and Revision Strategies” by Ronald D. Owston, Sharon Murphy, and Herbert H. Wideman. These articles were both published in the early 1990s in major journals, and both try to answer the question of which approach is “better.” (By the way, most but far from all of these studies concluded that word processing is better in the sense that it helped students generate more text and revise more frequently. It’s also worth mentioning that a lot of this research overlaps with studies about the role of spell-checking and grammar-checking in writing pedagogy).

Yet in my recollection of those times, this comparison between word processing and writing by hand was rendered irrelevant because everyone– teachers, students, professional writers (at least all but the most stubborn, as Wendell Berry declares in his now cringy and hopelessly dated short essay “Why I Am not Going to Buy a Computer”)– switched to word processing software on computers to write. When I started teaching as a grad student in 1988, I required students to hand in typed papers, and I strongly encouraged them to write at least one of their essays with a word processing program. Some students complained because they had never been asked to type anything in high school. By the time I started my PhD program five years later in 1993, students all knew they needed to type their essays on a computer, generally with MS Word.

Was this shift a result of some research consensus that using a computer to type texts was better than writing texts out by hand? Not really, and obviously, there are still lots of reasons why people still write some things by hand– a lot of personal writing (poems, diaries, stories, that kind of thing) and a lot of note-taking. No, everyone switched because everyone realized word processing made writing easier (but not necessarily better) in lots and lots of different ways and that was that. Even in the midst of this panicky moment about plagiarism and AI, I have yet to read anyone seriously suggest that we make our students give up Word or Google Docs and require them to turn in handwritten assignments. So, as a researchable “something,” word processing disappeared because (of course) everyone everywhere who writes obviously uses some version of word processing, which means the issue is settled.

One of the other reasons why I’m using word processing scholarship as my example here is because both Microsoft and Google have made it clear that they plan on integrating their versions of AI into their suites of software– and that would include MS Word and Google Docs. This could be rolling out just in time for the start of the fall 2023 semester, maybe earlier. Assuming this is the case, people who teach any kind of writing at any kind of level are not going to have time to debate if AI tools will be “good” or “bad,” and we’re not going to be able to study any sorts of best practices either. This stuff is just going to be a part of the everything, and for better or worse, that means the issue will soon be settled.

And honestly, I think the “everything” of AI is going to impact, well, everything. It feels to me a lot like when “the internet” (particularly with the arrival of web browsers like Mosaic in 1993) became everything. I think the shift to AI is going to be that big, and it’s going to have as big of an impact on every aspect of our professional and technical lives– certainly every aspect that involves computers.

Who the hell knows how this is all going to turn out, but when it comes to what this means for the teaching of writing, as I’ve said before, I’m optimistic. Just as the field adjusted to word processing (and spell-checkers and grammar-checkers, and really just the whole firehose of text from the internet), I think we’ll be able to adjust to this new something-to-everything too.

As far as my scholarship goes though: for reasons, I won’t be eligible for another release from teaching until the 2025-26 school year. I’m sure I’ll keep blogging about AI and related issues, and maybe that will turn into a scholarly project. Or maybe we’ll all be on to something entirely different in three years….


AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is my students and I have spent the last month or so noodling around with software and reading about both the potentials and dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involves trial and error, and I thought it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And we still had to edit the different pieces together, write transitions between AI-generated chunks of text, and so forth.

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool.  These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true about  Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”
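To make Vincent’s “predict what words should come next” point a little more concrete, here is a toy sketch in Python. This is emphatically not how GPT-3 works internally– a real LLM uses a neural network with billions of parameters, not raw word counts– but a simple bigram model like this one illustrates the basic idea of predicting the next word purely from statistical regularities in training text, with no knowledge of whether the result is true:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word in a corpus."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# A tiny (made-up) training corpus:
corpus = "the essay is dead the essay is fine the essay is dead"
model = train_bigrams(corpus)
print(predict_next(model, "is"))  # "dead" – it follows "is" most often
```

The model will happily continue any sentence with whatever is statistically likely, whether or not the continuation is accurate– which is exactly the “fluent bullshit” problem Vincent describes, just at a vastly smaller scale.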

I think this limitation (along with the limitation that GPT-3 and ChatGPT are not capable of searching the internet) makes using GPT-3 as a plagiarism tool in any kind of research writing class kind of a deal-breaker. It certainly would not get students far in most sections of freshman comp where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts by making writing assignments that are specific and that require an investment in things like prewriting and a series of assignments and activities that are “scaffolding” for a larger writing assignment. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Poorly designed writing assignments read a lot like the kinds of prompts you enter into GPT-3. The results are definitely impressive, but I don’t think the tool is quite useful enough to produce work a would-be cheater can pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000-word requirement, they both kind of read like… well, like a robot wrote them. I think most instructors who received one of these essays from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well-researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing, and it changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have only spent so much time experimenting with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” payment plan, I might learn more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

A lot of what Leonhardt said in ‘Not Good for Learning’ is just wrong

I usually agree with David Leonhardt’s analysis in his New York Times newsletter “The Morning” because I think he does a good job of pointing out how both the left and the right have certain beliefs about issues– Covid in particular for the last couple years, of course– that are sometimes at odds with the evidence. But I have to say that this morning’s newsletter and the section “Not Good For Learning” ticks me off.

While just about every K-12 school went online when Covid first hit in spring 2020, a lot of schools/districts resumed in-person classes in fall 2020, and a lot did not. Leonhardt said:

These differences created a huge experiment, testing how well remote learning worked during the pandemic. Academic researchers have since been studying the subject, and they have come to a consistent conclusion: Remote learning was a failure.

Now, perhaps I’m overreacting to this passage because of my research about teaching online at the college-level, but the key issue here is he’s talking about K-12 schools that had never done anything close to online/remote instruction ever before. He is not talking about post-secondary education at all, which is where the bulk of remote learning has worked just fine for 125+ years. Maybe that’s a distinction that most readers will understand anyway, but I kind of doubt it, and not bringing that up at all is inaccurate and just sloppy.

Obviously, remote learning in the vast majority of K-12 schools went poorly during Covid and in completely predictable ways. Few of these teachers had any experience or training to teach online, and few of these school districts had the kinds of technologies and tools (like Canvas and Blackboard and other LMSes) to support these courses. This has been a challenge at the college level too, but I think a lot more college teachers at various levels and various types of institutions had at least some pre-Covid experience teaching online, most colleges and universities have more tech support, and a lot (most?) of college teachers were already making use of an LMS tool and a lot more electronic tools for essays and tests (as opposed to paper) in their classes.

The students are also obviously different. When students in college take classes online, it’s a given that they will have the basic technology of a laptop and easy access to the internet. It’s also fairly clear from the research (and I’ve seen this in my own experiences teaching online) that the students who do best in these formats are more mature and more self-disciplined. Prior to Covid, online courses were primarily for “non-traditional” students who were typically older, out in the workforce, and with responsibilities like caring for children or others, paying a mortgage, and so forth. These students, who are typically juniors/seniors or grad students, have been going to college for a while, they understand the expectations of a college class, and (at least the students who are most successful) have what I guess I’d describe as the “adulting” skills to succeed in the format. I didn’t have a lot of first and second year students in online classes before Covid, but a lot of the ones I did have during the pandemic really struggled with these things. Oh sure, I did have some unusually mature and “together” first year students who did just fine, but a lot of the students we have at EMU at this level started college underprepared for the expectations, and adding on the additional challenge of the online format was too much.

So it is not even a teeny-weeny surprise that a lot of teenagers/secondary students– many of whom were struggling to learn and succeed in traditional classrooms– did not succeed in hastily thrown together and poorly supported online courses, and do not even get me started on the idea of grade school kids being forced to sit through hours of Zoom calls. I mean honestly, I think these students probably would have done better if teachers had just sent home worksheets and workbooks and other materials to the kids and the parents to study on their own.

I think a different (and perhaps more accurate) way to study the effectiveness of remote learning would be to look at what some K-12 schools were doing before Covid. Lots and lots of kids and their parents use synch and asynch technology to supplement home schooling, and programs like the Michigan Online School have been around for a while now. Obviously, home schooling or online schooling is not right for everyone, but these programs are also not “failures.”

Leonhardt goes on to argue that schools serving poor and/or non-white students stayed remote longer than other schools. He claims there were two reasons for this:

Why? Many of these schools are in major cities, which tend to be run by Democratic officials, and Republicans were generally quicker to reopen schools. High-poverty schools are also more likely to have unionized teachers, and some unions lobbied for remote schooling.

Second, low-income students tended to fare even worse when schools went remote. They may not have had reliable internet access, a quiet room in which to work or a parent who could take time off from work to help solve problems.

First off, what Leonhardt seems to forget is that Covid was most serious in “the major cities” in this country, and also among populations that were non-white and poor. So of course school closings were more frequent in these areas.

Second, while it is quite easy to complain about the teacher unions, let us all remember it was not nearly as clear in Fall 2020 as Leonhardt implies that the risks of Covid in the schools were small. It did turn out that those settings weren’t as risky as we thought, but that “not as risky” analysis primarily applies to students. A lot of teachers got sick and a few died. I wrote about some of this back in February here. I get the idea that most people who were demanding their K-12 schools open immediately only had their kids in mind (though a lot of these parents were also the same ones adamant against mask and vaccine mandates), and if I had a kid still in school, I might feel the same way. But most people (and I’d put Leonhardt in this camp in this article) didn’t think for a second about the employees. At the end of the day, working in a public school setting is not like being in the ministry or some other job where we expect people to make huge personal sacrifices for others. Being a teacher is a white collar job. Teachers love to teach, sure, but we shouldn’t expect them to put their own health and lives at any level of risk– even if it’s small– just because a lot of parents haven’t sorted out their childcare situations.

Third, the idea that low-income students fared worse in remote classes (and I agree, they certainly did) is bad, but that has nothing to do with why they spent more time online in the first place. That just doesn’t make sense.

Leonhardt goes on:

In places where schools reopened that summer and fall, the spread of Covid was not noticeably worse than in places where schools remained closed. Schools also reopened in parts of Europe without seeming to spark outbreaks.

I wrote about this back in February: these schools didn’t reopen because they never closed! They tried the best they could and often failed, but as far as I can tell, no K-12 school in this country, public or private, just closed and told folks “we’ll reopen after Covid is over.” Second, most of the places where public schools (and universities as well) went back to at least some f2f instruction in Fall 2020 were in parts of the country where being outside and/or leaving classroom windows open is a lot easier than in Michigan, and/or most of these schools had the resources to do things like create smaller classes for social distancing, install ventilation equipment, and so forth.

Third– and I cannot believe Leonhardt doesn’t mention this because I know this is an issue he has written about in the past– the comparison to what went on with schools in Europe is completely bogus. In places like Germany and France, they put a much, much higher priority on opening schools– especially as compared to restaurants and bars and other places where Covid likes to spread. They kept those kinds of places closed longer, so the chances of a Covid outbreak in the schools were smaller. Plus Europeans are much, much smarter about things like mask and vaccine mandates too.

No, the pandemic was not good for learning, but it was not good for anything else, either. It wasn’t good for our work/life balances, our mental health, a lot of our household incomes, on and on and on. We have all suffered mightily for it, and I am certain that as educators of all stripes study and reflect on the last year and a half, we’ll all learn a lot about what worked and what didn’t. But after two years of trying their fucking best to do the right things, there is no reason to throw K-12 teachers under the bus now.

My CCCCs 2022

Here’s a follow-up (of sorts) on my CCCCs 2022 experiences– minus the complaining, critiques, and ideas on how it could have been better. Oh, I have some thoughts, but to be honest, I don’t think anyone is particularly interested in those thoughts. So I’ll keep that to myself and instead focus on the good things, more or less.

When the CCCCs went online for 2022 and I was put in the “on demand” sessions, my travel plans changed. Instead of going to Chicago on my own to enjoy conferencing, my wife and I decided to rent a house on a place called Seabrook Island in South Carolina near Charleston. We both wanted to get out of Michigan to someplace at least kind of warm, and the timing on the rental and other things was such that we were on the road for all the live sessions, so I missed out on all of that. But I did take advantage of looking at some of the other on demand sessions to see what was there.

Now, I have never been a particularly devout conference attendee. Even at the beginning of my career attending that first CCCCs in 1995 in Washington, DC, when everything was new to me, I was not the kind of person who got up at dawn for the WPA breakfast or even for the 9 am keynote address, the kind of conference goer who would then attend panels until the end of the day. More typical for me is to go to about two or three other panels (besides my own, of course), depending on what’s interesting and, especially at this point of my life, depending on where it is. I usually spend the rest of the time basically hanging out. Had I actually gone to Chicago, I probably would have spent at least half a day doing tourist stuff, for example.

The other thing that has always been true about the CCCCs is even though there are probably over 1000 presentations, the theme of the conference and the chair who puts it together definitely shape what folks end up presenting about. Sometimes that means there are fewer presentations that connect to my own interests in writing and technology– and as of late, that specifically has been about teaching online. That was the case this year. Don’t get me wrong, I think the theme(s) of identity, race, and gender invoked in the call are completely legitimate and important topics of concern, and I’m interested in them both as a scholar and just as a human being. But at the same time, that’s not the “work” I do, if that makes sense.

That said, there’s always a bit of something for everyone. Plus the one (and only, IMO) advantage of the on demand format is the materials are still accessible through the CCCCs conference portal. So while enjoying some so-so weather in a beach house, I spent some time poking around the online program.

First off, for most of the links below to work, you have to be registered for and signed into the CCCCs portal, which is here:

https://app.forj.ai/en?t=/tradeshow/index&page=lobby&id=1639160915376

If you never registered for the conference at all, you won’t be able to access the sessions, though the program of on-demand sessions is available to anyone here. As I understand it, the portal will remain open/accessible for the month of March (though I’m not positive about that). Second, the search feature for the portal is… let’s just say “limited.” There’s no connection between the portal and the conference on-demand program, so you have to look through the program and then do a separate search of the portal opened in a different browser tab. The search engine doesn’t work at all if you include any punctuation, and for the most part, it only returns results when you enter in a few words and not an entire title. My experience has been it seems to work best if you enter in the first three words of the session title. Again, I’m not going to complain….

So obviously, the first thing I found/went to was my own panel:

OD-301 Researching Communication in Practice

There’s not much there. One of the risks of proposing an individual paper for the CCCCs rather than as part of a panel or round table discussion is how you get grouped with other individual submissions. Sometimes this all ends up working out really well, and sometimes it doesn’t. This was in the category of “doesn’t.” Plus it looks to me like three of the other five people on the program for this session essentially bailed out and didn’t post anything.

Of course, my presentation materials are all available here as Google documents, slides, and a YouTube video.

To find other things I was interested in, I did a search for the key terms “distance” (as in distance education– zero results) and “online,” which had 54 results. A lot of those sessions– a surprising number to me, actually– involved online writing centers, both in terms of adapting to Covid but also in terms of shifting more work in writing centers to online spaces. Interesting, but not quite what I was looking for.

So these are the sessions I dug into a bit more and I’ll probably be going back to them in the next weeks as I keep working on my “online and the new normal” research:

OD-45 So that just happened…Where does OWI go from here?: Access, Enrollment, and Relevance

Really nice talk that sums up some of the history and talks in broad ways about some of the experiences of teaching online in Covid. Of course, I’m also always partial to presentations that agree with what I’m finding in my own research, and this talk definitely does that.

OD-211 Access and Community in Online Learning– specifically, Ashley Barry, University of New Hampshire, “Inequities in Digital Literacies and Innovations in Writing Pedagogies during COVID-19 Learning.”

Here’s a link to her video in the CCCCs site, and here’s a Google Slides link. At some point, I think I might have to send this PhD student at New Hampshire an email because it seems like Barry’s dissertation research is similar to what I am (kinda/sorta) trying to do with my own research about teaching online during Covid. She is working with a team of researchers from across the disciplines on what is likely a more robust albeit local study than mine, but again, with some similar kinds of conclusions.

OD-295 Prospects for Online Writing Instruction after the Pandemic Lockdown— specifically, Alexander Evans, Cincinnati State Technical and Community College, “Only Out of Necessity: The Future of Online Developmental First-Year Writing Courses in Post-Pandemic Society.”

Here’s a link to his video and his slides (which I think are accessible outside of the CCCCs portal). What I liked about Evans’ talk is it is coming from someone very new to teaching at the college level in general, new to community college work, and (I think) new to online teaching as well. A lot of this is about what I see as the wonkiness of what happens (as I think is not uncommon at a lot of community colleges for classes like developmental writing) where instructors more or less get handed a fully designed course and are told “teach this.” I would find that incredibly difficult, and part of Evans’ argument here is if his institution is really going to give people access to higher education, then they need to offer this class in an online format– and not just during the pandemic.

So that was pretty much my CCCCs experience for 2022. I’m not sure when (or if) I’ll be back.

CCCCs 2022 (part 1?)

Here is a link (bit.ly/krause4c22) to my “on demand” presentation materials for this year’s annual Conference for College Composition and Communication. It’s a “talk” called “When ‘You’ Cannot be ‘Here:’ What Shifting Teaching Online Teaches Us About Access, Diversity, Inclusion, and Opportunity.” As I wrote in the abstract/description of my session:

My presentation is about a research project I began during the 2020-21 school year titled “Online Teaching and the ‘New Normal.’” After discussing broadly some assumptions about online teaching, I discuss my survey of instructors teaching online during Covid, particularly the choice to teach synchronously versus asynchronously. I end by returning to the question of my subtitle.

I am saying this is “part 1?” because I might or might not write a recap post about the whole experience. On the one hand, I have a lot of thoughts about how this is going so far, how the online experience could have been better. On the other hand (and I’ve already learned this directly and indirectly on social media), the folks at NCTE generally seem pretty stressed out and overwhelmed and everything else, and it kind of feels like any kind of criticism, constructive or otherwise, will be taken as piling on. I don’t want to do that.

I’m also not sure there will be a part 2 because I’m not sure how much conferencing I’ll actually be able to do. When the conference went all online, my travel plans changed. Now I’m going to be on the road during most of the live or previously recorded sessions, so most of my engagement will have to be in the on demand space. Though hopefully, there will be some recordings of events available for a while, things like Anita Hill’s keynote speech.

The thing I’ll mention for now is my reasons for sharing my materials in the online/on demand format outside the walled garden of the conference website itself. I found out that I was assigned to present in the “on demand” format of the conference– if I do write a part 2 to this post, I’ll come back to that decision process then. In any event, the instructions the CCCCs provided asked presenters to upload materials– PDFs, PPT slides, videos, etc.– to the server space for the conference. I emailed “ccccevents” and asked if that was a requirement. This was their response:

We do suggest that you load materials directly into the platform through the Speaker Ready Room for content security purposes (once anyone has the link outside of the platform, they could share it with anyone). However, if you really don’t want to do that, you could upload a PDF or a PPT slide that directs attendees to the link with your materials.

The “Speaker Ready Room” is just what they call the portal page for uploading stuff. The phrase I puzzled over was “content security purposes” and the goal of preventing the possibility that anyone anywhere could share a link to my presentation materials. Maybe I’m missing something, but isn’t that kind of the point of scholarship? That we present materials (presentations, articles, keynote speeches, whatever) in the hopes that those ideas and thoughts and arguments are made available to (potential) readers who are anyone and anywhere?

I’ve been posting web-based versions of conference talks for a long time now– sometimes as blog posts, as videos, as Google Slides with notes, etc. I do it mainly because it’s easy for me to do, I believe in as much open access to scholarship as possible, and I’m trying to give some kind of life to this work beyond 15 minutes of me talking to (typically) less than a dozen people. I wouldn’t say any of my self-published conference materials have made much difference in the scholarly trajectory of the field, but I can tell from some of the tracking stats that these web-based versions of talks get many times more “hits” than the size of the audience at the conference itself. Of course, that does not really mean that the 60 or 100 or so people who clicked on a link to a slide deck are nearly as engaged an audience as the 10 people (plus other presenters) who were actually sitting in the room when I read my script, followed by a discussion. But it’s better than not making it available at all.

Anyway, we’ll see how this turns out.

“Synch Video is Bad,” perhaps a new research project?

As Facebook has been reminding me far too often lately, things were quite different last year. Last fall, Annette and I both had “faculty research fellowships,” which meant that neither of us was teaching because we were working on research projects. (It also meant we did A LOT of travel, but that’s a different post). I was working on a project officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” a project I always referred to as the “Classroom Tech Bans are Bullshit” project.

It was going along well, albeit slowly. I gave a conference presentation about it all in fall at the Great Lakes Writing and Rhetoric Conference  in September, and by early October, I was circulating a snowball sampling survey to students and instructors (via mailing lists, social media, etc.) about their attitudes about laptops and devices in classes. I blogged about it some in December, and while I wasn’t making as much progress as quickly as I would have preferred, I was getting together a presentation for the CCCCs and ready to ramp up the next steps of this: sorting through the results of the survey and contacting individuals for follow-up case study interviews.

Then Covid.

Then the mad dash to shove students and faculty into the emergency lifeboats of makeshift online classes, kicking students out of the dorms with little notice, and a long and troubling summer of trying to plan ahead for the fall without knowing exactly what universities were going to do about where/in what mode/how to hold classes. Millions of people got sick, hundreds of thousands died, the world economy descended into chaos. And Black Lives Matter protests, Trump descending further into madness, forest fires, etc., etc.

It all makes the debate about laptops and cell phones in classes seem kind of quaint and old-fashioned and irrelevant, doesn’t it? So now I’m mulling over starting a different but similar project about faculty (and perhaps student) attitudes about online courses– specifically about synchronous video-conference online classes (mostly Zoom or Google Meetings).

Just to back up a step: after teaching online since about 2005, after doing a lot of research on best practices for online teaching, after doing a lot of writing and research about MOOCs, I’ve learned at least two things about teaching online:

  • Asynchronous instruction works better than synchronous instruction because of the affordances (and limitations) of the medium.
  • Video– particularly videos of professors just lecturing into a webcam while students (supposedly) sit and pay attention– is not very effective.

Now, conventional wisdom often turns out to be wrong, and I’ll get to that. Nonetheless, for folks who have been teaching online for a while, I don’t think either of these statements is remotely controversial or in dispute.

And yet, judging from what I see on social media, a lot of my colleagues who are teaching online this fall for the first time are completely ignoring these best practices: they’re teaching synchronous classes during the originally scheduled time of the course and they are relying heavily on Zoom. In many cases (again, based on what I’ve seen on the internets), instructors have no choice: that is, the institution is requiring that what were originally scheduled f2f classes be taught with synch video regardless of what the instructor wants to do, what the class is, and if it makes any sense. But a lot of instructors are doing this to themselves (which, in a lot of ways, is even worse). In my department at EMU, all but a few classes are online this fall, and as far as I can tell, many (most?) of my colleagues have decided on their own to teach their classes with Zoom and synchronously.

It doesn’t make sense to me at all. It feels like a lot of people are trying to reinvent the wheel, which in some ways is not that surprising because that’s exactly what happened with MOOCs. When the big MOOC providers like Coursera and Udacity and EdX and many others got started, they didn’t reach out to universities that were already experienced with online teaching. Instead, they turned to themselves and peer institutions– Stanford, Harvard, UC-Berkeley, Michigan, Duke, Georgia Tech, and lots of other high profile flagships. In those early TED talks (like this one from Daphne Koller and this one from Peter Norvig), it really seems like these people sincerely believed that they were the first ones to ever actually think about teaching online, that they had stumbled across an undiscovered country. But I digress.

I think requiring students to meet online but synchronously for a class via Zoom is simply putting a square peg into a round hole. Imagine the logical opposite situation: say I was scheduled to teach an asynchronous online class that was suddenly changed into a traditional f2f class, something that meets Tuesdays and Thursdays from 10 am to 11:45 am. Instead of changing my approach to this now different mode/medium, I decided I was going to teach the class as an asynch online class anyway. I’d require everyone to physically show up to the class on Tuesdays and Thursdays at 10 am (I have no choice about that), but instead of taking advantage of the mode of teaching f2f, I did everything asynch and online. There’d be no conversation or acknowledgement that we were sitting in the same room. Students would only be allowed to interact with each other in the class LMS. No one would be allowed to actually talk to each other, though texting would be okay. Students would sit there for 75 minutes, silently doing their work but never allowed to speak with each other, and as the instructor, I would sit in the front of the room and do the same. We’d repeat this at all meetings the entire semester.

A ridiculous hypothetical, right? Well, because I’m pretty used to teaching online, that’s what an all-Zoom class looks like to me.

The other problem I have with Zoom is its part in policing and surveilling both students and teachers. Inside Higher Ed and the Chronicle of Higher Education both published inadvertently hilarious op-eds written to an audience of faculty about how they should maintain their own appearances and their “Zoom backgrounds” to project professionalism and respect. And consider this post on Twitter:


I can’t verify the accuracy of these rules, but it certainly sounds like it could be true. When online teaching came up in the first department meeting of the year (held on Zoom, of course), the main concern voiced by my colleagues who had never taught online before was dealing with students who misbehave in these online forums. I’ve seen similar kinds of discussions about how to surveil students from other folks on social media. And what could possibly motivate a teacher’s need to have bodily control over what their students do in their own homes to the point of requiring them to wear fucking shoes?

This kind of “soft surveillance” is bad enough, but as I understand it, one of the features Zoom sells to institutions is robust data on what users do with it: who is logged in, when, for how long, etc. I need to do a little more research on this, but as I was discussing on Facebook with my friend Bill Hart-Davidson (who is in a position to know more about this both as an administrator and someone who has done the scholarship), this is clearly data that can be used to effectively police both teachers’ and students’ behavior. The overlords might have the power to make us wear shoes at all times on Zoom after all.

On the other hand…

The conventional wisdom about teaching online asynchronously and without Zoom might be wrong, and that makes it potentially interesting to study. For example, the main reason online classes are almost always asynchronous is scheduling: that flexibility is what enables many students to take classes in the first place. But if you could have a class that was mostly asynchronous but with some previously scheduled synchronous meetings as part of the mix, well, that might be a good thing. I’ve tried to teach hybrid classes in the past that approach this, though I think Zoom might make this a lot easier in all kinds of ways.

And I’m not a complete Zoom hater. I started using it (or Google Meetings) last semester in my online classes for one-on-one conferences, and I think it worked well for that. I actually prefer our department meetings on Zoom because it cuts down on the number of faculty who just want to pontificate about something for no good reason (and I should note I am very much one of those faculty members, at least once in a while). I’ve read faculty justifying their use of Zoom based on what they think students want, and maybe that turns out to be true too.

So, what I’m imagining here is another snowball sample survey of faculty (maybe students as well) about their use of Zoom. I’d probably continue to focus on small writing classes because it’s my field and also because of different ideas about what teaching means in different disciplines. As was the case with the laptop bans are bullshit project, I think I’d want to continue to focus on attitudes about online teaching generally and Zoom in particular, mainly because I don’t have the resources or skills as a researcher to do something like an experimental design that compares the effectiveness of a Zoom lecture versus a f2f one versus an asynchronous discussion on a topic– though as I type that, I think that could be a pretty interesting experiment. Assuming I could get folks to respond, I’d also want to use the survey to recruit participants for one-on-one interviews, which I think would yield more revealing and relevant data, at least for the basic questions I have now:

  • Why did you decide to use a lot of Zoom and do things synchronously?
  • What would you do differently next time?

What do you think, is this an idea worth pursuing?

What We Learned in the “MOOC Moment” Matters Right Now

I tried to share a link to this post, which is on a web site I set up for my book More Than a Moment, but for some reason, Facebook is blocking that– though not this site. Odd. So to get this out there, I’m posting it here as well. –Steve

I received an email from Utah State University Press the other day inviting me to record a brief video to introduce More Than a Moment to the kinds of colleagues who would have otherwise seen the book on display in the press’ booth at the now cancelled CCCCs in Milwaukee. USUP is going to be hosting a “virtual booth” on their web site in an effort to get the word out about books they’ve published recently, including my own.

So that is where this is coming from. Along with recording a bit of video, I decided I’d also write about how I think what I wrote about MOOCs matters right now, when higher education is now suddenly shifting everything online.

I don’t want to oversell this here. MOOCs weren’t a result of an unprecedented global crisis, and MOOCs are not the same thing as online teaching. Plus what faculty are being asked to do right now is more akin to getting into a lifeboat than it is to actual online teaching, a point I write about in some detail here.

That said, I do think there are some lessons learned from the “MOOC Moment” that are applicable to this moment.
