Workin’ 9 to 5 (sort of) and Other Adventures of All FY Writing/All the Time!

As I blogged about earlier this year, I’m doing something this semester that I have never done as a tenure-track professor: I’m teaching a full load (three sections) of first year writing. I’ve had semesters where I’ve taught multiple sections of the same class, but I think the last time I did that was in the early 2000s, when I taught two sections of a 300-level course while also having a course release to do quasi-administrative work. As I explained earlier, my current schedule is a fluke based on the circumstances this semester, and I jumped at the chance to just teach first year writing. In other words, this was my idea: I wanted to have one prep for a change of pace, and I also like to teach first year writing.

(Incidentally, when I was hired at EMU in 1998, my primary teaching assignments were an earlier version of this 300-level course and a graduate course on teaching with computers. Times and curriculums have changed: I haven’t taught that 300-level class in eight years, and I haven’t taught that grad course in at least 15 years, maybe more.)

Having only one course to prepare– as opposed to three different classes– has been nice, and it’s especially nice that it’s first year composition, a course I have taught so regularly that at this point I have literally been teaching it in my dreams. I’ve been able to keep all three sections on the same schedule, so with a bit of tweaking and customization for each one, it still is one prep. And not surprisingly, one prep is easier than three.

The downsides? Well, all three of my sections are f2f (as is the case with all of the first year writing courses at EMU), and all three are on Tuesdays and Thursdays. Now, I haven’t taught three f2f classes since I started teaching online for part of my load, and that was almost 20 years ago. I also haven’t taught this early in a while (my first section is at 9:30 in the morning), and I haven’t taught back-to-back sections with no break between them in a long time, either. So on Tuesdays and Thursdays, I am in the office by 9 am and working pretty steadily until I’m done at 5 pm.

Because those days end up being nothing but teaching and preparing for teaching, I have also had to come into the office a lot more on other days during the week. I ran into an especially intense stretch in late January/early February, when I had conferences with all 60 (or so) of my students– along with a bunch of other “life” appointments and family stuff. For about two weeks, I was on campus (and mostly in my office) for almost all of each day.

I realize this isn’t a work schedule most people would complain about– and I’m not complaining, at least not exactly. It’s just a very different rhythm from teaching a mix of f2f and online. The upside of that mix is it gives me a lot more scheduling flexibility for when I do things. I do most of my online teaching at home in pajamas or sweats, and I can take a break once in a while to do laundry or something else that needs to be done around the house.

But if I’m not disciplined about scheduling when I do the work– planning, grading, and interacting with the class discussion boards– teaching asynchronously online can become an all day/all night thing where I’m constantly working in a not-so-efficient, multitasking kind of way. So while teaching f2f means I’m spending a lot more time on campus, it does create more separation between life and work. That’s a good thing.

And I do like teaching f2f– not really more than teaching online (I like doing that too), but I like it. I like the live performance of f2f teaching, and after having taught a zillion sections of first year writing, I have a refined schtick. I like putting on the show three times a day, right in a row.

I’ve also been struck by the differences in these three sections. It’s not news to me that different groups of students taking the same course can have very different personalities, dynamics, and responses to readings and assignments. But teaching the same thing to three different classes (back to back to back) makes this very visible. Without getting into any details, it’s pretty clear that these different sections are not equally capable.

It does get a little boring doing the same thing three times in a row. If I were scheduled to teach three sections of first year writing like this again, I would probably be okay with it, but I think I’d prefer two preps with an online class in the mix. Check back with me at the end of the semester to see if I still feel the same way.

Starting 2024 With All First Year Writing/All the Time!

This coming winter term (what every other university calls spring term), I’m going to be doing something I have never done in my career as a tenure-track professor. I’m going to be teaching first year composition and only first year composition.  It’ll be quite a change.

When I came to EMU in 1998, my office was right next to that of a very senior colleague, Bob Kraft. Bob, who retired from EMU in 2004 and who passed away in December 2022, had come back to the department to teach after having been in administrative positions for quite a while. We chatted often about teaching, EMU politics, and other regular faculty chit-chat. He was a good guy; he used to call me “Steve-O!”

Bob taught the same three courses every semester: three sections of a 300-level course called Professional Writing. It was a class he was involved in developing back in the early 1980s and I believe he assigned a course pack that had the complete course in it– and I mean everything: all the readings, in-class worksheets, the assignments, rubrics, you name it. Back in those days and before a university shift to “Writing Intensive” courses within majors, this was a class that was a “restricted elective” in lots of different majors, and we offered plenty of sections of it and similar classes. (In retrospect, the shift away from courses like this one to a “writing in the disciplines” approach/philosophy was perhaps a mistake both because of the way these classes have subsequently been taught in different disciplines and because it dramatically reduced the credit hour production in the English department– but all this is a different topic).

Anyway, Bob essentially did exactly the same thing three times a semester, every semester: the same discussions, the same assignments, and the same kinds of papers to grade. Nothing– or almost nothing– changed. I’m pretty sure the only prep Bob had to do was change the dates on the course schedule.

I thought, “Jesus, that’d be so boring! I’d go crazy with that schedule.” I mean, he obviously liked the arrangement, and I have every reason to believe it was a good class and all, but the idea of teaching the same class the same way every semester for years just gave me hives. Of course, I was at the opposite end of my career: rather than making the transition into retirement, I was an almost freshly-minted PhD who was more than eager to develop and teach new classes and do new things.

For my first 20 years at EMU (give or take), my workload was a mix of advanced undergraduate writing classes, a graduate course almost every semester, and various quasi-administrative duties. I have occasionally had semesters where I taught two sections of the same course, but most semesters I taught three different courses– or two different ones plus quasi-admin stuff. I rarely taught first year composition during the regular school year (though I taught it in the summer for extra money while our son Will was still at home) because I was needed to teach the advanced undergrad and MA-level writing classes. And this was all a good thing: I got to teach a lot of different courses, I got the chance to do things like help direct the first year writing program and coordinate our major and grad program, and I had the opportunity to work closely with a lot of MA students who have gone on to successful careers of their own.

But around six or seven years ago, the department (the entire university, actually) started to change, and I started to change as well. Our enrollments have fallen across the board, but especially for upper-level undergraduate and MA-level courses. That means that instead of a grad course every semester, I tend to teach one a school year, along with fewer advanced undergrad writing classes, and now I teach first year writing every semester. One of the things I’ve come to appreciate about this arrangement is that the students I work with in first year composition are different from the students I work with on their MA projects– but in the big picture of things, they’re really not that different.

And of course, as I move closer to thinking about retirement myself, Bob’s teaching arrangement seems like a better and better idea. So, scheduling circumstances being what they are, when it became clear I’d have a chance to just teach three sections of first year comp this coming winter, I took it.

We’ll see what happens. I’m looking forward to greatly reducing my prep time, both because this is the only course I’m teaching this semester (just three sections of it) and because first year writing is something I’ve taught and thought about A LOT. I’m also looking forward to experimenting with requiring students to use ChatGPT and other AI tools to at least brainstorm and copy-edit– maybe more. What I’m not looking forward to is kind of just repeating the same thing three times in a row each day I teach. Along the same lines, I am not looking forward to teaching three classes all on the same days (Tuesdays and Thursdays) and all face to face. I haven’t done that in a long time (possibly never) because I’ve either taught two courses while on reassigned time, or I have taught at least a third of my load online. And I’m also worried about keeping all three of these classes in sync: if one group falls behind for some reason, it’ll mess up my plans (this is perhaps inevitable).

What I’m not as worried about is all the essays I’ll have to read and grade. I’m well aware that the biggest part of the work for anyone teaching first year writing is all the reading, commenting on, and grading of student work, and I’ve figured out a lot over the years about how to do it. Of course, I might be kidding myself with this one….

So, What About AI Now? (A talk and an update)

A couple of weeks ago, I gave a talk/led a discussion called “So, What About AI Now?” (That’s a link to my slides.) The talk/discussion was for a faculty development program at Washtenaw Community College, a program organized by my friend, colleague, and former student, Hava Levitt-Phillips.

I covered some of the territory I’ve been writing about here for a while now, and I thought both the talk and the discussion went well. I think most of the people at this thing (it was over Zoom, so it was a little hard to read the room) had seen enough stories like this one on 60 Minutes the other night: artificial intelligence is going to be at least as transformative a technology as “the internet,” and there is not a zero percent chance that it could end civilization as we know it. All of which is to say we probably need to put the dangers of a few college kids using AI (badly) to cheat on poorly designed assignments into perspective.

I also talked about how we really need to question some of the more dubious claims in the MSM about the powers of AI, such as the article in the Chronicle of Higher Education this past summer, “GPT-4 Can Already Pass Freshman Year at Harvard.” I blogged about that nonsense a couple of months ago here, but the gist of what I wrote is that all of these claims of AI being able to pass all these tests and freshman year at Harvard (etc.) are wrong. Besides the fact that the way a lot of these tests are run makes the claims bogus (and that is definitely the case with this CHE piece), students in our classes still need to show up– and I mean that for both f2f and online courses.

And as we talked about at this session, if a teacher gives students some kind of assignment (an essay, an exam, whatever) that can be successfully completed without ever attending class, then that’s a bad assignment.

So the sense that I got from this group– folks teaching right now the kinds of classes where (according to a lot of the nonsense that’s been in the MSM for months) cheating with ChatGPT et al. was going to make it impossible to assign writing anymore, not in college and not in high school– is that it hasn’t been that big of a deal. Sure, a few folks talked about students who tried to cheat with AI and who were easily caught, but for the most part, it hadn’t been much of a problem. The faculty in this group seemed more interested in figuring out ways to make use of AI in their teaching than they were worried about cheating.

I’m not trying to suggest there’s no reason to worry about what AI means for the future of… well, everything, including education. All of us who are “knowledge workers”– that is, teachers, professors, lawyers, scientists, doctors, accountants, etc.– need to pay attention to AI because there’s no question this shit is going to change the way we do our jobs. But my sense from this group (and just the general vibe I get on campus and on social media) is that the freak-out about AI is over, which is good.

One last thing, though: just the other day (long after this talk), I saw what I believe to be my first case of a student trying to cheat with ChatGPT– sort of. I don’t want to go into too many details since this is a student in one of my classes right now. But basically, this student (who is struggling quite a bit) turned in a piece of writing that was first and foremost not the assignment I gave, and it also just happened that this person had used ChatGPT to generate a lot of the text. So as we met to talk about what the actual assignment was and how this student needed to do it again, etc., I also started asking about what they had turned in.

“Did you actually write this?” I asked. “This kind of seems like ChatGPT or something.”

“Well, I did use it for some of it, yes.”

“But you didn’t actually read this book ChatGPT is citing here, did you?”

“Well, no…”

And so forth.  Once again, a good reminder that students who resort to cheating with things like AI are far from criminal masterminds.

A Belated “Beginning of the School Year” Post: Just Teaching

I don’t always write a “beginning of the school year” post, and when I do, it’s usually before school starts, sometime in August– not at the end of the second week of classes. But here we are, at the point in the year that always feels to me a lot more like the start of a new year than January does.

This is the start of my 25th year at EMU. This summer, I selected another one of those goofy “thanks for your service” gifts they give out in five-year increments. Five years ago, I picked out a pretty nice casserole dish; this time, I picked out a globe, one which lights up.

The last time I wrote a new school year post like this was in 2021, and back then, I (briefly) contemplated the faculty buyout offer. “Briefly” because, as appealing as it was at the time to leave my job behind, there’s just no way I could afford it, and I’m not interested in starting some kind of different career. But here in 2023, I’m feeling good about getting back to work. Maybe it’s because I had a busy summer with lots of travel, some house guests, and a touch of Covid; after all of that, it’s just nice to have a change of pace and get back to a job. Or maybe it’s because (despite my recent case) we really are “past” Covid in the sense that EMU (like everywhere else) is no longer going through measures like social distancing, check-ins noting you’re negative, vax cards, free testing, etc. This is not to say Covid is “over,” of course, because it’s still important for people to get vaxxed and to test. And while the people I see who are continuing to wear masks everywhere clearly think lowering our defenses to Covid is foolish, and while it is true that cases right now are ticking up, the reality is Covid has become something more or less like the flu: it can potentially kill you, sure, but it is also one of those things we have to live with.

Normally in these kinds of new school year posts, I mention various plans and resolutions for the upcoming year. I have a few personal and not unusual ones– lose weight, exercise more, read more, and so on– but I don’t have any goals that relate to work. I’m not involved in any demanding committees or other service things, and I’d kind of like to keep it that way. I’m also not in the midst of any scholarly projects, and I can’t remember the last time that was the case. And interestingly (at least for me), I don’t know if I’ll be doing another scholarly project at this point. Oh, I will go to conferences that are in places I want to visit, and I’ll keep blogging about AI and other academic-like things I find interesting. That’s a sort of scholarship, I suppose. I’d like to write more commentaries for outlets like IHE or CHE, maybe also something more MSM. But writing or editing another book or article? Meh.

(Note that this could all change on a dime.)

So that leaves teaching as my only focus as far as “the work” goes. I suppose that isn’t that unusual since even when I’ve got a lot going on in terms of scholarly projects and service obligations, teaching is still the bulk of my job. I’ll have plenty to do this semester because I’ve got three different classes (with three different preps), and one of them is a new class I’m sort of/kind of making up as I go.

Still, it feels a little different. I’ve always said that if being a professor just involved teaching my classes– that is, no real service or scholarly obligations– then that wouldn’t be too hard of a job. I guess I’ll get to test that this term.

No, an AI could not pass “freshman year” in college

I am fond of the phrase/quote/mantra/cliché “Ninety percent of success in life is just showing up,” which is usually attributed to Woody Allen. I don’t know if Woody was “the first” person to make this observation (probably not, and I’d prefer if it was someone else), but in my experience, this is very true.

This is why AIs can’t actually pass a college course or their freshman year or law school or whatever: they can’t show up. And it’s going to stay that way, at least until we’re dealing with advanced AI robots.

This is on my mind because my friend and colleague in the field, Seth Kahn, posted the other day on Facebook about this recent article from The Chronicle of Higher Education by Maya Bodnick, “GPT-4 Can Already Pass Freshman Year at Harvard.” (Bodnick is an undergraduate student at Harvard.) It is yet another piece claiming that AI is smart enough to do just fine on its own at one of the most prestigious universities in the world.

I agreed with all the other comments I saw on Seth’s post. In my comment (which I wrote before I actually read this CHE article), I repeated three points I’ve written about here or on social media before. First, ChatGPT and similar AIs can’t evaluate and cite academic research at even the modest levels I expect in a first year writing class. Second, while OpenAI proudly lists all the “simulated exams” where ChatGPT has excelled (LSAT, SAT, GRE, AP Art History, etc.), you have to click the “show more exams” button on that page to see that none of the versions of their AI has managed better than a “2” on the AP English Language and Composition exam (or the Literature and Composition exam). It takes a “3” on these exams to get any credit at EMU, and probably a “4” at a lot of other universities.

Third, I think mainstream media and all the rest of us really need to question these claims of AIs passing whatever tests and classes and whatnot much, MUCH more carefully than most of us have to date. What I was thinking about when I made that last comment was another article published in CHE in early July, “A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ,” by Tom Bartlett. In this article, Bartlett discusses a study (which I don’t completely understand because it involves too much math and detail) conducted by three MIT students (class of 2024) who researched the claim that an AI could “ace” MIT classes. The students determined this was bullshit. What were their findings (at least the ones I could understand)? In some of the classes where the AI supposedly had a perfect score, the exams included unsolvable problems, so a perfect score wasn’t even possible. In other cases, the exam questions the AI supposedly answered correctly did not provide enough information for that to be possible, either. The students posted their results online, and at least some of the MIT professors who originally made the claims agreed and backtracked.

But then I read this Bodnick article, and holy-moly, this is even more bullshitty than I originally thought. Let me quote at length Bodnick describing her “methodology”:

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received:

  • Microeconomics and Macroeconomics (Jason Furman and David Laibson): Explain an economic concept creatively. (300-500 words for Micro and 800-1000 for Macro). Grade: A-
  • Latin American Politics (Steven Levitsky): What has caused the many presidential crises in Latin America in recent decades? (5-7 pages) Grade: B-
  • The American Presidency (Roger Porter): Pick a modern president and identify his three greatest successes and three greatest failures. (6-8 pages) Grade: A
  • Conflict Resolution (Daniel Shapiro): Describe a conflict in your life and give recommendations for how to negotiate it. (7-9 pages). Grade: A
  • Intermediate Spanish (Adriana Gutiérrez): Write a letter to activist Rigoberta Menchú. (550-600 words) Grade: B
  • Freshman Seminar on Proust (Virginie Greene): Close read a passage from In Search of Lost Time. (3-4 pages) Grade: Pass

I told these instructors that each essay might have been written by me or the AI in order to minimize response bias, although in fact they were all written by GPT-4, the recently updated version of the chatbot from OpenAI.

In order to generate these essays, I inputted the prompts (which were much more detailed than the summaries above) word for word into GPT-4. I submitted exactly the text GPT-4 produced, except that I asked the AI to expand on a couple of its ideas and sequenced its responses in order to meet the word count (GPT-4 only writes about 750 words at a time). Finally, I told the professors and TAs to grade these essays normally, except to ignore citations, which I didn’t include.

Not only can GPT-4 pass a typical social science and humanities-focused freshman year at Harvard, but it can get pretty good grades. As shown in the list above, GPT-4 got all A’s and B’s and one Pass.

JFC. Okay, let’s just think about this for a second:

  • We’re talking about three “essays” that are less than 1000 words and another three that are slightly longer, and based on this work alone, GPT-4 “passed” a year of college at Harvard. That’s all it takes? Really? Really?! That’s it?
  • I would like to know more about what Bodnick means when she says that the writing prompts were “much more detailed than the summaries above,” because those details matter a lot. But as summarized, these are terrible assignments. They aren’t connected to the context of the class or anything else. It would be easy to try to answer any of these questions with a minimal amount of Google searching and some educated guesses. I might be going out on a limb here, but I don’t think most writing assignments at Harvard or any other college– even badly assigned ones– are as simplistic as these.
  • It wasn’t just ChatGPT: she had to do some significant editing to put together ChatGPT’s short responses into longer essays. I don’t think the AI could have done that on its own. Unless it hired a tutor.
  • Asking instructors to not pay any attention to the lack of citation (and, I am going to guess, the need for sources to back up claims in the writing) is giving the AI way WAAAAYYY too much credit, especially since ChatGPT (and other AIs) usually make shit up– er, “hallucinate”– when citing evidence. I’m going to guess that even at Harvard, handing in hallucinations would result in a failing grade. And if the assignment required properly cited sources and the student didn’t do that, then that student would also probably fail.
  • It’s interesting (and Bodnick points this out too) that the texts that received the lowest grades are ones that ask students to “analyze” or to provide their opinions/thoughts, as opposed to assignments that were asking for an “information dump.” Again, I’m going to guess that, even at Harvard, there is a higher value placed on students demonstrating with their writing that they thought about something.

I could go on, but you get the idea. This article is nonsense. It proves literally nothing.

But I also want to return to where I started, the idea that a lot of what it means to succeed in anything (perhaps especially education) is showing up and doing the work. Because after what seems like the zillionth click-bait headline about how ChatGPT could graduate from college or be a lawyer or whatever because it passed a test (supposedly), it finally dawned on me what has been bothering me the most about these kinds of articles: that’s just not how it works! To be a college graduate or a lawyer or damn near anything else takes more than passing a test; it takes the work of showing up.

Granted, there has been a lot more interest in and willingness to consider “life experience” credit as part of degrees in the last few decades, and some of the places offering it are kind of legitimate institutions– Southern New Hampshire and the University of Phoenix immediately come to mind. But “life experience” credit is still considered mostly bullshit, the approach taken by a whole lot of diploma mills. And real online universities (like SNHU and Phoenix) still require students to mostly take actual courses, which requires doing more than writing a couple of papers and/or taking a couple of tests.

And sure, it is possible to become a lawyer in California, Vermont, Virginia and Washington without a law degree, and it is also possible to become a lawyer in New York or Maine with just a couple years of law school or an internship. But even these states still require some kind of experience with a law office, most states do require attorneys to have law degrees, and it’s not exactly easy to pass the bar without the experience you get from earning a law degree. Ask Kim Kardashian. 

Bodnick did not ask any of the faculty who evaluated her AI writing examples if it would be possible for a student to pass that professor’s class based solely on this writing sample because she already knew the answer: of course not.

Part of the grade in the courses I teach is based on attendance, participation in the class discussions and peer review, short responses to readings, and so forth. I think this is pretty standard– at least in the humanities. So if some eager ChatGPT enthusiast came to one of my classes– especially one like first year writing, where I post all of the assignments at the beginning of the semester (mainly because I’ve taught this course at least 100 times at this point)– and said to me “Hey Krause, I finished and handed in all the assignments! Does that mean I get an A and go home now?” Um, NO! THAT IS NOT HOW IT WORKS! And of course anyone familiar with how school works knows this.

Oh, and before anyone says “yeah, but what about in an online class?” Same thing! Most of the folks I know who teach online have a structure where students have to regularly participate and interact with assignments, discussions, and so forth. My attendance and participation policies for online courses are only slightly different from my f2f courses.

So please, CHE and the MSM in general: stop. Just stop. ChatGPT can (sort of) pass a lot of tests and classes (with A LOT of prompting from the researchers who really, really want ChatGPT to pass), but until that AI robot walks/rolls into a class or sets up its profile on Canvas all on its own, it can’t go to college.

What Counts as Cheating? And What Does AI Smell Like?

Cheating is at the heart of the fear too many academics have about ChatGPT, and I’ve seen a lot of hand-wringing articles from the MSM posted on Facebook and Twitter. One of the more provocative screeds on this I’ve seen lately was in the Chronicle of Higher Education, “ChatGPT is a Plagiarism Machine” by Joseph M. Keegin. In a nutshell, I think this guy is unhinged, but he’s also not alone.

Keegin claims he and his fellow graduate student instructors (he’s a PhD candidate in Philosophy at Tulane) are encountering loads of student work that “smelled strongly of AI generation,” and he and some of his peers have resorted to giving in-class handwritten tests and oral exams to stop the AI cheating. “But even then,” Keegin writes, “much of the work produced in class had a vague, airy, Wikipedia-lite quality that raised suspicions that students were memorizing and regurgitating the inaccurate answers generated by ChatGPT.”

(I cannot help but to recall one of the great lines from [the now problematically icky] Woody Allen in Annie Hall: “I was thrown out of college for cheating on a metaphysics exam; I looked into the soul of the boy sitting next to me.” But I digress.)

If Keegin is exaggerating in order to rattle readers and get some attention, then mission accomplished. But if he’s being sincere– that is, if he really believes his students are cheating everywhere on everything all the time, and that the way they’re cheating is by memorizing and then rewriting ChatGPT responses to his in-class writing prompts– then these are the sort of delusions that should be discussed with a well-trained and experienced therapist. I’m not even kidding about that.

Now, I’m not saying that cheating is nothing to worry about at all, and if a student were to turn in whatever ChatGPT provided for a class assignment with no alterations, then a) yes, I think that’s cheating, but b) that’s the kind of cheating that’s easy to catch, and c) Google is a much more useful cheating tool for this kind of thing. Keegin is clearly wrong about ChatGPT being a “Plagiarism Machine” and I’ve written many many many different times about why I am certain of this. But what I am interested in here is what Keegin thinks does and doesn’t count as cheating.

The main argument he’s trying to make in this article is that administrators need to step in to stop this never-ending battle against ChatGPT plagiarism. Universities should “devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.”

Keegin doesn’t define what he means by cheating (though he does give some examples that don’t actually seem like cheating to me), but I think we can figure it out by reading what he means by a “meaningful education.” He writes (I’ve added the emphasis) “A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment.”

So, I think Keegin sees education as an activity where students labor alone at mastering the material delivered by the instructor. Knowledge is not something shared or communal, and it certainly isn’t created through interactions with others. Rather, students receive knowledge, do the work the instructor asks of them (and do that work alone), and then reproduce the knowledge investment provided by the instructor– with interest. So any work a student might do that involves anyone or anything else– other students, a tutor, a friend, a Google search, and yes, ChatGPT– is an opportunity for cheating.

More or less, this is what Paulo Freire meant by the ineffective and unjust “banking model of education,” which he wrote about over 50 years ago in Pedagogy of the Oppressed. Freire’s work remains very important in many fields specifically interested in pedagogy (including writing studies), and Pedagogy of the Oppressed is one of the most cited books in the social sciences. And yet, I think a lot of people in higher education– especially in STEM fields, business-oriented and other technical majors, and also in disciplines in the humanities that have not been particularly invested in pedagogy (philosophy, for example)– are okay with this system. These folks think education really is a lot like banking and “investing,” and they don’t see any problem with that metaphor. And if that’s your view of education, then getting help from anyone or anything other than the teacher is metaphorically like robbing a bank.

But I think it’s odd that Keegin is also upset with “credentialing” in higher education. That’s a common enough complaint, I suppose, especially when we talk about the problems with grading. But if we were to do away with degrees and grades as an indication of successful learning (or at least completion) and if we instead decided students should learn solely for the intrinsic value of learning, then why would it even matter if students cheated or not? That’d be completely their problem. (And btw, if universities did not offer credentials that have financial, social, and cultural value in the larger society, then universities would cease to exist– but that’s a different post).

Perhaps Keegin might say, “I don’t have a problem with students seeking help from other people in the writing center or whatever. I have a problem with students seeking help from an AI.” I think that’s probably true of a lot of faculty. Even when professors have qualms about students getting a little too much help from a tutor, they still generally see the value of support services and usually encourage students to take advantage of them, especially students at the gen-ed level.

But again, why is that different? If a student asks another human for help brainstorming a topic for an assignment, suggesting some ideas for research, creating an outline, suggesting some phrases to use, and/or helping out with proofreading, citation, and formatting, how is that not cheating when this help comes from a human but it is cheating when it comes from ChatGPT? And suppose a student instead turns to the internet and consults things like CliffsNotes, Wikipedia, Course Hero, other summaries and study guides, etc. etc.; is that cheating?

I could go on, but you get the idea. Again, I’m not saying that cheating in general and with ChatGPT in particular is nothing at all to worry about. And also to be fair to Keegin, he even admits “Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right.” But the more of these paranoid and shrill commentaries I read about “THE END” of writing assignments and how we have got to come up with harsh punishments for students so they stop using AI, the more I think these folks are just scared that they’re not going to be able to give students the same bullshitty non-teaching writing assignments that they’ve been doing for years.

Okay, Now Some Students Should Fail (or, resuming “normal” expectations post-pandemic)

In April 2020, I wrote a post with the headline “No One Should Fail a Class Because of a Fucking Pandemic.” This, of course, was in the completely bonkers early days of the pandemic when everyone everywhere suddenly sheltered in place, when classes suddenly went online, and when the disease was disrupting all of our lives– not to mention the fact that millions of people were getting very sick, and a lot of them were dying. Covid hit many of my students especially hard, which in hindsight is not that surprising since a lot of the students at EMU (and a lot of the students I was teaching back then) come from working poor backgrounds, or they are themselves adult (aka “non-traditional”) students with jobs, sig-Os, houses, kids, etc.

As I wrote back then, before Covid, when it came to things like attendance and deadlines, I was kind of a hard-ass. I took attendance every day for f2f classes, and I also had an attendance policy of sorts for online classes. There was no such thing as an excused absence; I allowed students to miss up to the equivalent of two weeks of classes with no questions asked, but there were no exceptions for things like funerals or illness. Unless a student worked out something with me before an assignment was due, late work meant an automatic grade deduction. I’d been doing it this way since I started as a graduate assistant because it was the advice I was given by the first WPA/professor who supervised and taught me (and my fellow GAs) how to teach. I continued to run a tight ship like this for two reasons. First, I need students to do their job and turn stuff in on time so I can do my job of teaching by responding to their writing. Second, my experience has been that if instructors don’t give clear and unwavering rules about attendance and deadlines, then a certain number of students will chronically not attend and miss deadlines. That just sets those students up to fail, and it also creates more work for me.

Pretty much all of this went out the window in Winter 2020 when Covid was raging. EMU allowed students to convert classes they were enrolled in from a normal grading scheme to a “pass/fail” grade, which meant that a lot of my students who would have otherwise failed (or passed with bad grades) ended up passing– both because of that option and because I gave people HUGE breaks. My “lighten up” approach continued through the 2020-21 and 2021-22 school years, though because all of my teaching was online and asynchronous, the definition of “attend” was a bit fuzzier. I kept doing this because Covid continued to be a problem– not as big of a problem as it was in April 2020, but lots of people were still getting infected, and people were still dying, especially people who were stupid enough to not get the vaccine.

By the end of the 2021-22 school year, things were returning to normal. Oh sure, there was still plenty of nervousness about the virus around campus and such, but the end of the pandemic was near. The most serious dangers of the disease had passed because of a weaker version of the virus, vaccinations, and herd immunity. So I was ready for a return to “normal” for the 2022-23 school year.

But my students weren’t quite ready– or maybe a better way of putting it is Covid’s side-effects continued.

In fall 2022, I taught a f2f section of first year writing, my first f2f section since before the pandemic. Most of the students had been in all (or mostly) online classes since March 2020, meaning that for most of them, this was their first semester back f2f, too. Things got off to a rough start, with many students missing simple deadlines, blowing off class, and/or otherwise checking out in the first couple of weeks. I felt a bit the same way– not so much blowing stuff off, but after not teaching in real time in front of real people for a couple of years, I was rusty. It felt a bit like getting back on a bicycle after not riding at all for a year or two: I could still do it, but things started out rocky.

So I tried to be understanding and cut students some slack, but I also wanted to get them back on track. It still wasn’t going great; students were still not quite “present.” I remember at one point, maybe a month into the semester, a student asked quite earnestly, “Why are you taking attendance?” It took a bit for me to register the question, but of course! If you’d been in nothing but online classes for the last two years, you wouldn’t have had a teacher who took attendance, because they’d just see the names on Zoom!

There came a point just before the middle of the term when all kinds of students were crashing and burning, and I put aside my plans for the day and just asked “what’s going on?” A lot of students suddenly became very interested in looking at their shoes. “You’re not giving us enough time in class to do the assignments.” That’s what homework is for, I said. “This is just too much work!” No, I said, it’s college. I’ve been doing this for a long time, and it’s not too much, I assure you.

Then I said, “Let me ask you this– and no one really needs to answer this question if you don’t want to. How many of you have spent most of the last two years getting up, logging into your Zoom classes, turning off the camera, and then going on to do whatever else you wanted?” Much nodding and some guilty-looking smiles. “Oh, I usually just went back to bed,” one student said too cheerfully.

Now, look: Covid was hard on everyone for all kinds of different reasons. I get it. A lot of sickness and death, a lot of trauma, a lot of remaining PTSD and depression. Everyone struggled. But mostly blowing off school for two years? On the one hand, that’s on the students themselves because they had to know that it would turn out badly. On the other hand, how does a high school or college teacher allow that to happen? How does a teacher– even a totally burnt-out and overworked one– just not notice that a huge percentage of their students are not there at all?

The other major Covid side-effect I saw last school year was a steep uptick in device distraction. Prior to Covid, my rule for cell phones was to keep them silenced and not let them become a distraction, and laptop use was okay for class activities like taking notes, peer review, or research. Students still peeked at text messages or Facebook or whatever, but because they had been socialized in previous f2f high school and college classes, students also knew that not paying attention to your peers or the teacher in class because you are just staring at your phone is quite rude. Not to mention the fact that you can’t learn anything if you’re not paying attention at all.

But during Covid, while these students were sort of sitting through (or sleeping through) Zoom classes with their cameras turned off, they also lost all sense of the norms of how to behave with your devices in a setting like a classroom or a workplace. After all, if you can “attend” a class by yourself in the privacy of your own home without ever being seen by other students or the instructor, and without ever having to say anything, what’s the problem with sitting in class and dorking around with your phone?

I noticed this a lot during the winter 2023 semester, maybe because of what I assigned. For the first time in over 30 years of teaching first year writing, I assigned an actual “book” for the class (not a textbook, not a coursepack, but a widely available and best-selling trade book): Johann Hari’s Stolen Focus: Why You Can’t Pay Attention– and How to Think Deeply Again. This book is about “attention” in many different ways, and it discusses many different causes for why (according to Hari) we can’t pay attention: pollution, ADHD misdiagnoses, helicopter parenting, stress and exhaustion, etc. But he spends most of his time discussing what I think are the most obvious drains on our attention: cell phones and social media. So there I was, trying to lead a class discussion about a chapter from this book describing in persuasive detail why and how cell phone addiction is ruining all of us, while most of the students were staring into their cell phones.

One day in that class (and only once!), I tried an activity I would have never done prior to Covid. After I arrived and set up my things, I asked everyone to put all their devices– phones, tablets, laptops– on a couple of tables at the front of the classroom. Their devices would remain in sight but out of reach. There was a moment where the sense of panic was heavy in the air and more than a few students gave me a “you cannot be serious” look. But I was, and they played along, and we proceeded to have what I think was one of the best discussions in the class so far.

And then everyone went back to their devices for the rest of the semester.

So things this coming fall are going to be different. For both the f2f and online classes I’m scheduled to teach, I’ll probably begin with a little preamble along the lines of this post: this is where we were, let us acknowledge the difficulty of the Covid years, and, for at least while we are together in school (both f2f and online), let us now put those times behind us and return to some sense of normalcy.

In the winter term, for my f2f classes, I tried a new approach to attendance that I will be using again next year. The policy was the same as before– students who miss more than two weeks of class risk failing– but I phrased it a bit differently. I told students they shouldn’t miss any class, but because unexpected things come up, they had four excused absences. I encouraged them to think of this as insurance in case something goes wrong and not as justification for blowing off class. Plus, I gave students who didn’t miss any classes a small bonus for “perfect attendance.” I suppose it was a bit like offering “extra credit,” in that the only students who ever do these assignments are the same students who don’t need extra credit, but a few students earned about a half-letter boost to their final grade. And yes, I also had a few students who failed because they missed too much class.

As for devices: the f2f class I’m teaching in the fall is first year writing, and I am once again going to have students read (and do research about) Hari’s Stolen Focus. I am thinking about starting the term by collecting everyone’s devices, at least for the first few meetings and discussions of the book. Considering that Hari begins by recalling his own experience of “unplugging” from his cell phone and social media for a few months, going 70 or so minutes without being able to touch the phone might help some students understand Hari’s experiences a bit better.

I’m not doing this– returning to my hard-ass ways– just because I want things to be like they were in the before-times, or out of some sense of addressing a problem with “the kids” today. I feel like lots of grown-ups (including myself) need to rethink their relationships with the devices and media platforms that fuel surveillance capitalism. At the same time, I think learning in college– especially in first year writing, but this is true for my juniors and seniors as well– should also include lessons in “adulting,” in preparing for the world beyond the classroom. And in my experience, the first two things anyone has got to do to succeed at anything are to show up and to pay attention.

What Would an AI Grading App Look Like?

While a whole lot of people (academics and non-academics alike) have been losing their minds lately about the potential of students using ChatGPT to cheat on their writing assignments, I haven’t read/heard/seen much about the potential of teachers using AI software to read, grade, and comment on student writing. Maybe it’s out there in the firehose stream of stories about AI I see every day (I’m trying to keep up a list on pinboard) and I’ve just missed it.

I’ve searched and found some discussion of using ChatGPT to grade on Reddit (here and here), and I’ve seen other posts about how teachers might use the software to do things other than grading, but that’s about it. In fact, the reason I’m thinking about this again now is not because of another AI story but because I watched a South Park episode about AI called “Deep Learning.” South Park has been a pretty uneven show for several years, but if you are a fan and/or if you’re interested in AI, this is a must-see. A lot happens in this episode, but my favorite reaction to ChatGPT comes from the kids’ infamous teacher, Mr. Garrison. While Garrison is complaining about grading a stack of long and complicated essays (which the students completed with ChatGPT), Rick (his boyfriend) tells him about ChatGPT, and Mr. Garrison has far too honest of a reaction: “This is gonna be amazing! I can use it to grade all my papers and no one will ever know! I’ll just type the title of the essay in, it’ll generate a comment, and I don’t even have to read the stupid thing!”

Of course, even Mr. Garrison knows that would be “wrong” and he must keep this a secret. That probably explains why I still haven’t come across much about an AI grading app. But really though: shouldn’t we be having this discussion? Doesn’t Mr. Garrison have a point?

Teacher concerns about grading/scoring writing with computers are not new, and one of the nice things about having kept a blog so long is I can search and “recall” some of these past discussions. Back in 2005, I had a post about NCTE coming out against the SAT writing test and machine scoring of those tests. There was also a link in that post to an article about a sociologist at the University of Missouri named Edward Brent who had developed a way of giving students feedback on their writing assignments. I couldn’t find the original article, but this one from the BBC in 2005 covers the same story. It seems like it was a tool developed very specifically for the content of Brent’s courses and I’m guessing it was quite crude by today’s standards. I do think Brent makes a good point on the value of these kinds of tools: “It makes our job more interesting because we don’t have to deal so much with the facts and concentrate more on thinking.”

About a decade ago, I also had a couple of other posts about machine grading, both of which grew out of discussions on the now mostly defunct WPA-L. There was this one from 2012, which included a link to a New York Times article about Educational Testing Service’s product “e-rater”: “Facing a Robo-Grader? Just Keep Obfuscating Mellifluously.” The article features Les Perelman, then the director of writing at MIT, demonstrating ways to fool e-rater with nonsense and inaccuracies. At the time, I thought Perelman was correct, but also that a good argument could be made that if a student was smart enough to fool e-rater, maybe they deserved the higher score.

Then in 2013, there was another kerfuffle on WPA-L about machine grading that involved a petition drive at the website humanreaders.org against the practice. In my post back then, I agreed with the main goal of the petition: that “Machine grading software can’t recognize things like a sense of humor or irony, it tends to favor text length over conciseness, it is fairly easy to circumvent with gibberish kinds of writing, it doesn’t work in real world settings, it fuels high stakes testing, etc., etc., etc.” But I also had some questions about all that. I made a comparison between these new tools and the initial resistance to spell checkers, and then I also wrote this:

As a teacher, my least favorite part of teaching is grading. I do not think that I am alone in that sentiment. So while I would not want to outsource my grading to someone else or to a machine (because again, I teach writing, I don’t just assign writing), I would not be against a machine that helps make grading easier. So what if a computer program provided feedback on a chunk of student writing automatically, and then I as the teacher followed behind those machine comments, deleting ones I thought were wrong or unnecessary, expanding on others I thought were useful? What if a machine printed out a report that a student writer and I could discuss in a conference? And from a WPA point of view, what if this machine helped me provide professional development support to GAs and part-timers in their commenting on students’ work?

By the way, an ironic/odd tangent about that post: the domain name humanreaders.org has clearly changed hands. In 2013, it looked like this (the link is from the Internet Archive): basically, a petition form. The domain humanreaders.org now redirects to this page on some content farm website called we-heart.com; that page, from 2022, is a list of the “six top online college paper writing websites today.”

Anyway, let me state the obvious: I’m not suggesting an AI application that replaces all teacher feedback (as Mr. Garrison imagines)– not at all. Besides the fact that it wouldn’t be “right” no matter how you twist the ethics of it, I don’t think it would work well– yet. Grading/commenting on student writing is my least favorite part of the job, so I understand where Mr. Garrison is coming from. Unfortunately, though, reading/grading/commenting on student writing is essential to teaching writing. I don’t know how I can evaluate a student’s writing without reading it, and I also don’t know how to help students think about how to revise their writing (and, hopefully, learn how to apply these lessons and advice to the writing they do beyond my class) without making comments.

However, this is A LOT of work that takes A LOT of time. I’ve certainly learned some things that make grading a bit easier than it was when I started. For example, I’ve learned that less is more: marking up every little mistake in a paper and then writing a really long end comment is a waste of time, because it confuses and frustrates students and it literally takes longer. But it still takes me about 15-20 minutes to read and comment on each long-ish student essay (typically a bit shorter than this blog post). So in a full (25 student) writing class, it takes me 8-10 hours to completely read, comment on, and grade all of the essays; multiply that by two or three or more (since I’m teaching three writing classes a term), and it adds up pretty quickly. Plus, we’re talking about student writing here. I don’t mind reading it, and students often have interesting and inspiring observations, but by definition, these are writers who are still learning and who often have a lot to learn. So this isn’t like reading The New Yorker or a long novel or something you can get “lost” in as a reader. This ain’t reading for fun– and it’s also one of the reasons why, after reading a bunch of student papers in a day, I’m much more likely to just watch TV at night.

So hypothetically, if there was a tool out there that could help me make this process faster, easier, and less unpleasant, and if this tool also helped students learn more about writing, why wouldn’t I want to use it?

I’ve experimented a bit with ChatGPT, using prompts along the lines of “offer advice on how to revise and improve the following text” and then pasting in a student essay. The results are a mix of (IMO) good, bad, and wrong, mostly written in the robotic voice typical of AI writing, and I think students would have a hard time sorting through these mixed messages. Plus, I don’t think there’s a way (yet) for ChatGPT to comment on specific passages in a piece of student writing: that is, it can provide an overall end comment, but it cannot comment on individual sentences and paragraphs and have those comments appear in the margins like the comment feature in Word or Google Docs. Like most writing teachers, I do a lot of my commenting in those margins, so an AI that can’t do that (yet) at all just isn’t that useful to me.
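For what it’s worth, this kind of experiment doesn’t require the chat window. Here’s a minimal sketch of the same prompt as a Python script, assuming the OpenAI Python client (the openai package), an API key in the OPENAI_API_KEY environment variable, and prompt wording that is just my own guess at what to ask– none of this is a recommendation, just an illustration:

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    def feedback_on_draft(draft_text: str) -> str:
        """Ask the model for an overall end comment on a student draft."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # a stand-in; any of the chat models would work
            messages=[
                {"role": "system",
                 "content": ("You are a writing teacher. Offer advice on how "
                             "to revise and improve the following essay. Give "
                             "an overall end comment; do not rewrite it.")},
                {"role": "user", "content": draft_text},
            ],
        )
        return response.choices[0].message.content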

But the key phrase there is “yet,” and it does not take a tremendous amount of imagination to figure out how this could work in the near future. For example, what if I could train my own grading AI by feeding it a few classes’ worth of previous student essays along with my comments on them? I don’t know, logistically, how that would work, but I am willing to bet that with enough training, a Krause-centric version of ChatGPT could anticipate most of the comments I would make myself on a student writing project. I’m sure it would be far from perfect, and I’d still want to do my own reading and evaluation. But I bet it would save me a lot of time.
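To make the “feeding it” part a little more concrete: OpenAI’s fine-tuning endpoint accepts training examples as JSONL chat transcripts, so preparing a few classes’ worth of graded essays might look something like the sketch below. The essays/ and comments/ folders are entirely hypothetical stand-ins for wherever those old files actually live, and this only builds the training file– running the fine-tuning job itself would be a separate step:

    import json
    from pathlib import Path

    # Hypothetical layout: essays/smith-essay1.txt pairs with comments/smith-essay1.txt
    SYSTEM_PROMPT = ("You are a first year writing instructor. Read the "
                     "student essay and write the kind of end comment this "
                     "instructor typically writes.")

    with open("grading_examples.jsonl", "w") as out:
        for essay_path in sorted(Path("essays").glob("*.txt")):
            comment_path = Path("comments") / essay_path.name
            example = {
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": essay_path.read_text()},
                    {"role": "assistant", "content": comment_path.read_text()},
                ]
            }
            out.write(json.dumps(example) + "\n")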

Maybe, some time in the future, this will be a real app. But there’s another use of ChatGPT I’ve been playing around with lately, one I hesitate to try but one that would both help some of my struggling students and save me time on grading. I mentioned this in my first post about using ChatGPT to teach way back in December. What I’ve found in my ChatGPT noodling (so far) is that if I take a piece of writing that has a ton of errors in it (incomplete sentences, punctuation in the wrong place, run-on/meandering sentences, stuff like that– all very common issues, especially for first year writing students) and prompt ChatGPT to revise the text so it is grammatically correct, it does a wonderful job. It doesn’t change the meaning or argument of the writing– just the grammar. It generally doesn’t make different word choices and it certainly doesn’t make the student’s argument “smarter”; it just arranges everything so it’s correct.
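
If you want to try the same thing through the API rather than the chat window, the sketch is nearly identical to the earlier one– only the prompt changes (and again, the file name is a hypothetical placeholder):

```python
# Same idea as the earlier sketch, different prompt: ask ChatGPT only to
# fix the grammar. "draft.txt" is a hypothetical file of student writing.
from openai import OpenAI

client = OpenAI()

with open("draft.txt") as f:
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Revise the following text so it is grammatically "
                   "correct:\n\n" + draft,
    }],
)

print(response.choices[0].message.content)
```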

That might not seem like much, but for a lot of students who struggle with getting these basics right, using ChatGPT like this could really help. And to paraphrase Edward Brent from way back in 2005, if students could use a tool like this to at least deal with basic issues like writing more or less grammatically correct sentences, then I might be able to spend more time concentrating on the student’s analysis, argument, use of evidence, and so forth.

And yet– I don’t know, it even feels to me like a step too far.

I have students with diagnosed learning difficulties of one sort or another who show me letters of accommodation from the campus disability resource center which specifically tell me I should allow them to use Grammarly in their writing process. I encourage students to go to the writing center all the time, in part because I want my students– especially the struggling ones– to sit down with a consultant who will help them go through their essays so they can revise and improve them. I never have a problem with students wanting to get feedback on their work from a parent or a friend who is “really good” at writing.

So why does it feel like encouraging students to try this in ChatGPT is more like cheating than it does for me to encourage students to be sure to spell check and to check out the grammar suggestions made by Google Docs? Is it too far? Maybe I’ll find out in class next week.

AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is that my students and I have spent the last month or so noodling around with the software and reading about both the potential and the dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, arguing that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And we still had to edit the different pieces together, write transitions between AI-generated chunks of text, and so forth.
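
To make that workflow concrete, here’s a rough sketch of what “several different prompts” might look like scripted out– one prompt per chunk, then a human edits the chunks together. (We actually worked in the openai.com playground rather than writing scripts, and the prompts here are invented examples, not the ones my students used.)

```python
# A sketch of the stitch-it-together workflow: one prompt per chunk of
# the essay, then a human edits the pieces into a draft. The prompts are
# invented examples for illustration.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Write two paragraphs introducing the debate over AI text generators.",
    "Write two paragraphs on how students might use GPT-3 to brainstorm.",
    "Write a short concluding paragraph about AI and writing instruction.",
]

chunks = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    chunks.append(response.choices[0].message.content)

# The output is raw material, not a finished essay: the writer still has
# to cut, rearrange, and write transitions between the chunks.
print("\n\n[TRANSITION NEEDED]\n\n".join(chunks))
```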

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool. These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true of Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”

I think this limitation (along with the fact that GPT-3 and ChatGPT are not capable of searching the internet) makes GPT-3 kind of a deal-breaker as a plagiarism tool in any research writing class. It certainly would not get students far in most sections of freshman comp, where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts with making writing assignments that are specific, that require an investment in things like prewriting, and that build in a series of smaller assignments and activities that serve as “scaffolding” for a larger writing project. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Most poorly designed writing assignments double as ready-made prompts to enter into GPT-3. The results are definitely impressive, but I don’t think they’re quite good enough to produce work a would-be cheater could pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000 word requirement, they both kind of read like… well, like a robot wrote them. I think most instructors who received one of these essays from a student– particularly in an introductory class– would suspect that the student cheated. And when I asked ChatGPT to write a well researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing, and ChatGPT changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from the “college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to the computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. And academics in computer science and other fields connected to AI now have a reason to talk with the English-types about how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have only spent so much time experimenting with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” payment plan, I might have learned more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

A few big picture takeaways from my research about teaching online during Covid

Despite not posting here all summer, I’ve been busy. I’ve been on what is called at EMU a “Faculty Research Fellowship” since January 2022, writing and researching about teaching online during Covid. These FRFs are one of the nicer perks here. FRFs are competitive awards for faculty to get released from teaching (but not service obligations), and faculty can apply every two years. Since I’m not on any committees right now, it was pretty much the same thing as a sabbatical: I had to go to some meetings and I was working with a grad student on her MA project as well, but for the most part, my time was my own. Annette also had an FRF at the same time.

I’ve had these FRFs before, but I’ve never gotten as much research done as I did on this one. Oh sure, there was some vacationing and travel, usually also involving some work– anyone who follows me on Facebook or Instagram is probably already aware of that– but I’m happy with what I managed to get done. Among other things:

  • I conducted 37 interviews with folks who took my original survey about online teaching during Covid and agreed to talk. Altogether, it’s probably close to 50 hours’ worth of recordings and maybe 1000 pages of transcripts– more on that later.
  • I “gave presentations” at the CCCCs and at the Computers and Writing conference. I use the scare quotes because both were online and “on demand” presentations, which is to say not even close to the way I would have run an online conference (not that anyone asked). On the plus side, both presentations were essentially pre-writing activities for other things, and both also count enough at EMU to justify my keeping a 3-3 teaching load.
  • Plus I have an article coming out about all this work in Computers and Composition Online. It is/will be called “The Role of Previous Online Teaching Experience During the Covid Pandemic: An Exploratory Study of Faculty Perceptions and Approaches” (which should give you a sense of what it’s about), and hopefully it will be a “live” article/ website/ publication in the next month or two.

The next steps are going to involve reviewing the transcriptions (made quite a bit easier than it used to be by a website/software called Otter.ai) and coding everything to see what I’ve got. I’m not quite sure what I mean by “code” yet– whether it is going to be something systematic that follows the advice in various manuals/textbooks about coding and analyzing qualitative data, or whether it is going to be closer to what I did with the interviews I conducted for the MOOC book, where my methodology could probably best be described as journalism. Either way, I have a feeling that’s a project that is going to keep me busy for a couple of years.
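
Just to give a sense of the more systematic end of that spectrum– and this is purely a toy illustration, not a method I’ve settled on– the bookkeeping side of “coding” interview data can be as simple as tallying hand-applied code labels across transcript excerpts. The labels and file layout here are entirely hypothetical:

```python
# A toy sketch of tallying hand-applied qualitative codes: each row of a
# (hypothetical) CSV holds an interviewee, an excerpt, and the code label
# a human assigned to it; this just counts how often each code appears.
import csv
from collections import Counter

code_counts = Counter()
with open("coded_excerpts.csv") as f:
    for row in csv.DictReader(f):  # columns: interviewee, excerpt, code
        code_counts[row["code"]] += 1

for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```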

But as I reflect on the end of my research fellowship time and also as I gear up for actually teaching again this fall, I thought I’d write a bit about some of the big picture takeaways I have from all of those interviews so far.

First off, I still think it’s weird that so many people taught online synchronously during Covid. I’ve blogged here and written in other places before about how this didn’t make sense to me when we started the “natural experiment” of online teaching during Covid, and after a lot of research and interviews, it still doesn’t make sense to me.

I’m not saying that synchronous teaching with Zoom and similar tools didn’t work, though I think one pattern that will emerge when I dig more into the interviews is that faculty who taught synchronously and who went beyond just lecturing into Zoom (they included asynch activities, they used Zoom features like the chat window or breakout rooms, etc.) had better experiences than those who just used Zoom to lecture. It’s also clear that the distinction between asynchronous and synchronous online teaching was fuzzy. Still, given that 85% or so of all online courses in US higher ed prior to Covid were taught only asynchronously, it is still weird to me that so many people new to teaching online knowingly (or, more likely, unknowingly) decided to take an approach that was (and still is) at odds with what’s considered the standard and “best practice” in distance education.

Second and very broadly speaking, I think faculty who elected to teach online synchronously during Covid did so for some combination of three reasons, more or less in descending order:

  • Most of the people who responded to my survey who taught online synchronously said their institution gave faculty a number of different options in terms of mode of teaching (f2f, hybrid, synch, asynch, etc.), and that seems to have been true generally speaking across higher ed. But a lot of institutions– especially ones that focus on the traditional undergraduate college experience for 18-22 year olds and that offered few online courses before Covid– encouraged (and in some cases required) their faculty to teach synchronously. And a lot of faculty I interviewed did say that the synchronous experience was indeed a “better than nothing” substitute for what these students couldn’t do on campus.

(It’s worth noting that I think this was striking to me in part because I’ve spent my career as a professor at a university where at least half of our students commute some distance to get to campus, are attending part-time, are returning adult students, etc. Institutions like mine have been teaching a significant percentage of classes online for quite a while.)

  • They thought it’d be the easiest way to make the transition to teaching online. I think Sorel Reisman nailed it in his IEEE article when he said: “Teachers can essentially keep doing their quasi-Socratic, one-to-many lecture teaching the way they always have. In a nutshell, Zoom is the lazy person’s way to teach online.” Reisman is okay with this because, even though it is far from the approach he would prefer, it at least gets instructors to engage with the technology. I don’t agree with him about that, but it’s hard to deny that he’s right about how Zoom enabled the far too popular (and largely ineffective) sage-on-the-stage lecture hall experience.
  • But I think the most common reason faculty decided to teach online synchronously is that it didn’t even occur to them that the medium of delivery for a class would make any difference. In other words, it’s not that they decided to teach synchronously because they were encouraged to do so or even because they thought redesigning their courses to teach online asynchronously would be too much work. Rather, I think most faculty who had no previous experience teaching online didn’t think about the method/medium of delivery at all and just delivered the same content (and activities) they always had before.

Maybe I’m splitting hairs here and these are all three sides (!) of the same coin; then again, maybe not. I read a column by Ezra Klein recently with the headline “I Didn’t Want It to Be True, but the Medium Really Is the Message.” He is not talking about online teaching at all but rather about the media landscape as it has been evolving and how his “love affair” with the internet and social media has faded in that time. Klein is a smart guy and I usually agree with and admire his columns, but this one kind of puzzles me. He writes about how he had been reading Nicholas Carr’s 2010 book The Shallows: What the Internet Is Doing to Our Brains, and how he seems to have only now discovered Marshall McLuhan, Walter Ong, and Neil Postman, all of whom wrote about the importance of the medium that carries messages and content. For example:

We’ve been told — and taught — that mediums are neutral and content is king. You can’t say anything about television. The question is whether you’re watching “The Kardashians” or “The Sopranos,” “Sesame Street” or “Paw Patrol.” To say you read books is to say nothing at all: Are you imbibing potboilers or histories of 18th-century Europe? Twitter is just the new town square; if your feed is a hellscape of infighting and outrage, it’s on you to curate your experience more tightly.

There is truth to this, of course. But there is less truth to it than to the opposite. McLuhan’s view is that mediums matter more than content; it’s the common rules that govern all creation and consumption across a medium that change people and society. Oral culture teaches us to think one way, written culture another. Television turned everything into entertainment, and social media taught us to think with the crowd.

Now, I will admit that since I studied rhetoric, I’m quite familiar with McLuhan and Ong (less so with Postman), and the concept that the medium (aka “rhetorical situation”) does indeed matter a lot is not exactly new. But, I don’t know, have normal people really been told and taught that mediums are neutral? That all that matters is the content? Really? It seems like such a strange and obvious oversight to me. Then again, maybe not.

Third, the main challenge and surprise for most faculty new to online teaching (and also for faculty not so new to it) is the preparation. I mean this in at least two ways. First off, the hardest part for me about teaching online has always been shifting material and experiences from the synchronous f2f setting to the asynchronous online one. It’s a lot easier for me to respond to student questions in real time when we’re all sitting in the same room, in part because I can “read the room.” Students who are confused and who have questions rarely say (f2f or online) “I’m confused and I have some questions here,” but I can usually figure out the issues when I’m f2f. In online courses– certainly in the asynch ones, but I think this was also mostly true for synch ones– it’s impossible to adjust in the moment like that. This is why in-advance/up-front preparation is so much more important for online courses. As an instructor, I have to explain things and set things up ahead of time to anticipate likely questions and points of confusion. That’s hard to do when you haven’t taught something previously, and it’s impossible to do without a fair amount of preparation.

Which leads to my second point: a lot of faculty, especially in English and other disciplines in the humanities, don’t do as much ahead-of-time preparation to teach as they probably should. Rather, a lot of the faculty I interviewed and a lot of faculty I know essentially have a pedagogical approach of structured improvisation, sometimes to the point of just “winging it.”

This can work out great f2f. I’m thinking of the way accomplished musicians can improvise and interpret a song on the fly (and more than one of the people I interviewed about teaching online for the first time used an analogy like this). A lot of instructors are very good as performers in f2f class settings because they are especially good lecturers, they’re especially good at building interpersonal relationships with their students, and they’re especially charismatic people. They’re prepared ahead of time, sure, and chances are they’ve done similar performances in f2f classes for a while. These are the kind of instructors who really feed off of the energy of live and in-person students. These are also the kind of instructors who, based again mostly on some of the interviews, were the most unhappy about teaching online.

But this simply does not work AT ALL online, and I think it is only marginally more possible to take this approach to teaching with Zoom. If the ideal performance of an instructor in a f2f class is like that of a jazz musician, a stand-up comedian, or a similar kind of stage performer, an online class instructor’s ideal performance has to be more like the final product of a well-produced movie or TV show: practiced, scripted, performed, and edited, and then ultimately recorded and made available for on-demand streaming.

And let’s be clear: a lot of faculty (myself included) are not at their best when they try the structured improvisation/winging it approach in f2f classrooms. I’ve done many many teaching observations over the years, and I am here to tell you that there are a lot of instructors who think they are good at this kind of performance who aren’t. I know I’m not as good of a teacher when I try this, and I think that’s something that became clear to me when I started teaching some of my classes online (asynchronously, of course) about 15 or so years ago. So for me, my online teaching practices and preparations do more to shape my f2f practices and preparations than the other way around.

In any event, the FRF semester and summer are about over and the fall semester is about here. We start at EMU on Monday, and I am teaching one class f2f for the first time since Winter 2020. Here’s hoping I remember where to stand.