The Year That Was 2023

Last year was, in a lot of ways, A LOT. I was originally going to make this a post about only “life” stuff, but I decided to make some mention of work stuff/AI stuff too. And it’s probably one of those posts no one other than me is going to read, but whatever.

Okay, let’s see:

The AI stuff for me actually began in late 2022 when I was teaching a class where I included an AI assignment and I wrote a blog post called “AI Can Save Writing by Killing ‘The College Essay.'” That post got (what is for me) a lot of hits, over 4,000 since this time last year. I’ve said it before and I’ll say it again: the reason why I keep blogging (at least once in a while) is that when a blog post hits like this, it gets my writing into circulation with an interested audience better than anything I do. Real scholarly publications don’t even come close.  Most of my blog posts remain mostly unread– and most of the scholarly things I’ve published or presented are in the same boat. But every once in a while, one hits like this one.

I didn’t blog at all in January– though I posted a lot of links to stuff I had been reading about AI– and I was busy getting what were three different courses prepped and running. I taught our MA research methods class using Johnny Saldaña’s The Coding Manual for Qualitative Researchers as the main text, in part because at this stage, I was also still trying to figure out how to code and analyze the hundreds of pages of transcripts from faculty interviews about their experiences teaching online during Covid. I think the book was overkill for both my students and my purposes as a researcher, to be honest. I taught a 300-level research writing course where I decided to use Try This: Research Methods for Writers by Jennifer Clary-Lemon, Derek Mueller, and Kate Pantelides. It’s an interesting textbook which focuses mostly on primary research (rather than secondary– aka, look stuff up in the library and online). I thought that class was so-so as well, not because of the book (which I would definitely use again, and I hope I get the chance to teach this class again, maybe next year) but because of some of the things I did or didn’t do in the class.

And for my section of first year writing, I did something I haven’t done since I was an MFA student: I actually assigned a book, a real book (not a textbook), for students to read. I have no objections to the program textbook; I just wanted to try something I haven’t done in a long, long time. I had students read Johann Hari’s Stolen Focus: Why You Can’t Pay Attention– and How to Think Deeply Again, and I also assigned parts of my own textbook and a number of essays from Writing Spaces. I wrote a bit about this as a part of this post from last May; basically, students did their research projects about something that Hari talked about in his book. It’s the kind of thing I can get away with after teaching this for most of my life and I don’t need to rely on the textbook, if that makes sense. I did the same thing for my section of first year writing this past fall, and I’m planning on doing it again (times three, and probably for the last time) this coming term. Setting aside the specifics of Hari’s book, I do think there is something to be said for assigning a mainstream non-fiction trade book like this. It sets the theme of the research students will be doing (I never allow students to do research about whatever they want, for a ton of different reasons), and it also provides an example of how to use a variety of different kinds of research to make an argument. More on a lot of this later, I’m sure.

Nothing too exciting that I can remember about last January or most of February– just weather, snow and ice respectively. The end of February/beginning of March, though, was a lot more interesting with our trip to Los Angeles. I think both Annette and I were kind of prepared to not like it much, but I’ve got to say we had a really good time. Yeah, it’s a lot of driving around and WAY too expensive, but I get why people want to live out there. Highlights included the TMZ tour, watching a taping of Jeopardy!, and a couple nights at The Comedy Store (it wasn’t part of the plan, but we stayed at a hotel across the street– lots of fun). Not so much a highlight was an extra day and a half trapped at a hotel by LAX because of bad weather in Detroit.

We had a fun birthday/birthmonth for me dinner at Freya in Detroit in late March, the semester wrapped up, and I had the chance to talk to folks at Hope College about AI stuff in late April. As I blogged about then, that’s a prime example of why I still blog: someone at Hope read that blog post I wrote back in December 2022, liked it, and invited me to do a talk, which was pretty cool. They would have done it over Zoom, but I actually wanted to make a little trip out of it; Annette came along and we did a little Holland tourism, including taking a picture of a windmill.

May brought a good crop of asparagus, and June was the beginning of a fair amount of travel for both of us. We went “up north,” as they say, at a rental on Big Glen Lake. Did some hiking around, ate some fancy food, and saw a good friend rocking out– a pretty typical stay for us up there. We came home after a week, and a few days later, Annette went out to a conference in Seattle. A couple days after she got back, I went to the Computers and Writing Conference at UC Davis– a good conference, I thought. And then, about a week after that, Annette and I went on a Gate 1 tour that started in Dubrovnik and went on to Montenegro, Split, Plitvice Lakes National Park, a bit of Opatija, the Postojna Caves, the super pretty tourist town Bled (which featured some silly dancing on the second night), the capital of Slovenia, Ljubljana, a brief but nice stop in Trieste, and then Venice: one day with Gate 1 and two more on our own. It was a great trip– though super-busy, and super hot: quite literally, our trip through southern Europe corresponded with the hottest weather ever recorded on Earth– at least up until that point.

Oh yeah, we came home with Covid, too! I am positive we caught it while actually on the tour bus. There was a couple we chatted with a few times who were both feeling like shit– a bad cold, maybe a flu, they thought– but I know I was sick before I got on the plane back home. I think Annette was too, but it hit her a little later. We were (and are again! just got a booster back in October or so) all vaxxed up, so I like to think that helped it all be not that big of a deal. Actually, I know many MANY more people who had Covid this last year than I did during the worst of the pandemic.

More summer came, we had visits from both my parents and Annette’s, I made a somewhat impromptu road trip to Iowa in late August to get together with “the originals”– that is, just my parents and sisters without all the spouses and kids– and then the school year started up. As I blogged about in September, this is the first time I can remember since entering my PhD program that I did not have some kind of “scholarly project” cooking in one stage or another. I’m not really working on anything right now (though I did have a couple of different things come out last year, in addition to my talk at CWCON). The break has been good, though I have a feeling I’m going to start doing at least a bit more research/scholarship about teaching with AI this coming year.

I got a chance to give another AI and teaching talk (or lead a discussion/workshop, depending on what you want to call it), this time via Zoom and as part of a faculty development event at Washtenaw Community College. I blogged about that too, and my sense from this event was that most faculty have figured out how to deal with AI. Funny what a difference a year makes with this.

Also for the first time ever, EMU had a “Fall Break” in October. A lot of universities have started doing this, actually, I think as a result of more post-Covid attention on college campuses to helping students with a little “self-care.” So we went to New York, met up with Will, hung around with our old friend Annette Saddik, saw Sweeney Todd, met up with Troy and Lisa, and generally spent way too much money on fun things.

Oh yeah, in October, we put down money on a new house– a brand-new house that’s being built right now in a subdivision in Pittsfield Township, sort of between where we are now and Saline. On the one hand, this might look like a surprising turn of events. We had talked about moving before and also about moving into a condo or something for a while. But a couple years ago, I would have never guessed we would be building a new house that pretty much looks like all the other new houses in a new suburban development (it was a cornfield two years ago) kind of on the outskirts of things. On the other hand, once we started really thinking about it, this started making sense. We like our house and neighborhood A LOT, but there are a number of things (mostly minor) that need to be fixed or upgraded around here, and there are other things we want (like a bigger kitchen and a more “open concept” living room/dining room area) that we can’t do here. And say what you want about a cookie-cutter house, this place has the layout and the newness we want. Plus the way the housing market is around here nowadays, there just isn’t much on the market. So new house it is. Stay tuned on that one.

Anyway, one of the things we’re really going to miss around here when we move, for sure, is Halloween— not expecting any trick or treaters in the new sub. Once again, my side of the family did a combined Thanksgiving/Christmas thing (which did include some cookie decorating), and of course a lot of family fun stuff. The semester wrapped, the school year ended, and (more or less), here we are.

Like I said: a lot.

I’m Still Dreaming of an AI Grading Agent (and a bunch of AI things about teaching and writing)

I’m in the thick of the fall semester and I’ve been too busy to think/read/write much about AI for a while. Honestly, I’m too busy to be writing this right now, but I’ve also got a bucket full of AI tabs open in my browser, so I thought I’d procrastinate a bit and do a “round up” post.

In my own classes, students seem to be either leery of or unimpressed with AI. I’ve encouraged my more advanced students to experiment with/play around with AI to help with the assignments, but absent me requiring them to do something with AI, they don’t seem too interested. I’ve talked to my first year writing students about using AI to brainstorm and to revise (and to be careful about trusting what the AI presents as “facts”), but again, they don’t seem interested. I have had at least one (and perhaps more than that) student who tried to use AI to cheat, but it was easy to spot. As I have said before, I think most students want to do the work themselves and to actually learn something, and the students who are inclined to cheat with AI (or just a Google search) are far from criminal geniuses.

That said, there is this report, “GenAI in Higher Education: Fall 2023 Update Time for Class Study,” which was research done by a consulting firm called Tyton Partners and sponsored by Turnitin. I haven’t had a chance to read beyond the executive summary, but they claim half of students are “regular users” of generative AI, though their use is “relatively unsophisticated.” Well, unless a lot of my students are not telling me the truth about using AIs, this isn’t my impression. Of course, they might be using AI stuff more for other classes.

Here’s a very local story about how AI is being used in at least one K-12 school district: “‘AI is here.’ Ypsilanti schools weigh integrity, ethics of new technology,” from MLive. Interestingly, a lot of what this story is about is how teachers are using AI to develop assignments, and also to do some things like helping students who don’t necessarily speak English as their native language:

Serving the roughly 30% of [Ypsilanti Community Schools] students who can speak a language other than English, the English Learner Department has found multiple ways to bring AI into the classroom, including helping teachers develop multilingual explanations of core concepts discussed in the curriculum — and save time doing it.

“A lot of that time saving allows us to focus more on giving that important feedback that allows students to grow an be aware of their progress and their learning,” [Teacher Connor] Laporte said.

Laporte uses an example of a Spanish-speaking intern who improved a vocabulary test by double-checking the translations and using ChatGPT to add more vocabulary words and exercises. Another intern then used ChatGPT to make a French version of the same worksheet.

A lot of this article is about how teachers have moved beyond being terrified that AI will ruin everything to seeing it as a tool to work with in their teaching. That’s happening in lots of places and in lots of ways; for example, as Inside Higher Ed noted, “Art Schools Get Creative Tackling AI.” It’s a long piece with a somewhat similar theme: not necessarily embracing AI, but also recognizing the need to work with it.

MLA apparently now has “rules” for how to cite AI. I guess maybe it isn’t the end of the essay then, huh? Of course, that doesn’t mean that a lot of writers are going to be happy about AI.  This one is from a while ago, but in The Atlantic back in September, Alex Reisner wrote about “These 183,000 Books are Fueling the Biggest Fight in Publishing and Tech.” Reisner had written earlier about how Meta’s AI systems were being trained on a collection of more than 191,000 books that were often used without permission. The article has a search feature so you can see if your book(s) were a part of that collection. For what it’s worth, my book and co-edited collection about MOOCs did not make the cut.

Several famous people/famous writers are now involved in various lawsuits where the writers are suing the AI companies for using their work without permission to train (“teach?”) the AIs. There’s a part of me that is more than sympathetic to these lawsuits. After all, I never thought it was fair that companies like Turnitin can use student writing without permission as part of its database for detecting plagiarism. Arguably, this is similar.

But on the other hand, OpenAI et al didn’t “copy” passages from Sarah Silverman or Margaret Atwood or my friend Dennis Danvers (he’s in that database!) and then try to present that work as something the AI wrote. Rather, they trained (taught?) the AI by having the program “read” these books. Isn’t that just how learning works? I mean, everything I’ve ever written has been influenced in direct and indirect ways by other texts I’ve read (or watched, listened to, seen, etc). Other than scale (because I sure as heck have not read 183,000 books), what’s the difference between me “training” by reading the work of others and the AI doing this?

Of course, even with all of this training and the continual tweaking of the software, AIs still have the problem of making shit up. Cade Metz wrote in The New York Times “Chatbots May ‘Hallucinate’ More Often Than Many Realize.” Among other things, the article is about a new start-up called Vectara that is trying to estimate just how often AIs “hallucinate,” and (to leap ahead a bit) they estimated that different AIs hallucinate at different rates ranging from 3% to 27% of the time. But it’s a little more complicated than that.

Because these chatbots can respond to almost any request in an unlimited number of ways, there is no way of definitively determining how often they hallucinate. “You would have to look at all of the world’s information,” said Simon Hughes, the Vectara researcher who led the project.

Dr. Hughes and his team asked these systems to perform a single, straightforward task that is readily verified: Summarize news articles. Even then, the chatbots persistently invented information.

“We gave the system 10 to 20 facts and asked for a summary of those facts,” said Amr Awadallah, the chief executive of Vectara and a former Google executive. “That the system can still introduce errors is a fundamental problem.”

If I’m understanding this correctly, this means that even when you give the AI a fairly small data-set to analyze (10-20 “facts”), the AI still makes shit up, introducing things that were not part of that data-set. That’s a problem.
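Just to make that concrete, here’s a toy version of the kind of measurement Vectara is doing. To be clear, this is my own crude sketch (they use a trained model, not word matching); it just illustrates the idea of checking each summary sentence against the source facts:

```python
# Toy "hallucination rate": flag summary sentences that share no
# content words with any source fact. This is a crude stand-in for
# what Vectara actually does, just to make the idea concrete.
def content_words(text: str) -> set[str]:
    stop = {"the", "a", "an", "of", "to", "and", "in", "is", "was"}
    return {w.strip(".,").lower() for w in text.split()} - stop

def hallucination_rate(facts: list[str], summary_sentences: list[str]) -> float:
    if not summary_sentences:
        return 0.0
    fact_words = set().union(*(content_words(f) for f in facts))
    unsupported = [
        s for s in summary_sentences
        if not (content_words(s) & fact_words)  # no overlap with any fact
    ]
    return len(unsupported) / len(summary_sentences)

facts = ["The cat sat on the mat.", "The dog barked at night."]
summary = ["A cat sat on a mat.", "The parrot sang opera."]
rate = hallucination_rate(facts, summary)  # 0.5: one of two sentences is unsupported
```

Even this dumb version makes the point: a “hallucination rate” is just the fraction of claims in the output you can’t trace back to the source material.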

But it still might not stop me from trying to develop some kind of ChatGPT/AI-based grading tool, and that might be about to get a lot easier. (BTW, talk about burying the lede after that headline!)  OpenAI announced something they’re calling (very confusingly) “GPTs,” which (according to this article by Devin Coldewey in TechCrunch) is “a way for anyone to build their own version of the popular conversational AI system. Not only can you make your own GPT for fun or productivity, but you’ll soon be able to publish it on a marketplace they call the GPT Store — and maybe even make a little cash in the process.”

Needless to say, my first thought was could I use this to make an AI Grading tool? And do I have the technical skills?

As far as I can tell from OpenAI’s announcement about this, GPTs require upgrading to their $20 a month package and it’s just getting started– the GPT store is rolling out later this month, for example. Kevin Roose of The New York Times has a thoughtful and detailed article about the dangers and potentials of these things, “Personalized A.I. Agents Are Here. Is the World Ready for Them?” User-created agents will very soon be able to automate responses to questions (that OpenAI announcement has examples like a “Creative Writing Coach,” a “Tech Advisor” for trouble-shooting things, and a “Game Time” advisor that can explain the rules of card and board games). Roose writes a fair amount about how this technology could also be used by customer service or human resource offices, and to handle things like responding to emails or updating schedules. Plus none of this requires any actual programming skills, so I am imagining something like “If This Then That” but much more powerful.

AI agents might also be made to do evil things, which has a lot of security people worried for obvious reasons. Though I don’t think these agents are going to be powerful enough to do anything too terrible; actually, I don’t think these agents will have the capabilities to make the AI grading app I want, at least not yet. Roose got early access to the OpenAI project, and his article has a couple of examples of how he played around with it:

The first custom bot I made was “Day Care Helper,” a tool for responding to questions about my son’s day care. As the sleep-deprived parent of a toddler, I’m always forgetting details — whether we can send a snack with peanuts or not, whether day care is open or closed for certain holidays — and looking everything up in the parent handbook is a pain.

So I uploaded the parent handbook to OpenAI’s GPT creator tool, and in a matter of seconds, I had a chatbot that I could use to easily look up the answers to my questions. It worked impressively well, especially after I changed its instructions to clarify that it was supposed to respond using only information from the handbook, and not make up an answer to questions the handbook didn’t address.

That sounds pretty cool, and I bet I could create an AI agent capable of writing a summative end-comment on a student essay based on a detailed grading rubric I feed into the machine. But that’s a long way from doing the kind of marginal commenting on student essays that responds to particular sentences, phrases, and paragraphs. I want an AI agent/grading tool that can “read” a piece of student writing more like the way I would actually read and comment on it, not one that’s limited to a rubric.

But this is getting a lot closer to being potentially useful– not a substitute for me actually reading and evaluating student writing, but a tool to make that easier to do. Right now, the free version of ChatGPT does a good job of revising away grammar and style mistakes and errors, so maybe instead of me making marginal comments on a draft about these issues, students can first try using the AI to help them do this kind of low-level revision before they turn it in. That, combined with a detailed end comment from the AI, might actually work well. I’m not quite sure if this would actually save me any time, since it seems like setting up the AI to do this would take a while, and I have a feeling I’d have to set up the AI agent for every unique assignment. Plus, in addition to the setup time, this would cost me $20 a month.
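For what it’s worth, here’s a minimal sketch of the kind of tool I’m imagining, written against OpenAI’s Python client. The rubric, the model name, and the prompt wording are all placeholders I made up; this is a thought experiment, not a finished (or tested) grading tool:

```python
# Sketch of a rubric-based end-comment generator. The rubric text,
# model name, and prompt below are placeholders, not anyone's real
# grading criteria.

RUBRIC = """\
1. Thesis: makes a clear, arguable claim.
2. Evidence: sources are cited and evaluated, not just quoted.
3. Organization: paragraphs build on one another.
"""

def build_messages(rubric: str, essay: str) -> list[dict]:
    # Kept as a pure function so the prompt is easy to inspect
    # before spending any API money on it.
    system = (
        "You are a writing teacher. Using ONLY the rubric below, write a "
        "short, encouraging end comment on the student essay. Do not "
        "invent criteria that are not in the rubric.\n\n" + rubric
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": essay},
    ]

def end_comment(essay: str, model: str = "gpt-4") -> str:
    # Deferred import so the prompt helper above works even without
    # the openai package installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model, messages=build_messages(RUBRIC, essay)
    )
    return response.choices[0].message.content

# Usage (needs an API key, a paid account, and a real essay):
#   comment = end_comment(open("student_essay.txt").read())
```

And of course the part that would eat all my time isn’t the code; it’s rewriting that rubric prompt for every unique assignment.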

Maybe for next semester….

So, What About AI Now? (A talk and an update)

A couple of weeks ago, I gave a talk/led a discussion called “So, What About AI Now?” That’s a link to my slides. The talk/discussion was for a faculty development program at Washtenaw Community College, a program organized by my friend, colleague, and former student, Hava Levitt-Phillips.

I covered some of the territory I’ve been writing about here for a while now and I thought both the talk and discussion went well. I think most of the people at this thing (it was over Zoom, so it was a little hard to read the room) had seen enough stories like this one on 60 Minutes the other night: Artificial Intelligence is going to at least be as transformative of a technology as “the internet,” and there is not a zero percent chance that it could end civilization as we know it. All of which is to say we probably need to put the dangers of a few college kids using AI (badly) to cheat on poorly designed assignments into perspective.

I also talked about how we really need to question some of the more dubious claims in the MSM about the powers of AI, such as the article in the Chronicle of Higher Education this past summer, “GPT-4 Can Already Pass Freshman Year at Harvard.” I blogged about that nonsense a couple months ago here, but the gist of what I wrote there is that all of these claims of AI being able to pass all these tests and freshman year at Harvard (etc.) are wrong. Besides the fact that the way a lot of these tests are run makes the claims bogus (and that is definitely the case with this CHE piece), students in our classes still need to show up– and I mean that for both f2f and online courses.

And as we talked about at this session, if a teacher gives students some kind of assignment (an essay, an exam, whatever) that can be successfully completed without ever attending class, then that’s a bad assignment.

So the sense that I got from this group– folks teaching right now the kinds of classes where (according to a lot of the nonsense that’s been in MSM for months) cheating with ChatGPT et al was going to make it impossible to assign writing anymore, not in college and not in high school— is that it hasn’t been that big of a deal. Sure, a few folks talked about students who tried to cheat with AI and were easily caught, but for the most part it hadn’t been much of a problem. The faculty in this group seemed more interested in trying to figure out ways to make use of AI in their teaching than they were worried about cheating.

I’m not trying to suggest there’s no reason to worry about what AI means for the future of… well, everything, including education. Any of us who are “knowledge workers”– that is, teachers, professors, lawyers, scientists, doctors, accountants, etc. etc.– needs to pay attention to AI because there’s no question this shit is going to change the way we do our jobs. But my sense from this group (and just the general vibe I get on campus and in social media) is that the freak-out about AI is over, which is good.

One last thing though:  just the other day (long after this talk), I saw what I believe to be my first case of a student trying to cheat with ChatGPT– sort of. I don’t want to go into too many details since this is a student in one of my classes right now. But basically, this student (who is struggling quite a bit) turned in a piece of writing that was first and foremost not the assignment I gave, and it also just happened this person used ChatGPT to generate a lot of the text. So as we met to talk about what the actual assignment was and how this student needed to do it again, etc., I also started asking about what they turned in.

“Did you actually write this?” I asked. “This kind of seems like ChatGPT or something.”

“Well, I did use it for some of it, yes.”

“But you didn’t actually read this book ChatGPT is citing here, did you?”

“Well, no…”

And so forth.  Once again, a good reminder that students who resort to cheating with things like AI are far from criminal masterminds.

Fennel Salad

This is a very simple salad recipe my son Will asked about, so I thought I’d write it down for him here. What’s nice about this (besides being tasty and pretty easy to make) is that it travels really well, which makes it a good thing to bring to a dinner party or a pot luck or something.

This is also very VERY adjustable, so I’m not bothering to put down too much in terms of measurements here.

Ingredients:

Fennel bulb(s). Sometimes I see this called “anise” in the produce section. I generally think one big fennel bulb is enough for four servings, but more is not bad. Be sure to keep the pretty and tasty fronds!

Lemon juice, fresh of course. For a one bulb salad, I’ll use the whole lemon.

Good olive oil.

Good and freshly grated parmesan cheese–don’t get the pre-shredded stuff.

Salt and Pepper to taste

Italian flat leaf parsley (optional)

Balsamic vinegar glaze (optional)

Procedure:

  • Prepare the fennel. To do that, cut off the fronds (though save the tops for garnish) so you’re just left with the bulb itself. Cut it in half so you slice through the middle of the core, which isn’t edible. Then cut out the core, being careful to keep the layers of fennel together. This is also when you would discard any brown or nasty parts on the outside layer of fennel.
  • If you have a mandolin slicer (good for you!), slice the fennel thinly; I usually go with a 1/16th inch setting on mine. If you don’t have a mandolin slicer or you’re just too lazy to get it out, no problem: just slice it as thin as you can with a sharp knife. Pile the fennel up into a salad bowl.
  • Chop up the reserved fennel fronds (and parsley if you’re using it), add about half of it to the fennel in the salad bowl and then give it a toss.
  • In a small jar with a lid or plastic storage container, add the juice of the lemon, a bit more than an equal amount of olive oil, and a dash of salt and pepper. Put on the lid and shake it, and then give it a taste to adjust. If you’re going to eat it right away (or within about 30 minutes or so), you can add the dressing to the fennel and toss; if you’re taking that salad someplace, keep the dressing in the container and take it with you.
  • Finely grate parmesan cheese. This is to taste of course, but for a one bulb salad, I will use a microplane grater (so the cheese is very thin) and grate up about a cup worth of cheese. If you’re not going anywhere, go ahead and grate the cheese right into the salad bowl; if you are traveling, put that cheese into a plastic baggy or similar container.
  • Toss the salad together. It’s delicious right away, but it will hold up well tossed up for at least an hour, longer if you put it in the fridge. When I’m feeling “fancy,” I’ll add a little garnish of balsamic vinegar glaze too because it is tasty and looks nice.

A Belated “Beginning of the School Year” Post: Just Teaching

I don’t always write a “beginning of the school year” post and when I do, it’s usually before school starts, some time in August, and not at the end of the second week of classes. But here we are, at what seasonally always feels to me a lot more like the start of the new year than January.

This is the start of my 25th year at EMU. This summer, I selected another one of those goofy “thanks for your service” gifts they give out in five year increments. Five years ago, I picked out a pretty nice casserole dish; this time, I picked out a globe, one which lights up.

I wrote a new school year post like this one in 2021, and back then, I (briefly) contemplated the faculty buyout offer. “Briefly” because as appealing as it was at the time to leave my job behind, there’s just no way I could afford it, and I’m not interested in starting some kind of different career. But here in 2023, I’m feeling good about getting back to work. Maybe it’s because I had a busy summer with lots of travel, some house guests, and a touch of Covid; after all of that, it’s just nice to have a change of pace and get back to a job. Or maybe it’s because (despite my recent case) we really are “past” Covid in the sense that EMU (like everywhere else) is no longer going through measures like social distancing, check-ins noting you’re negative, vax cards, free testing, etc. etc. This is not to say Covid is “over,” of course, because it’s still important for people to get vaxxed and to test. And while I know the people I see all the time who are continuing to wear masks everywhere think lowering our defenses to Covid is foolish, and it is true that cases right now are ticking up, the reality is that Covid has become something more or less like the flu: it can potentially kill you, sure, but it is also one of those things we have to live with.

Normally in these kinds of new school year posts, I mention various plans and resolutions for the upcoming year. I have a few personal and not unusual ones– lose weight, exercise more, read more, and so on– but I don’t have any goals that relate to work. I’m not involved in any demanding committees or other service things, and I’d kind of like to keep it that way. I’m also not in the midst of any scholarly projects, and I can’t remember the last time that was the case. And interestingly (at least for me), I don’t know if I’ll be doing another scholarly project at this point. Oh, I will go to conferences that are in places I want to visit, and I’ll keep blogging about AI and other academic-like things I find interesting. That’s a sort of scholarship, I suppose. I’d like to write more commentaries for outlets like IHE or CHE, maybe also something more MSM. But writing or editing another book or article? Meh.

(Note that this could all change on a dime.)

So that leaves teaching as my only focus as far as “the work” goes. I suppose that isn’t that unusual since even when I’ve got a lot going on in terms of scholarly projects and service obligations, teaching is still the bulk of my job. I’ll have plenty to do this semester because I’ve got three different classes (with three different preps), and one of them is a new class I’m sort of/kind of making up as I go.

Still, it feels a little different. I’ve always said that if being a professor just involved teaching my classes– that is, no real service or scholarly obligations– then that wouldn’t be too hard of a job. I guess I’ll get to test that this term.

No, an AI could not pass “freshman year” in college

I am fond of the phrase/quote/mantra/cliché “Ninety percent of success in life is just showing up,” which is usually attributed to Woody Allen. I don’t know if Woody was “the first” person to make this observation (probably not, and I’d prefer if it was someone else), but in my experience, this is very true.

This is why AIs can’t actually pass a college course or their freshman year or law school or whatever: they can’t show up. And it’s going to stay that way, at least until we’re dealing with advanced AI robots.

This is on my mind because my friend and colleague in the field, Seth Kahn, posted the other day on Facebook about this recent article from The Chronicle of Higher Education by Maya Bodnick, “GPT-4 Can Already Pass Freshman Year at Harvard.” (Bodnick is an undergraduate student at Harvard). It is yet another piece claiming that the AI is smart enough to do just fine on its own at one of the most prestigious universities in the world.

I agreed with all the other comments I saw on Seth’s post. In my comment (which I wrote before I actually read this CHE article), I repeated three points I’ve written about here or on social media before. First, ChatGPT and similar AIs can’t evaluate and cite academic research at even the modest levels I expect in a first year writing class. Second, while OpenAI proudly lists all the “simulated exams” where ChatGPT has excelled (LSAT, SAT, GRE, AP Art History, etc.), you have to click the “show more exams” button on that page to see that no version of their AI has managed better than a “2” on the AP English Language and Composition exam (or the Literature and Composition exam, for that matter). It takes a “3” on this exam to get any credit at EMU, and probably a “4” at a lot of other universities.

Third, I think mainstream media and all the rest of us really need to question these claims of AIs passing whatever tests and classes and whatnot much MUCH more carefully than I think most of us have to date. What I was thinking about when I made that last comment was another article published in CHE in early July, “A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ,” by Tom Bartlett. In this article, Bartlett discusses a study (which I don’t completely understand because it’s too much math and details) conducted by three MIT students (class of 2024) who researched the claim that an AI could “ace” MIT classes. The students determined this was bullshit. What were the students’ findings (at least the ones I could understand)? In some of the classes where the AI supposedly had a perfect score, the exams include unsolvable problems, so it’s not even possible to get a perfect score. In other examples, the exam questions the AI supposedly answered correctly did not provide enough information for that to be possible either. The students posted their results online, and at least some of the MIT professors who originally made the claims agreed and backtracked.

But then I read this Bodnick article, and holy-moly, this is even more bullshitty than I originally thought. Let me quote at length Bodnick describing her “methodology”:

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received:

  • Microeconomics and Macroeconomics (Jason Furman and David Laibson): Explain an economic concept creatively. (300-500 words for Micro and 800-1000 for Macro). Grade: A-
  • Latin American Politics (Steven Levitsky): What has caused the many presidential crises in Latin America in recent decades? (5-7 pages) Grade: B-
  • The American Presidency (Roger Porter): Pick a modern president and identify his three greatest successes and three greatest failures. (6-8 pages) Grade: A
  • Conflict Resolution (Daniel Shapiro): Describe a conflict in your life and give recommendations for how to negotiate it. (7-9 pages). Grade: A
  • Intermediate Spanish (Adriana Gutiérrez): Write a letter to activist Rigoberta Menchú. (550-600 words) Grade: B
  • Freshman Seminar on Proust (Virginie Greene): Close read a passage from In Search of Lost Time. (3-4 pages) Grade: Pass

I told these instructors that each essay might have been written by me or the AI in order to minimize response bias, although in fact they were all written by GPT-4, the recently updated version of the chatbot from OpenAI.

In order to generate these essays, I inputted the prompts (which were much more detailed than the summaries above) word for word into GPT-4. I submitted exactly the text GPT-4 produced, except that I asked the AI to expand on a couple of its ideas and sequenced its responses in order to meet the word count (GPT-4 only writes about 750 words at a time). Finally, I told the professors and TAs to grade these essays normally, except to ignore citations, which I didn’t include.

Not only can GPT-4 pass a typical social science and humanities-focused freshman year at Harvard, but it can get pretty good grades. As shown in the list above, GPT-4 got all A’s and B’s and one Pass.

JFC. Okay, let’s just think about this for a second:

  • We’re talking about three “essays” that are less than 1,000 words and another three that are slightly longer, and based on this work alone, GPT-4 “passed” a year of college at Harvard. That’s all it takes. Really? Really?! That’s it?
  • I would like to know more about what Bodnick means when she says that the writing prompts were “much more detailed than the summaries above” because those details matter a lot. But as summarized, these are terrible assignments. They aren’t connected with the context of the class or anything else. It would be easy to try to answer any of these questions with a minimal amount of Google searching and making educated guesses. I might be going out on a limb here, but I don’t think most writing assignments at Harvard or any other college– even badly assigned ones– are as simplistic as these.
  • It wasn’t just ChatGPT: she had to do some significant editing to put together ChatGPT’s short responses into longer essays. I don’t think the AI could have done that on its own. Unless it hired a tutor.
  • Asking instructors to not pay any attention to the lack of citation (and I am going to guess the need for sources to back up claims in the writing) is giving the AI way WAAAAYYY too much credit, especially since ChatGPT (and other AIs) usually make shit up– er, “hallucinate”– when citing evidence. I’m going to guess that even at Harvard, handing in hallucinations would result in a failing grade. And if the assignment required properly cited sources and the student didn’t do that, then that student would also probably fail.
  • It’s interesting (and Bodnick points this out too) that the texts that received the lowest grades are ones that ask students to “analyze” or to provide their opinions/thoughts, as opposed to assignments that were asking for an “information dump.” Again, I’m going to guess that, even at Harvard, there is a higher value placed on students demonstrating with their writing that they thought about something.

I could go on, but you get the idea. This article is nonsense. It proves literally nothing.

But I also want to return to where I started, the idea that a lot of what it means to succeed in anything (perhaps especially education) is showing up and doing the work. Because after what seems like the zillionth click-bait headline about how ChatGPT could graduate from college or be a lawyer or whatever because it passed a test (supposedly), it finally dawned on me what has been bothering me the most about these kinds of articles: that’s just not how it works! To be a college graduate or a lawyer or damn near anything else takes more than passing a test; it takes the work of showing up.

Granted, there has been a lot more interest and willingness in the last few decades to consider “life experience” credit as part of degrees, and some of these places are kind of legitimate institutions– Southern New Hampshire and the University of Phoenix immediately come to mind. But “life experience” credit is still considered mostly bullshit and the approach taken by a whole lot of diploma mills, and real online universities (like SNHU and Phoenix) still require students to mostly take actual courses, and that requires doing more than writing a couple papers and/or taking a couple of tests.

And sure, it is possible to become a lawyer in California, Vermont, Virginia and Washington without a law degree, and it is also possible to become a lawyer in New York or Maine with just a couple years of law school or an internship. But even these states still require some kind of experience with a law office, most states do require attorneys to have law degrees, and it’s not exactly easy to pass the bar without the experience you get from earning a law degree. Ask Kim Kardashian. 

Bodnick did not ask any of the faculty who evaluated her AI writing examples if it would be possible for a student to pass that professor’s class based solely on this writing sample because she already knew the answer: of course not.

Part of the grade in the courses I teach is based on attendance, participation in the class discussions and peer review, short responses to readings, and so forth. I think this is pretty standard– at least in the humanities. So if some eager ChatGPT enthusiast came to one of my classes– especially one like first year writing, where I post all of the assignments at the beginning of the semester (mainly because I’ve taught this course at least 100 times at this point)– and said to me “Hey Krause, I finished and handed in all the assignments! Does that mean I get an A and go home now?” Um, NO! THAT IS NOT HOW IT WORKS! And of course anyone familiar with how school works knows this.

Oh, and before anyone says “yeah, but what about in an online class?” Same thing! Most of the folks I know who teach online have a structure where students have to regularly participate and interact with assignments, discussions, and so forth. My attendance and participation policies for online courses are only slightly different from my f2f courses.

So please, CHE and MSM in general: stop. Just stop. ChatGPT can (sort of) pass a lot of tests and classes (with A LOT of prompting from the researchers who really really want ChatGPT to pass), but until that AI robot walks/rolls into a class or sets up its profile on Canvas all on its own, it can’t go to college.

Traveling Thoughts

Annette and I have done a lot of traveling this summer– a getaway to Glen Arbor, individual travel to conferences on the west coast (mine was Computers and Writing in Davis), and then a vacation/tour to Croatia, Slovenia, and Venice. Judging from my social media feeds, just about everyone I know was doing something similar. It was great! Though I will admit I could have done without the Covid we picked up at the tail end of our trip to Europe, but that’s a slightly different topic.

Shortly before we left on this latest trip, I read in The New Yorker Agnes Callard’s essay “The Case Against Travel.” At first, I thought I might have been reading it wrong because travel is so popular– or at least people very commonly describe travel (along with activities like reading and walking on the beach) as something they “love” to do. But no, Callard is quite earnest, though in an intentionally contrarian tone. This passage made me feel seen:

If you are inclined to dismiss this as contrarian posturing, try shifting the object of your thought from your own travel to that of others. At home or abroad, one tends to avoid “touristy” activities. “Tourism” is what we call traveling when other people are doing it. And, although people like to talk about their travels, few of us like to listen to them. Such talk resembles academic writing and reports of dreams: forms of communication driven more by the needs of the producer than the consumer.

(My apologies to my tens of social media devotees who have had to endure weeks of Instagram posts from me chronicling my journeys, though as far as I can tell, y’all have been basically posting similar pictures and stories from wherever it is you went too).

Then I heard Callard interviewed just the other day on the Vox show “Today, Explained,” in an episode available here called “Vacation… all I ever wanted?” which features a short (and more accessible) interview with Callard on her thoughts on travel. Her part of that 30-minute show is in the second half.

She does make one point in both her essay and interview which I do agree with thoroughly: travel does not in and of itself make one “virtuous,” much in the same way that an education does not in and of itself make one “smarter.” I mean, both travel and education can help each of us become better and more virtuous people, but I’ve seen enough “ugly American” style travelers (both domestically and abroad) and also enough half-assed students to know that the benefits of travel and education depend entirely on how each of us individually process and apply those experiences.

Further, travel (and education too) is undeniably a mark of privilege in that both require time and money. Obviously, different kinds of travel require different amounts of time and money, and the tourism I’m able to do now is at least more elaborate (if not better) than what I was able to do when I was in my twenties. There’s a reason why so many people wait to go on those big European vacations until they are closer to retirement.

But mainly, I think Callard is wrong in two crucial ways.

First, she makes no distinction between the different types of travel, which for me is very problematic. In both the essay and the interview, Callard uses her own experiences of a trip to Abu Dhabi and a visit to an animal hospital caring for falcons as evidence of the empty miserableness of travel. But as she makes clear in the interview, Callard travelled to Abu Dhabi not “for fun” but for a conference– that is, for work (she’s a Philosophy professor) and not exclusively for pleasure– and she went to the falcon hospital despite the fact that she describes herself as someone who “does not like animals.” So why sign up to go to a falcon hospital? This just doesn’t make sense.

The reasons for travel define the traveler’s role. When Annette and I visit our extended families, we are not tourists, even though these trips require many hours of car or air travel and usually hotel stays and a lot of eating out. I very much enjoy spending time with parents and sisters and in-laws and the like, and I’m looking forward to upcoming trips at Thanksgiving and Christmas this year, too. But these trips are not vacations for fun; these trips are obligations. 

My work travel is probably similar to Callard’s in that it doesn’t happen that often and I can usually get some more personal pleasures out of the experience– as I did recently when I went to California. But these carved-out personal times are also not the same as a vacation, and for people who have to travel a lot for work, I have to think that the distinction between different types of travel is even more stark.

In contrast, the vacation Annette and I just went on was entirely for our own pleasure and amusement. It’s different from going someplace you don’t really want to go for work (even if you do find free time to look at falcons), and it’s different from seeing your siblings and parents and the like. You’re making the trip not as a part of any responsibility or obligation; you’re making the trip because you thought it’d be fun.

Second, Callard is setting the bar way too high. Callard borrows the definition of tourist from an academic book which describes a tourist as someone “away from home for the purpose of experiencing a change.” That strikes me as more like how I hear a lot of people who prefer to describe themselves as “travelers.” For example, tourists wait in line and pay a lot to ride in a gondola for 15 minutes; travelers watch and scoff. Tourists take pictures of all the major sites as proof they were there; travelers take pictures that are less identifying and more suitable for framing.

Personally, I’m a tourist. While overseas, I don’t think I have a choice since no one in any other country is going to mistake me for anything other than a dopey white American dude. I can’t pretend that I’m just hanging out in Dubrovnik at a cafe table under a giant umbrella like the locals, especially since all the locals from surrounding areas are the ones actually working in this cafe (and working in the gift shops and the Game of Thrones tours and hauling in all of the cases of wine and soft drinks and hauling away all of the empty bottles and cans).

But again, Callard wants too much from tourism. As a tourist, I do want to see and experience different things, real, (re)constructed, or even sometimes completely contrived (in the form of things like roadside tourist trap attractions), but I don’t necessarily want to change. For me, a lot of the experiences of tourism (restaurants, tours, museums, architecture, vistas, sounds, etc.) are similar to the experiences of media. I certainly have been changed as a person in small and large ways by specific books or movies or songs, but that’s not something I demand or expect every time. “That was pretty good” or “That was fun” is usually enough; even “That was weird” or “Let’s not do that again” can usually be enough. And really, it’s the broader experience with tourism (or media) and not a specific trip (or book) that changes my perspectives and experiences in the world.

Ultimately, as Callard points out in the interview, travel is fun, and (she says) she doesn’t want to talk people out of doing it. I think she just wants people to be, I don’t know, a little less smug about it. That’s cool.

Computers and Writing 2023: Some Miscellaneous Thoughts

Last week, I attended and presented at the 2023 Computers and Writing Conference at the University of California-Davis. Here’s a link to my talk, “What Does ‘Teaching Online’ Even Mean Anymore?” Some thoughts as they occur to me/as I look at my notes:

  • The first academic conference I ever attended and presented at was Computers and Writing almost 30 years ago, in 1994. Old-timers may recall that this was the 10th C&W conference, it was held at the University of Missouri, and it was hosted by Eric Crump. I just did a search and came across this article/review written by the late Michael “Mick” Doherty about the event. All of which is to say I am old.
  • This was the first academic conference I attended in person since Covid; I think that was the case for a lot of attendees.
  • Also worth noting right off the top here: I have had a bad attitude about academic conferences for about 10 years now, and my attitude has only gotten worse. And look, I know, it’s not you, it’s me. My problem with these things is they are getting more and more expensive, most of the people I used to hang out with at conferences have mostly stopped going themselves for whatever reason, and for me, the overall “return on investment” now is pretty low. I mean, when I was a grad student and then a just-starting-out assistant professor, conferences were extremely important to me. They furthered my education in both subtle and obvious ways, they connected me to lots of other people in the field, and conferences gave me the chance to do scholarship that I could also list on my CV. I used to get a lot out of these events. Now? Well, after (almost) 30 years, things start to sound a little repetitive and the value of yet another conference presentation on my CV is almost zero, especially since I am at a point where I can envision retirement (albeit 10-15 years from now). Like I said, it’s not you, it’s me, but I also know there are plenty of people in my cohort who recognize and even perhaps share a similarly bad attitude.
  • So, why did I go? Well, a big part of it was because I hadn’t been to any conference in about four years– easily the longest stretch of not going in almost 30 years. Also, I had assumed I would be talking in more detail about the interviews I conducted about faculty teaching experiences during Covid, and also about the next phases of research I would be working on during a research release or a sabbatical in 2024. Well, that didn’t work out, as I wrote about here, which inevitably changed my talk to being a “big picture” summary of my findings and an explanation of why I was done.
  • This conference has never been that big, and this year, it was a more “intimate” affair. If a more normal or “robustly” attended C&W gets about 400-500 people to attend (and I honestly don’t know what the average attendance has been at this thing), then I’d guess there were about 200-250 folks there. I saw a lot of the “usual suspects” of course, and also met some new people too.
  • The organizers– Carl Whithaus, Kory Lawson Ching, and some other great people at UC-Davis– put a big emphasis on trying to make the hybrid delivery of panels work. So there were completely on-site panels, completely online (but on the schedule) panels held over Zoom, and hybrid panels which were a mix of participants on-site and online. There was also a small group of completely asynchronous panels as well. Now, this arrangement wasn’t perfect, both because of the inevitable technical glitches and also because there’s no getting around the fact that Zoom interactions are simply not equal to robust face to face interactions, especially for an event like a conference. This was a topic of discussion in the opening town hall meeting, actually.
  • That said, I think it all worked reasonably well. I went to two panels where there was one presenter participating via Zoom (John Gallagher in both presentations, actually) and that went off without (much of a) hitch, and I also attended at least part of a session where all the presenters were on Zoom– and a lot of the audience was on-site.
  • Oh, and speaking of the technology: They used a content management system specifically designed for conferences called Whova that worked pretty well. It’s really for business/professional kinds of conferences so there were some slight disconnects, and I was told by one of the organizers that they found out (after they had committed to using it!) that unlimited storage capacity would have been much more expensive. So they did what C&W folks do well: they improvised, and set up Google Drive folders for every session.
  • My presentation matched up well to my co-presenters, Rich Rice and Jenny Sheppard, in that we were all talking about different aspects of online teaching during Covid– and with no planning on our parts at all! Actually, all the presentations I saw– and I went to more than usual, both the keynotes, one and a half town halls, and four and a half panels– were really quite good.
  • Needless to say, there was a lot of AI and ChatGPT discussion at this thing, even though the overall theme was on hybrid practices. That’s okay– I am pretty sure that AI is just going to become a bigger issue in the larger field and academia as a whole in the next couple of years, and it might stay that way for the rest of my career. Most of what people talked about were essentially more detailed versions of stuff I already (sort of) knew about, and that was reassuring to me. There were a lot of folks who seemed mighty worried about AI, both in the sense of students using it to cheat and also the larger implications of it on society as a whole. Some of the big picture/ethical concerns may have been more amplified here because there were a lot of relatively local participants of course, and Silicon Valley and the Bay Area are more or less at “ground zero” for all things AI. I don’t disagree with the larger social and ethical implications of AI, but these are also things that seem completely out of all of our control in so many different ways.
  • For example, in the second town hall about AI (I arrived late to that one, unfortunately), someone in the audience had one of those impassioned “speech/questions” about how “we” needed to come up with a statement on the problems/dangers/ethical issues about AI. Well, I don’t think there’s a lot of consensus in the field about what we should do about AI at this point. But more importantly and as Wendi Sierra pointed out (she was on the panel, and she is also going to be hosting C&W at Texas Christian University in 2024), there is no “we” here. Computers and Writing is not an organization at all and our abilities to persuade are probably limited to our own institutions. Of course, I have always thought that this was one of the main problems with the Computers and Writing Conference and Community: there is no there there.
  • But hey, let me be clear– I thought this conference was great, one of the best versions of C&W I’ve been to, no question about it. It’s a great campus with some interesting quirks, and everything seemed to go off right on schedule and without any glitches at all.
  • Of course, the conference itself was the main reason I went– but it wasn’t the only reason.  I mean, if this had been in, say, Little Rock or Baton Rouge or some other place I would prefer not to visit again or ever, I probably would have sat this out. But I went to C&W when it was at UC-Davis back in 2009 and I had a great time, so going back there seemed like it’d be fun. And it was– though it was a different kind of fun, I suppose. I enjoyed catching up with a lot of folks I’ve known for years at this thing and I also enjoyed meeting some new people too, but it also got to be a little too, um, “much.” I felt a little like an overstimulated toddler after a while. A lot of it is Covid of course, but a lot of it is also what has made me sour on conferences: I don’t have as many good friends at these things anymore– that is, the kind of people I want to hang around with a lot– and I’m also just older. So I embraced opting out of the social events, skipping the banquet or any kind of meet-up with a group at a bar or bowling or whatever, and I played it as a solo vacation. That meant walking around Davis (a lively college town with a lot of similarities to Ann Arbor), eating at the bar at a couple of nice restaurants, and going back to my lovely hotel room and watching things that I know Annette had no interest in watching with me (she did the same back home and at the conference she went to the week before mine). On Sunday, I spent the day as a tourist: I drove through Napa, over to Sonoma Coast Park, and then back down through San Francisco to the airport. It’s not something I would have done on my own without the conference, but like I said, I wouldn’t have gone to the conference if I couldn’t have done something like this on my own for a day.

What Counts as Cheating? And What Does AI Smell Like?

Cheating is at the heart of the fear too many academics have about ChatGPT, and I’ve seen a lot of hand-wringing articles from MSM posted on Facebook and Twitter. One of the more provocative screeds on this I’ve seen lately was in the Chronicle of Higher Education, “ChatGPT is a Plagiarism Machine” by Joseph M. Keegin. In a nutshell, I think this guy is unhinged, but he’s also not alone.

Keegin claims he and his fellow graduate student instructors (he’s a PhD candidate in Philosophy at Tulane) are encountering loads of student work that “smelled strongly of AI generation,” and he and some of his peers have resorted to giving in-class handwritten tests and oral exams to stop the AI cheating. “But even then,” Keegin writes, “much of the work produced in class had a vague, airy, Wikipedia-lite quality that raised suspicions that students were memorizing and regurgitating the inaccurate answers generated by ChatGPT.”

(I cannot help but to recall one of the great lines from [the now problematically icky] Woody Allen in Annie Hall: “I was thrown out of college for cheating on a metaphysics exam; I looked into the soul of the boy sitting next to me.” But I digress.)

If Keegin is exaggerating in order to rattle readers and get some attention, then mission accomplished. But if he’s being sincere– that is, if he really believes his students are cheating everywhere on everything all the time and the way they’re cheating is by memorizing and then rewriting ChatGPT responses to Keegin’s in-class writing prompts– then these are the sort of delusions which should be discussed with a well-trained and experienced therapist. I’m not even kidding about that.

Now, I’m not saying that cheating is nothing to worry about at all, and if a student were to turn in whatever ChatGPT provided for a class assignment with no alterations, then a) yes, I think that’s cheating, but b) that’s the kind of cheating that’s easy to catch, and c) Google is a much more useful cheating tool for this kind of thing. Keegin is clearly wrong about ChatGPT being a “Plagiarism Machine” and I’ve written many many many different times about why I am certain of this. But what I am interested in here is what Keegin thinks does and doesn’t count as cheating.

The main argument he’s trying to make in this article is that administrators need to step in to stop this never-ending battle against ChatGPT plagiarism. Universities should “devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.”

Keegin doesn’t define what he means by cheating (though he does give some examples that don’t actually seem like cheating to me), but I think we can figure it out by reading what he means by a “meaningful education.” He writes (I’ve added the emphasis) “A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment.”

So, I think Keegin sees education as an activity where students labor alone at mastering the material delivered by the instructor. Knowledge is not something shared or communal, and it certainly isn’t created through interactions with others. Rather, students receive knowledge, do the work they are asked to do by the instructor, they do that work alone, and then students reproduce that knowledge investment provided by the instructor– with interest. So any work a student might do that involves anyone or anything else– other students, a tutor, a friend, a google search, and yes ChatGPT– is an opportunity for cheating.

More or less, this is what Paulo Freire meant by the ineffective and unjust “banking model of education” which he wrote about over 50 years ago in Pedagogy of the Oppressed. Freire’s work remains very important in many fields specifically interested in pedagogy (including writing studies), and Pedagogy of the Oppressed is one of the most cited books in the social sciences. And yet, I think a lot of people in higher education– especially in STEM fields, business-oriented and other technical majors, and also in disciplines in the humanities that have not been particularly invested in pedagogy (philosophy, for example)– are okay with this system. These folks think education really is a lot like banking and “investing,” and they don’t see any problem with that metaphor. And if that’s your view of education, then getting help from anyone or anything that is not from the teacher is metaphorically like robbing a bank.

But I think it’s odd that Keegin is also upset with “credentialing” in higher education. That’s a common enough complaint, I suppose, especially when we talk about the problems with grading. But if we were to do away with degrees and grades as an indication of successful learning (or at least completion) and if we instead decided students should learn solely for the intrinsic value of learning, then why would it even matter if students cheated or not? That’d be completely their problem. (And btw, if universities did not offer credentials that have financial, social, and cultural value in the larger society, then universities would cease to exist– but that’s a different post).

Perhaps Keegin might say “I don’t have a problem with students seeking help from other people in the writing center or whatever. I have a problem with students seeking help from an AI.” I think that’s probably true with a lot of faculty. Even when professors have qualms about students getting a little too much help from a tutor, they still generally do see the value and usually encourage students to take advantage of support services, especially for students at the gen-ed levels.

But again, why is that different? If a student asks another human for help brainstorming a topic for an assignment, suggesting some ideas for research, creating an outline, proposing some phrases to use, and/or helping out with proofreading, citation, and formatting, why is that not cheating when the help comes from a human but cheating when it comes from ChatGPT? And suppose a student instead turns to the internet and consults things like CliffsNotes, Wikipedia, Course Hero, and other summaries and study guides; is that cheating?

I could go on, but you get the idea. Again, I’m not saying that cheating in general and with ChatGPT in particular is nothing at all to worry about. And also to be fair to Keegin, he even admits “Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right.” But the more of these paranoid and shrill commentaries I read about “THE END” of writing assignments and how we have got to come up with harsh punishments for students so they stop using AI, the more I think these folks are just scared that they’re not going to be able to give students the same bullshitty non-teaching writing assignments that they’ve been doing for years.

Okay, Now Some Students Should Fail (or, resuming “normal” expectations post-pandemic)

In April 2020, I wrote a post with the headline “No One Should Fail a Class Because of a Fucking Pandemic.” This, of course, was in the completely bonkers early days of the pandemic when everyone everywhere suddenly sheltered in place, when classes suddenly went online, and when the disease was disrupting all of our lives– not to mention the fact that millions of people were getting very sick, and a lot of them were dying. Covid hit many of my students especially hard, which in hindsight is not that surprising since a lot of the students at EMU (and a lot of the students I was teaching back then) come from working poor backgrounds, or they are themselves adult (aka “non-traditional”) students with jobs, sig-Os, houses, kids, etc.

As I wrote back then, before Covid and when it came to things like attendance and deadlines, I was kind of a hard-ass. I took attendance every day for f2f classes and I also had an attendance policy of sorts for online classes. There was no such thing as an excused absence; I allowed students to miss up to the equivalent of two weeks of classes with no questions asked, but there were no exceptions for things like funerals or illness. Unless a student worked out something with me before an assignment was due, late work meant an automatic grade deduction. I’ve been doing it this way since I started as a graduate assistant because it was the advice I was given by the first WPA/professor who supervised and taught me (and my fellow GAs) how to teach. I continued to run a tight ship like this for two reasons: first, I need students to do their job and turn stuff in on time so I can do my job of teaching by responding to their writing. Second, my experience has been that if instructors don’t set clear and unwavering rules about attendance and deadlines, then a certain number of students will chronically skip class and miss deadlines. That just sets these students up to fail, and it also creates more work for me.

Pretty much all of this went out the window in Winter 2020 when Covid was raging. EMU allowed students to convert classes they were enrolled in from a normal grading scheme to a “pass/fail” grade, which meant that a lot of my students who would have otherwise failed (or passed with bad grades) ended up passing because of this, and also because I gave people HUGE breaks. My “lighten up” approach continued through the 2020-21 and 2021-22 school years, though because all of my teaching was online and asynchronous, the definition of “attend” was a bit fuzzier. I kept doing this because Covid continued to be a problem– not as big of a problem as it was in April 2020, but lots of people were still getting infected and people were still dying, especially people who were stupid enough to not get the vaccine.

By the end of the 2021-22 school year, things were returning to normal. Oh sure, there was still plenty of nervousness about the virus around campus and such, but the end of the pandemic was near. The most serious dangers of the disease had passed because of a weaker version of the virus, vaccinations, and herd immunity. So I was ready for a return to “normal” for the 2022-23 school year.

But my students weren’t quite ready– or maybe a better way of putting it is Covid’s side-effects continued.

In fall 2022, I taught a f2f section of first year writing, the first f2f section for me since before the pandemic. Most of the students had been in all (or mostly) online classes since March 2020, meaning that this was the first f2f semester back for most of them, too. Things got off to a rough start with many students missing simple deadlines, blowing off class, and/or otherwise seeming checked out in the first couple of weeks. I felt a bit the same way– not so much blowing stuff off, but after not teaching in real time in front of real people for a couple of years, I was rusty. It felt a bit like getting back on a bicycle after not riding at all for a year or two: I could still do it, but things started out rocky.

So I tried to be understanding and cut students some slack, but I also wanted to get them back on track. It still wasn’t going great. Students were still not quite “present.” I remember at one point, maybe a month into the semester, a student asked quite earnestly “Why are you taking attendance?” It took a bit for me to register the question, but of course! If you’ve been in nothing but online classes for the last two years, you wouldn’t have had a teacher who took attendance because they’d just see the names on Zoom!

There came a point just before the middle of the term when all kinds of students were crashing and burning, and I put aside my plans for the day and just asked “what’s going on?” A lot of students suddenly became very interested in looking at their shoes. “You’re not giving us enough time in class to do the assignments.” That’s what homework is for, I said. “This is just too much work!” No, I said, it’s college. I’ve been doing this for a long time, and it’s not too much, I assure you.

Then I said “Let me ask you this– and no one really needs to answer this question if you don’t want to. How many of you have spent most of the last two years getting up, logging into your Zoom classes, turning off the camera, and then going on to do whatever else you wanted?” Much nodding and some guilty-looking smiles. “Oh, I usually just went back to bed,” one student said too cheerfully.

Now, look: Covid was hard on everyone for all kinds of different reasons. I get it. A lot of sickness and death, a lot of trauma, a lot of remaining PTSD and depression. Everyone struggled. But mostly blowing off school for two years? On the one hand, that’s on the students themselves because they had to know that it would turn out badly. On the other hand, how does a high school or college teacher allow that to happen? How does a teacher– even a totally burnt-out and overworked one– just not notice that a huge percentage of their students are not there at all?

The other major Covid side-effect I saw last school year was a steep uptick in device distraction. Prior to Covid, my rule for cell phones was to leave them silenced and not let them be a distraction, and laptop use was okay for class activities like taking notes, peer review, or research. Students still peeked at text messages or Facebook or whatever, but because they had been socialized in previous high school and college f2f classes, they also knew that not paying attention to your peers or the teacher in class because you are just staring at your phone is quite rude. Not to mention the fact that you can’t learn anything if you’re not paying attention at all.

But during Covid, while these students were sort of sitting through (or sleeping through) Zoom classes with their cameras turned off, they also lost all sense of the norms of how to behave with your devices in a setting like a classroom or a workplace. After all, if you can “attend” a class by yourself in the privacy of your own home without ever being seen by other students or the instructor and also without ever having to say anything, what’s the problem with sitting in class and dorking around with your phone?

I noticed this a lot during the winter 2023 semester, maybe because of what I assigned. For the first time in over 30 years of teaching first year writing, I assigned an actual “book” for the class (not a textbook, not a coursepack, but a widely available and best-selling trade book) by Johann Hari called Stolen Focus: Why You Can’t Pay Attention– and How to Think Deeply Again. This book is about “attention” in many different ways, and it discusses many different causes for why (according to Hari) we can’t pay attention: pollution, ADHD misdiagnoses, helicopter parenting, stress and exhaustion, etc. But he spends most of his time discussing what I think is the most obvious drain on our attention: cell phones and social media. So there I was, trying to lead a class discussion about a chapter from this book describing in persuasive detail why and how cell phone addiction is ruining all of us, while most of the students were staring into their cell phones.

One day in that class (and only once!), I tried an activity I would have never done prior to Covid. After I arrived and set up my things, I asked everyone to put all their devices– phones, tablets, laptops– on a couple of tables at the front of the classroom. Their devices would remain in sight but out of reach. There was a moment where the sense of panic was heavy in the air and more than a few students gave me a “you cannot be serious” look. But I was, and they played along, and we proceeded to have what I think was one of the best discussions in the class so far.

And then everyone went back to their devices for the rest of the semester.

So things this coming fall are going to be different. For both the f2f and online classes I’m scheduled to teach, I’ll probably begin with a little preamble along the lines of this post: this is where we were, let us acknowledge the difficulty of the Covid years, and, for at least while we are together in school (both f2f and online), let us now put those times behind us and return to some sense of normalcy.

In the winter term and for my f2f classes, I tried a new approach to attendance that I will be doing again next year. The policy was the same as I had before– students who miss more than two weeks of class risk failing– but I phrased it a bit differently. I told students they shouldn’t miss any class, but because unexpected things come up, they had four excused absences. I encouraged them to think of this as insurance in case something goes wrong and not as justification for blowing off class. Plus I also gave students who didn’t miss any classes a small bonus for “perfect attendance.” I suppose it was a bit like offering “extra credit” in that the only students who ever do these assignments are the same students who don’t need extra credit, but a few students earned about a half-letter boost to their final grade. And yes, I also had a few students who failed because they missed too much class.

As for devices: The f2f class I’m teaching in the fall is first year writing, and I am once again going to have students read (and do research about) Hari’s Stolen Focus. I am thinking about starting the term by collecting everyone’s devices, at least for the first few meetings and discussions of the book. Considering that Hari begins by recalling his own experiences of “unplugging” from his cell phone and social media for a few months, going for 70 or so minutes without being able to touch the phone might help some students understand Hari’s experiences a bit better.

I’m not doing this– returning to my hard-ass ways– just because I want things to be like they were in the before-times or out of some sense of addressing a problem with “the kids” today. I feel like lots of grown-ups (including myself) need to rethink their relationships with the devices and media platforms that fuel surveillance capitalism. At the same time, I think learning in college– especially in first year writing, but this is true for my juniors and seniors as well– should also include lessons in “adulting,” in preparing for the world beyond the classroom. And in my experience, the first two things anyone has got to do to succeed at anything is to show up and to pay attention.