Bye-Bye Blog (sort of, at least for now)

This is the 2,677th post I’ve written in/on this blog, and it’s the last one. Well, probably, or at least for as long as this Substack thing works out.

The first post published on this site was “A New Blog is Blogging,” back in 2003. The post is about moving my already existing blog from a piece of software I no longer remember called flipsource (which I ran on a desktop computer I was using as a server in my school office) to Blogger. I switched briefly to Movable Type (which I guess still exists), but I’ve been using WordPress since about 2005. I’m going to keep using that here.

I actually sorta/kinda started blogging here a year before that, in September 2002. As I wrote back then, I started my not-a-blog (just a static website, actually) as a way of updating/promoting an article I had written for the brand-new College Composition and Communication Online called “Where Do I List This on My CV? Considering the Values of Self-Published Web Sites,” and also as a place to write some things for a talk I was going to give in March 2003 at the CCCCs called “Why Weblogs Should (and Shouldn’t) Count as Scholarship.” That was the first conference presentation I gave about blogging.

Also, a tangent: “Where Do I List This on My CV?” was “disappeared” by NCTE when they gave up on the new online version of the journal after one issue and deleted my article from their servers. Here’s a blog post about that experience. I can’t remember if the Kairos editors reached out to me or if it was me to them, but they published a follow-up “Version 2.0” of the piece in 2007. NCTE tried again with an all-online version of the CCCs a few years later; that too was a disaster and ended after one issue, though it is still online. No one at NCTE or the CCCs editorial office has ever done anything to restore my disappeared article. Funny how that goes, huh?

Anyway, I’m not quitting blogging, but I am moving that part of things over to Substack. I’ll keep using this space as my homepage, perhaps as a “depository” for other web things, like my textbook (which I am going to update some day, maybe). I explain why in my first post written on the Substack platform– the other posts are ones I imported from this site.

But the “at least for now” thing is real. Looking back at the origin story of my blogging reminds me that back in the day, I switched platforms and hosting services a couple of times before settling on WordPress. So who knows what will happen in the next couple years.

Anyway, thanks for reading this far. This site isn’t going away, and come see me on Substack.

Why Teaching Citation Practices (yes, I’m talking MLA/APA style) is Even More Important with AI

A couple weeks ago, I wrote about why I use Google docs to teach writing at all levels. I’ve been using it for years–long before AI was a thing–in part because being able to see the history of a student’s Google doc is a teachable moment on the importance of the writing and revision process. This also has the added bonus of making it obvious if a student is skipping that work (by using AI, by copying/pasting from the internet, by stealing a paper from someone else, etc.) because the document history goes from nothing to a complete document in one step. I’m not saying that automatically means the student cheated, but it does prompt me to have a chat with that student.

In a similar vein and while I’m thinking about putting together my classes for the fall term, I thought I’d write about why I think teaching citation practices is increasingly important in research writing courses, particularly first year composition.

TL;DR version: None of this is new or innovative; rather, this is standard “teaching writing as a process” pedagogy and I’ve been teaching research writing like this for decades. But I do think it is even more important to teach citation skills now to help my students distinguish between the different types of sources, almost all of which are digital rather than on paper. Plus this is an assignment where AI might help, but I don’t think it’d help much.

Continue reading “Why Teaching Citation Practices (yes, I’m talking MLA/APA style) is Even More Important with AI”

Why I Use Google Docs to Teach Writing, Especially in the Age of AI

I follow a couple of different Facebook groups about AI, each of which has become a firehose of posts lately, a mix of cool new things and brand-new freakouts. A while back, someone in one of these groups posted about an app to track the writing process in a student’s document as a way of proving that the text was not written by AI. My response to this was “why not just use Google docs?”

I wish I could be more specific than this, but I can’t find the original post or my comment on it; maybe it was deleted. Anyway, this person asked what I meant, and I explained it briefly, but then I said I was thinking about writing a blog post about it. Here is that post.

For those interested in the tl;dr version: I think the best way to discourage students from handing in work they didn’t create (be that from a papermill, something copied and pasted from websites, or AI) is to teach writing rather than merely assigning writing. That’s not “my” idea; that’s been the mantra in writing studies for at least 50 years. Also not a new idea and one you already know if you use and/or teach with Google docs: it is a great tool for teaching writing because it helps with peer review and collaborative writing, and the version history feature helps me see a student’s writing process, from the beginning of the draft through revisions. And if a student’s draft goes from nothing to complete in one revision, well, then that student and I have a chat.

Continue reading “Why I Use Google Docs to Teach Writing, Especially in the Age of AI”

No, Student Writing Is Not Dead (or how the AI faculty freakout is back)

Now that the 2023-24 school year is long over and my wife and I are (mostly) done moving into our new house, it’s time to start thinking again about AI for teaching in the fall and for some scholarly things beyond. I’ve been mostly ignoring these things for the last couple of months, but even in that short time, it feels like things have changed. AI tech is getting quickly integrated into everything you can imagine, and it feels to me like the AI faculty freakout factor is on the rise once again.

This is just a gut feeling– like I said, I’ve been out of the loop, and it’s not like I’ve done any research on this. But the current moment reminds me a bit of late 2022/early 2023, when ChatGPT first appeared. By the time I gave a talk about AI at Hope College in late April 2023, and again a talk/workshop about AI (over Zoom) at Washtenaw Community College in October 2023, teachers had settled down a bit. Yes, faculty were still worried about cheating and the other implications, but I think most of the folks who attended these events had already learned more about AI and had started to figure out both how to use it as a tool to help their teaching and how to change some of their assignments because of it.

But now the freakout is back. Perhaps it’s because more faculty are starting to realize that “this whole AI thing” is something they’re going to have to deal with after all. And as far as I can tell, a lot of the freaked-out faculty are in the humanities in general and in English in particular. I suppose this is because we teach a lot of general education classes and classes that involve a lot of writing and reading. But I also think the freakout is high in fields like English because a lot of my department and discipline colleagues describe themselves as being “not really into technology.”

The primary freakout then and now– at least among faculty in the humanities (I assume STEM faculty have different freakout issues)– is that AI makes it impossible to teach writing in college and in high school because it is too easy for students to have ChatGPT (or whatever other AI) do the work for them. I wrote a post in response to two of these articles back in December 2022, but there were dozens of freakout articles like them. These articles almost always assume that AI has uniquely enabled students to cheat on assignments (as if paper mills and copying and pasting from “the internet” hadn’t existed for decades), and that given the chance, students will always cheat. So the only possible solution is to fight AI with things like detection software or returning to handwritten exams.

It’s déjà vu all over again.

Consider, for example, Lisa Lieberman’s June 2024 Chronicle of Higher Education article “AI and the Death of Student Writing.” Lieberman, who teaches community college English and composition courses “in California’s Central Valley,” has seen an alarming uptick in students using AI to write their papers. She gives an example of a student’s essay about The Shining that included the sentence “A complex depiction of Jack’s development from a struggling family guy to a vessel of lunacy and malevolence is made possible by Stanley Kubrick’s brilliant direction.” Lieberman writes “I called the student in and asked him to write a sentence with the word ‘depiction.’ He admitted he didn’t know what ‘depiction’ meant, much less how to spell it, much less how to use it in a sentence. He confessed he hadn’t written a single word of the essay.” (For what it’s worth, I would have asked this student about “malevolence”).

Then she moves on to a student writing her essay with the now AI-fueled version of Grammarly. Lieberman “discovered it’s a multilayered computer program that does everything from simple spelling and grammatical corrections to rewriting entire sentences, adjusting tone and fluency.” She estimated that at least a third of her students were consistently using AI: “Once they believed they could turn in AI assignments undetected, they got bolder … and used AI for every single assignment.”

It’s all just so wrong, Lieberman laments, in part because her students are cheating themselves by using AI. Here’s a long quote from the end of the article:

I remember my days at Berkeley, where, as an English major, I’d take my copy of Wallace Stevens’s The Palm at the End of the Mind, or Chaucer’s “The Wife of Bath’s Tale,” and pick a nice, sunny spot on campus on a grassy knoll underneath a tree, lay out my blanket, and spend the afternoon reading and scribbling notes in my books. It was just me and my books and my thoughts. There was nothing better.

As I lay there reading the writer’s words, they came to life — as if the author were whispering in my ear. And when I scribbled my notes, and wrote my essays, I was talking back to the author. It was a special and deep relationship — between reader and writer. It felt like magic.

This is the kind of magic so many college students will never feel. They’ll never feel the sun on their faces as they lie in the grass, reading words from writers hundreds of years ago. They won’t know the excitement and joy of truly interacting with texts one-on-one and coming up with new ideas all by themselves, without the aid of a computer. They will have no idea what they’re missing.

I understand the anxiety that Lieberman is expressing, and I completely agree that AI technology is forcing us to change how we teach college classes– and, in particular, classes where students are expected to read and to write about that reading.

However:

  • Students have been cheating in school for as long as there has been school. AI makes it easier (and more fun!) to cheat, but none of this is new. So any educator who thinks that students have only now started to cheat on the things they assign because of AI is kidding themselves.
  • In my experience, the vast majority of students do not want to cheat this much. Oh sure, they might cheat by improperly borrowing a quote from a website, or by looking over someone’s shoulder to get an answer on a multiple choice quiz. But in my view, these are misdemeanor offenses at best. Also, when students do not cite sources properly (and this is as true for the MA students I work with as it is for the first year writing students), it’s usually because they don’t know how. In other words, a lot of plagiarism is a teachable moment.
  • Also in my experience, students who do blatantly cheat by downloading from a papermill or prompting an AI to do the whole assignment are a) already failing and desperate, and b) not exactly “criminal masterminds.” Every freakout narrative I’ve read– including Lieberman’s– includes a “scene” where the instructor confronts the student with the obvious AI cheating. So to me, if it’s this easy to catch students who cheat using AI, what’s the problem? Just punish these students and be done with it.
  • The fundamentals of teaching writing as a process– the mantra of writing studies for the last 50+ years– are still the same and the best way to discourage students from cheating with AI or anything else. Don’t merely assign writing– teach it. Make students show their work through drafts. Use a series of shorter assignments that build to a larger and more complex writing project. In a research-oriented writing class (like first year composition, for example), require students to create an annotated bibliography of all of their sources. Have peer review as a required part of the process. Etc., etc., etc. None of this is foolproof and for all I know, Lieberman is already doing this. But besides actually helping students to become better writers, teaching (rather than just assigning) writing like this makes cheating as much work as just doing the assignments.
  • I think the best way to dissuade students from using AI to cheat is to explain to them why this is a bad idea. Last year, I had a discussion at the beginning of all of my classes on the basics of AI, why it might be useful for some things (see my next bullet), and why it is not useful for cheating– especially in classes that involve research and where writing is taught as a process (see my previous bullet). I think that by making it clear from the beginning that yes, I too know about AI, and here’s why cheating with it isn’t a good idea, fewer of them were tempted to try it in my classes.
  • I don’t think there’s anything wrong with Grammarly. At EMU, I will often get letters of accommodation from the disability office about students enrolled in my classes that tell me how I am supposed to “accommodate” the student. That usually means more time to take exams or more flexibility for deadlines, but often, these letters say I should allow the student to use Grammarly.

My philosophy on this has always been that it is a good idea for students to seek help with their writing assignments from outside of the class– help that assists, not help that does the work for the student. I always encourage students– especially the ones who are struggling– to get help from a writing center consultant/tutor, a trusted friend or parent, and so forth. I think Grammarly– when used properly– falls into that category. I don’t think asking Grammarly to write the whole thing counts as “proper use.” I want students to proofread what they wrote to make sure that the mechanics of their writing are as clear and “correct” as possible, and if Grammarly or an AI or another electronic tool can help with that, I’m all for it.

I think the objection Lieberman has to Grammarly is that it makes writing mechanically correct prose too easy, and that the only way for students to learn this stuff is to make them do it “by hand.” As someone who relies heavily on a calculator for anything beyond basic arithmetic, and also as someone who relies on Google Docs’ spell-checking and grammar-checking features, I do not understand this mindset. Since she’s teaching in a community college setting, I suppose Lieberman might be working more with “basic writing” students, and I could see more of an argument there for getting students to master the basics before relying on Grammarly. But for me, even in classes like first year writing, I want to focus mostly on the arguments my students are making and how they are using evidence to support their points. So if a student gets some help with the mechanics from some combination of a writing center consultant and an application like Grammarly, then I can focus more exclusively on the interesting parts.

Where Lieberman and I might agree, though, is that if a student doesn’t have basic competency with writing mechanics, then Grammarly is not going to solve the problem. It’s a lot like the mistakes students still make with there/their/they’re even when they take the time to spell check everything. And again, that’s why it is so easy to detect AI cheating: the vast majority of students I have had who have tried to cheat with AI have done it poorly.

  • Finally, about students missing “the magic” of reading and writing, especially while doing something clichéd and idealistic like lying on a blanket on the campus lawn under an impressive oak. I get it, and that’s part of why I went into this line of work myself. But this is the classic mistake so many teachers make: just because the teacher believes reading and writing are magical doesn’t mean the students will. In fact, in required gen ed classes like first year writing or intro to literature, many (sometimes most) of the students really do not want to take those courses at all. I can assign students to read a book or essay that I think is great, or I can encourage students to keep writing on their own and not just for school, and sometimes I do have students who discover “the magic,” so to speak. But honestly, if the majority of my first year writing students come away at the end of the semester thinking that the experience did not “totally suck,” I’m happy.

So no, this is not the end of student writing.

Farewell, Normal Park

If you had asked me last May if Annette and I were moving this year, I would have probably shrugged and said “I don’t think so, but we’ll move eventually.” I certainly didn’t think “eventually” would be now. And I also didn’t think we’d be moving out of a house built in the 1950s in a long-established, funky, and all-around lovely neighborhood to a newly built house in a brand-new suburban subdivision– a blank slate of a not-quite-yet neighborhood. I’m as surprised as anyone about this.

We have lived in this house in Normal Park for 25 years. When we bought it in 1999, it was a two bedroom/one bathroom house built in 1953 with a full attic which had never been finished. We thought we’d stay here until it was time for Will to go to grade school, in part because we fantasized about the perfect place in Ann Arbor, maybe in Burns Park or within walking distance of downtown. Well, we couldn’t afford anything like that, and after living here for five years, we liked the neighborhood. So we remodeled things. We redid the attic, adding a main bedroom, a full bathroom, and a loft space I use as an office area. We eventually also remodeled the kitchen and the bathrooms, along with fixing up a ton of other things. But we still thought about moving a few times, once when Will made the transition to middle school, and again six or seven years ago when Will was almost done with college. We even went to look at a house that was in Ann Arbor (albeit not close to downtown) and it was more or less in our budget. But as we talked about it, both of us felt like it just wasn’t worth the hassle of moving out of a house that we still loved. Plus we had paid off the mortgage, so why give that up?

 

[Instagram embed: a post shared by Steve Krause (@stevendkrause)]

In other words, we have been thinking about moving since we moved here, actually, but things got real this summer for basically two reasons. First, it’s a hot “seller’s market” around here, and that is especially true for this neighborhood. But second and more important, we’re getting kind of old– I turned 58 this past March and Annette will turn 60 this coming November. Our parents came to visit us at different times last summer, and while they’re all fairly mobile for folks in their late 70s and early 80s, they had some challenges navigating just the stairs in and out of the house– never mind trying to get to the second floor or the basement. That’s not a problem for us now, but it doesn’t take a lot of imagination to see a future when it will be, especially when doing things like hauling laundry up two flights from the basement. Besides, if we don’t take the plunge to do this now, our next move will be to “the home.”

So we started looking and thinking about moving more seriously, and, long story a bit shorter, we landed on new construction in a subdivision of similar homes sort of in suburban no man’s land. It is still an Ypsilanti address but in Pittsfield Township, near where Michigan Avenue and US 23 meet. The only usual place we go around town that will be farther away than it is right now is EMU, which means we won’t be able to walk to work anymore. This sub is a far cry from those fantasies of living in a more tony Ann Arbor neighborhood, but that’s just not realistic or as important as it once was for us. Besides the fact that we simply cannot afford anything bigger than a two bedroom condo within walking distance of downtown, we’d still have to drive around a lot no matter where we lived. And after living here for 25 years, we now want to live more in-between Ann Arbor and Ypsilanti, because there’s a lot of cool stuff in Ypsi too.

 

[Instagram embed: a post shared by Steve Krause (@stevendkrause)]

The new house is gonna be great. It lacks a lot of the charm and character of this house, sure, but one of the nice things about a new house is everything is, well, new. There’s an attached two car garage and a big “open concept” kitchen/dining area/living room, the laundry and the main bedroom are on the ground floor, we’ll each have a home office space, and there’s a really nice deck off the back door. I’m really looking forward to it.

But I am going to miss this neighborhood.

I never got involved in any of the neighborhood association things, and I recognize my neighbors but don’t really know them. We don’t really “hang out” with any of our neighbors. But there’s a nice mix of people here: older folks (like us now!) who have been here for decades and people with little kids just starting out, far from all white, lots of teachers, nurses, librarians, and EMU and UM professors and staff.

We live– or will soon have lived– on Wallace Boulevard, which is one of the main streets through the neighborhood. Our new house is on a cul de sac that backs up to some woods, and that will be nice, but in a very different way. Here there’s a steady stream of people of all sorts entertaining me as I look out the kitchen window while doing dishes or whatever– lots of people just walking or pushing strollers or riding bikes, but there’s always something new. Just the other day, I saw a group of four or five people each carrying a part of what looked like a full dining room set. A while back, I saw a grown man driving a fully motorized and adult-sized “Big Wheel” style bike/trike down the street– I presume some kind of DIY project.

I’ll miss what Halloween is like around here. People take Halloween decorating seriously in this neighborhood, and we got hundreds of trick or treaters every year, more than that when the weather was nice. I typically bought three or four giant bags of candy from Costco, and most years we went through all of it. It was a walking party for a lot of folks, young parents drinking beers while watching their kids, and the neighborhood also welcomed lots and lots of kids and parents from all over town, especially folks from apartments or neighborhoods without a lot of other trick or treating opportunities.

And then there’s the big neighborhood yard sale, which this year is going to be June 1. It’s dozens and dozens of yard sales, some big and some small, some of which happen every year. It’s another good chance to get out and walk around the neighborhood, find some bargains, sell some old things, etc. By the way, one of the reasons we’re staying here until the second weekend in June is so we can participate in this year’s sale– we’ve got a lot of stuff to sell!

We’re not going to have any of that in this new subdivision, at least not for a while. Then again, who knows what will happen in the time we’re there and beyond. The other day on the Normal Park Facebook Group, someone posted this image of when this neighborhood was a blank slate, farmland being turned into a subdivision:

I think this house is about where it says 22 on this map. And tickets to the World’s Fair, too!

So farewell, though not really goodbye. I’ll still come by once in a while to see how things are going.

Thinking about Bill HD: Friendship Memories, Memento Mori

My friend Bill Hart-Davidson died suddenly on April 23, 2024, of a heart attack while on a run after work. He was 53. Here’s a link to the obituary.

Annette and I (along with Steve Benninghoff– unfortunately, his wife was out of town) went up to The Compound for a dinner party the Saturday before. We’ve gotten together like this many times over the last 20 years, and often there is some kind of activity or game. This time, Bill and Leslie asked us all to put together PowerPoint presentations that were funny, interesting, and/or entertaining. Mine was about our new house, and it was pretty lame because I was too busy trying to finish the grading for the winter semester. Annette, similarly busy but with her book, did a presentation about why The Big Lebowski is a perfect movie (totally agree). Benninghoff talked about some genealogy research he’s been doing about his family and some lost history going back to the Civil War, a presentation that ended with a sampling of scotch. Leslie and Bill were much more prepared. Leslie had a great talk about Betty Crocker (I think she’s doing some research for another cookbook sort of project), and Bill’s bit, complete with his bass for demonstration purposes, was about the similarities and differences between beat and rhythm. He won the prize for “most likely to do a TED talk.”

A good time was had by one and all, we talked about how Annette and I would have to host the next one of these get-togethers this summer once we moved into our new place, and we all went home. Then we got a call from Benninghoff Monday night; he had gotten a call from Leslie that Bill had collapsed while on a run, and he was pronounced dead the next day.

It’s a lot to process, and so this is definitely very rambling and, I suppose, more personal than most of what I post here, and ultimately less about Bill than it is about memory and death and friendship. FWIW.

Continue reading “Thinking about Bill HD: Friendship Memories, Memento Mori”

TALIA? This is Not the AI Grading App I Was Searching For

(My friend Bill Hart-Davidson unexpectedly died last week. At some point, I’ll write more about Bill here, probably. In the meantime, I thought I’d finish this post I started a while ago about Instructify’s webinar for their AI grading app. Bill and I had been texting/talking more about AI lately, and I wish I had had a chance to text/talk with him about this. Or anything else.)

In March 2023, I wrote a blog post titled “What Would an AI Grading App Look Like?” I was inspired by what I still think is one of the best episodes of South Park I have seen in years, “Deep Learning.” Follow this link for a detailed summary or look at my post from last year, but in a nutshell, the kids start using ChatGPT to write a paper assignment and Mr. Garrison figures out how to use ChatGPT to grade those papers. Hijinks ensue.

Well, about a month ago, at a time when I was up to my eyeballs in grading, I saw a webinar presentation from Instructify about their AI product called TALIA. The title of the webinar was “How To Save Dozens of Hours Grading Essays Using AI.” I missed the live event, but I watched the recording– and you can too, if you want– or at least you could when I started writing this. Much more about it after the break, but the tl;dr version is that this AI grading tool is not the one I am looking for (not surprisingly), and I think it would be a good idea for these tech startups to include people with actual experience teaching writing on their development teams.

Continue reading “TALIA? This is Not the AI Grading App I Was Searching For”

Bomb Threat

It is “that time” of the semester, which is made all the worse by it also being “that time” of the school year, mid-April. Everyone on campus– students of course, but also faculty and staff and administrators and everyone else– is peak stressed out because this is the time when everything is due. We’re all late. My students are late finishing stuff they were supposed to finish a couple of weeks ago, and for me that means I’m late on reading/commenting on/grading all of those things they haven’t quite finished. We are mutually late. And just to make the vibe around it all that much more spooky, there’s the remaining mojo of Monday’s eclipse.

So sure, let’s throw a stupid bomb threat into the mix.

This entry from “EMU Today” (EMU’s public relations site) provides a timeline, and this story from The Detroit News is a good summary of the event. I was in my office during all this, slowly climbing Grading Mountain (the summit is visible, the end is near, and yet the summit is further away than I had hoped) and responding to earlier student emails about missing class because of “stress” and such. Then I started getting messages from EMU’s emergency alert system: “Emergency reported in Wise, Buell, Putnam (these are dorms). Please evacuate the building immediately.” This was followed a few minutes later by a similar message about clearing several other dorms, and an update that said it was a bomb threat.

EMU set up an emergency alert system a few years ago as part of a response to the rise in school and college campus shootings and violence happening around the country. They rolled this out at about the same time the campus security folks started holding workshops about how to properly shelter in place. I believe yesterday’s bomb threat was the first time this system was used for a threat like this. Previously, the only alerts I think I had received from this system (besides regular system tests) had to do with the weather– messages about campus being closed because of a snowstorm. It is also worth mentioning that this time, the alert system didn’t just send everyone a text. It also sent emails and robocalls, which meant all our devices lit up in a few different ways.

Our son Will (who lives in Connecticut) texted me and Annette because, for whatever reason, he’s signed up to get these EMU emergency messages, and he was concerned. Annette, who wasn’t on campus, wasn’t sure what was going on. When EMU announced a few minutes after the evacuation alerts that it was a bomb threat, I knew it had to be a hoax. I knew (well, I assumed) this in part because I have a good view of several of these dorms from my office, and it wasn’t like I was seeing cops and firefighters rushing into those buildings. Mostly what I saw were students hanging around outside the dorms looking at their phones.

I also thought immediately it was a hoax because 99.9999% of the time, bomb threats are hoaxes. One of the few colleagues of mine who was around the offices at the same time as me poked his head in my door and asked if I was going to still have class. “Well, yeah,” I said, “no one has said classes are cancelled.”

Rather than spending another hour or so prepping for my two afternoon classes and at least making a tiny bit more of a dent in all the grading as I had planned, I instead spent the time responding to student emails and then sending out group emails to my afternoon classes that yes indeed, we were meeting because EMU had not cancelled classes. Some students were genuinely confused, wondering if we were still having class because the alerts did not make that clear. Some emailed me about the logistics of it all, basically “I don’t know if I can make it because I need to get back into my dorm room to get my stuff first,” or whatever. Some were freaked out about the whole thing– they didn’t feel safe on campus, there was no way they were coming to class, etc. “Well, EMU has not cancelled classes, so we will be meeting,” I wrote back. And a couple of students seemed to sense this might be the excuse to skip they were hoping for.

About an hour after it all started and before my 2 pm class, we got another alert (or rather, three more alerts simultaneously) that the three dorms that had been named in the initial bomb threat had been inspected and declared clear. The other dorms had been evacuated as a precaution. At about 2:15, I got an email from the dean (forwarded to faculty by the department head) that no, classes were not cancelled.

Before my 2 pm class was over, EMU alerts sent a final message (again, three ways) to announce all was clear. But of course a lot of students were still freaked out– and for good reason, I guess. After my last class was over, I talked with one student who said he was nervous about spending the night in his dorm room, and I kind of understand that. But at the same time, maybe there was never anything to be afraid of?

I’m not saying that EMU overreacted because, obviously, all it takes is that 0.0001% chance where bombs go off simultaneously in the dorms like at the end of Fight Club. It’s not unlike a fire alarm going off in the dorms in the middle of the night (a regular occurrence, I’m told), which everyone knows (or at least assumes) is because of some jackass. But you still have to evacuate, you still have to call the fire department, etc.

The whole thing pisses me off. At least it was a hoax and not a shooter, something that is always somewhere on everyone’s minds nowadays. At least no one was hurt beyond being freaked out for a while. And at least there are only about two weeks before the end of the semester.

Once Again, the Problem is Not AI (a Response to Justus’ and Janos’ “Assessment of Student Learning is Broken”)

I most certainly do not have the time to be writing this because it’s the height of the “assessment season” (i.e., grading) for several different assignments my students have been working on for a while now. That’s why posting this took me a while– I wrote it during breaks in a week-long grading marathon. In other words, I have better things to do right now. But I find myself needing to write a bit in response to Zach Justus and Nik Janos’ Inside Higher Ed piece “Assessment of Student Learning is Broken,” and I figured I might as well make it into a blog entry. I don’t want to be a jerk about any of this– I’m sure Justus and Janos are swell guys and everything– but this op-ed bothered me a lot.

Justus and Janos are both professors at Chico State in California; Justus is a professor in Communications and is the director of the faculty development program there, and Janos is in sociology. They begin their op-ed about AI “breaking” assessment quite briskly:

Generative artificial intelligence (AI) has broken higher education assessment. This has implications from the classroom to institutional accreditation. We are advocating for a one-year pause on assessment requirements from institutions and accreditation bodies. We should divert the time we would normally spend on assessment toward a reevaluation of how to measure student learning. This could also be the start of a conversation about what students need to learn in this new age.

I hadn’t thought a lot about how AI might figure into institutional accreditation, so I kept reading. And that’s where I first began to wonder about the argument they’re making, because very quickly, they seem to equate institutional assessment with assessment in individual classes (grading). Specifically, most of this piece is about the problems AI (supposedly) causes for a very specific assignment in a very specific sociology class.

I have no direct experience with institutional assessment, but as part of the Writing Program Administration work I’ve dipped into a few times over the years, I have some experience with program assessment. In those kinds of assessments, we’re looking at the forest rather than the individual trees. For example, as part of a program assessment, the WPAs might want to consider the average grades across all sections of first year writing. That sort of measure could tell us stuff about the overall pass rate and grade distribution across sections, and so on. But that data can’t tell you much about grades for specific students or the practices of a specific instructor. As far as I can tell, institutional assessments are similar “big picture” evaluations.

Justus and Janos see it differently, I guess:

“Take an introductory writing class as an example. One instructor may not have an AI policy, another may have a “ban” in place and be using AI detection software, a third may love the technology and be requiring students to use it. These varied policies make the aggregated data as evidence of student learning worthless.”

Yes, different teachers across many different sections of the same introductory writing class take different approaches to teaching writing, including with (or without) AI. That’s because individual instructors are, well, individuals– plus each group of students is different as well. Some of Justus and Janos’ reaction to these differences probably has to do with their disciplinary presumptions about “data”: if it’s not uniform and not something that can be quantified, then it is, as they say, “worthless.” Of course, in writing studies, we have no problem with much fuzzier and more qualitative data. So from my point of view, as long as the instructors are more or less following the same outcomes/curriculum, I don’t see the problem.

But like I said, Justus and Janos aren’t really talking about institutional assessment. Rather, they devote most of this piece to a very specific assignment. Janos teaches a sociology class that fulfills an institutional writing competency requirement for the major. The class has students “writing frequently” with a variety of assignments for “nonacademic audiences,” like “letters-to-the-editor, … encyclopedia articles, and mock speeches to a city council” meeting. Justus and Janos say “Many of these assignments help students practice writing to show general proficiency in grammar, syntax and style.” That may or may not be true, but it’s not at all clear how this was assigned or what sort of feedback students received.

Anyway, one of the key parts of this class is a series of assignments about:

“a foundational concept in sociology called the sociological imagination (SI), developed by C. Wright Mills. The concept helps people think sociologically by recognizing that what we think of as personal troubles, say being homeless, are really social problems, i.e., homelessness.”

It’s not clear to me what students read and study to learn about SI, but it’s a concept that’s been around for a long time– Mills wrote about it in a book in the 1950s. So not surprisingly, there is A LOT of information about this available online, and presumably that has been the case for years.

Students read about SI, and as part of their study, they “are asked to provide, in their own words and without quotes, a definition of the SI.” To help do this, students do activities like “role play,” in which they pretend they are talking to friends or family about a social problem such as homelessness. “Lastly” (to quote at length one last time):

…students must craft a script of 75 words or fewer that defines the SI and uses it to shed light on the social problem. The script has to be written in everyday language, be set in a gathering of friends or family, use and define the concept, and make one point about the topic.

Generative AI, like ChatGPT, has broken assessment of student learning in an assignment like this. ChatGPT can meet or exceed students’ outcomes in mere seconds. Before fall 2022 and the release of ChatGPT, students struggled to define the sociological imagination, so a key response was to copy and paste boilerplate feedback to a majority of the students with further discussion in class. This spring, in a section of 27 students, 26 nailed the definition perfectly. There is no way to know whether students used ChatGPT, but the outcomes were strikingly different between the pre- and post-AI era.

Hmm. Okay, I have questions.

  • You mean to tell me that the key deliverable/artifact that students produce in this class to demonstrate that they’ve met a university-mandated gen ed writing requirement is a passage of 75 words or fewer? That’s it? Really. Really? I am certainly not saying that being able to produce a lot of text should be the main factor for demonstrating “writing competency,” but this seems more than weird and hard to believe.
  • Is there any instructional apparatus for this assignment at all? In other words, do students have to produce drafts of this script? Is there any sort of in-class work with the role-play that’s documented in some way? Any reflection on the process? Anything?
  • I have no idea what the reading assignments and lectures were for this assignment, so I could very well be missing a key concept with SI. But I feel like I could have copied and pasted together a pretty good script just based on some Google searching around– if I were inclined to cheat in the first place. So given that, why are Justus and Janos confident that students hadn’t been cheating before fall 2022?
  • The passage about the “before fall 2022” approach to teaching this writing assignment says a lot. It sounds like there was no actual discussion of what students wrote, and that the main response to students back then was copied-and-pasted “boilerplate feedback.” So, in assessing this assignment, was Janos evaluating the unique choices students made in crafting their SI scripts? Or rather, was he evaluating these SI scripts for the “right answer” he provided in the readings or lectures?
  • And as Justus and Janos note, there is no good way to know for certain if a student handed in something made in part or in whole by AI. So why are they assuming that all of those students who got the “right answer” with their SI scripts were cheating?

So, Justus and Janos conclude, because instructors are now evaluating “some combination of student/AI work,” it is simply impossible to make any assessment for institutional accreditation. Their solution is “we should have a one-year pause wherein no assessment is expected or will be received.” What kinds of assessments are they talking about? Why only a one-year pause? None of this is clear.

Clearly, the problem here is not institutional assessment or the role of AI; the problem is the writing assignment. The solutions are also obvious.

First, there’s teaching writing versus merely assigning it. I have blogged a lot about this in the last couple of years (notably here), but teaching writing means a series of assignments where students need to “show their work.” That seems extremely doable with this particular assignment, too. Sure, it would require more actual instruction and evaluation than “boilerplate feedback,” but this seems like a small class (27 students), so that doesn’t seem like that big of a deal.

Second, if you have an assignment in anything that can be successfully completed with a simple prompt to ChatGPT (as in “write a 75 word script explaining SI in everyday language”), then that’s definitely a bad assignment now. That’s the real “garbage in, garbage out” issue here.

And third, one of the things that AI has made me realize is that if an instructor has an assignment in a class– and I mean any assignment in any class– which can be successfully completed without having any experience or connection to that instructor or the class, then that’s a bad assignment. Again, that seems like an extremely easy thing to address with the assignment that Justus and Janos describe. They’d have to make changes to the assignment and its assessment, of course, but doesn’t that make more sense than trying to argue that we should completely revamp the institutional accreditation process?