Is AI Going to be “Something” or “Everything?”

Way back in January, I applied for release time from teaching for one semester next year– either a sabbatical or what’s called here a “faculty research fellowship” (FRF)– in order to continue the research I’ve been doing about teaching online during Covid. This is work I’ve been doing since fall 2020, including a Zoom talk at a conference in Europe and a survey I ran for about six months; from that survey, I was able to recruit and interview a bunch of faculty about their experiences. I’ve gotten a lot out of this work already: a couple of conference presentations (albeit in the kind of useless “online/on-demand” format), an article published as a website (which I had to code myself!), and, just last year, one of those FRFs.

Well, a couple weeks ago, I found out that I will not be on sabbatical or FRF next year. My proposal, which was about seeking time to code and analyze all of the interview transcripts I collected last year, got turned down. I am not complaining about that: these awards are competitive, and I’ve been fortunate enough to receive several of these before, including one for this research. But not getting release time is making me rethink how much I want to continue this work, or if it is time for something else.

I think studying how Covid impacted faculty attitudes about online courses is important and worth doing. But it is also looking backwards, and it feels a bit like an autopsy or one of those commissioned reports. And let’s be honest: how many of us want to think deeply about what happened during the pandemic, recalling the mistakes that everyone already knows they made? A couple of years after the worst of it, I think we all have a better understanding of why people wanted to forget the 1918 pandemic.

It’s 20/20 hindsight, but I should have put together a sabbatical/research leave proposal about AI. With good reason, the committee that decides on these release time awards tends to favor proposals for “cutting edge” work. They also like to fund releases for faculty with book contracts who are finishing things up, which is why I was lucky enough to secure these awards at both the beginning and the end of my MOOC research.

I’ve obviously been blogging about AI a lot lately, and I have casually started amassing quite a number of links to news stories and other resources related to Artificial Intelligence in general and ChatGPT and OpenAI in particular. As I type this entry in April 2023, I already have over 150 different links without even trying– I mean, this is all stuff that just shows up in my regular diet of social media and news. I even have a small invited speaking gig about writing and AI, which came about because of a blog post I wrote back in December— more on that in a future post, I’m sure.

But when it comes to me pursuing AI as my next “something” to research, I feel like I have two problems. First, it might already be too late for me to catch up. Sure, I’ve been getting some attention by blogging about it, and I had a “writing with GPT-3” assignment in a class I taught last fall, which I guess kind of puts me at least closer to being current with this stuff in terms of writing studies. But I also know there are already folks in the field (and I know some of these people quite well) who have been working on this for years longer than me.

Plus a ton of folks are clearly rushing into AI research at full speed. Just the other day, the CWCON at Davis organizers sent around a draft of the program for the conference in June. The Call For Proposals they released last summer describes the theme of this year’s event, “hybrid practices of engagement and equity.” I skimmed the program to get an idea of the overall schedule and some of what people were going to talk about, and there were a lot of mentions of ChatGPT and AI– which makes me think a lot of people are probably not going to be talking about the CFP theme at all.

This brings me to the bigger problem I see with researching and writing about AI: it looks to me like this stuff is moving very quickly from being “something” to “everything.” Here’s what I mean:

A research agenda/focus needs to be “something” that has some boundaries. MOOCs were a good example of this. MOOCs were definitely “hot” from around 2012 to 2015 or so, and there was a moment back then when folks in comp/rhet thought we were all going to be dealing with MOOCs for first year writing. But even then, MOOCs were just a “something” in the sense that you could be a perfectly successful writing studies scholar (even someone specializing in writing and technology) and completely ignore MOOCs.

Right now, AI is a myriad of “somethings,” but this is moving very quickly toward “everything.” It feels to me like very soon (five years, tops), anyone who wants to do scholarship in writing studies is going to have to engage with AI. Successful (and even mediocre) scholars in writing studies (especially those specializing in writing and technology) are not going to be able to ignore AI.

This all reminds me a bit of what happened with word processing technology. Yes, this really was something people studied and debated way back when. In the 1980s and early 1990s, there were hundreds of articles and presentations about whether or not to use word processing to teach writing— for example, “The Word Processor as an Instructional Tool: A Meta-Analysis of Word Processing in Writing Instruction” by Robert L. Bangert-Drowns, or “The Effects of Word Processing on Students’ Writing Quality and Revision Strategies” by Ronald D. Owston, Sharon Murphy, and Herbert H. Wideman. These articles were both published in the early 1990s in major journals, and both try to answer the question of which one is “better.” (By the way, most, but far from all, of these studies concluded that word processing is better in the sense that it helped students generate more text and revise more frequently. It’s also worth mentioning that a lot of this research overlaps with studies about the role of spell-checking and grammar-checking in writing pedagogy.)

Yet in my recollection of those times, this comparison between word processing and writing by hand was rendered irrelevant because everyone– teachers, students, professional writers (at least all but the most stubborn, as Wendell Berry declares in his now cringy and hopelessly dated short essay “Why I Am Not Going to Buy a Computer”)– switched to word processing software on computers to write. When I started teaching as a grad student in 1988, I required students to hand in typed papers, and I strongly encouraged them to write at least one of their essays with a word processing program. Some students complained because they had never been asked to type anything in high school. By the time I started my PhD program five years later in 1993, students all knew they needed to type their essays on a computer, generally with MS Word.

Was this shift a result of some research consensus that using a computer to type texts was better than writing texts out by hand? Not really, and obviously, there are still lots of reasons why people write some things by hand– a lot of personal writing (poems, diaries, stories, that kind of thing) and a lot of note-taking. No, everyone switched because everyone realized word processing made writing easier (but not necessarily better) in lots and lots of different ways, and that was that. Even in the midst of this panicky moment about plagiarism and AI, I have yet to read anyone seriously suggest that we make our students give up Word or Google Docs and require them to turn in handwritten assignments. So, as a researchable “something,” word processing disappeared because (of course) everyone everywhere who writes uses some version of word processing, which means the issue is settled.

One of the other reasons I’m using word processing scholarship as my example here is that both Microsoft and Google have made it clear that they plan on integrating their versions of AI into their suites of software– and that would include MS Word and Google Docs. This could roll out just in time for the start of the fall 2023 semester, maybe earlier. Assuming this is the case, people who teach any kind of writing at any level are not going to have time to debate whether AI tools will be “good” or “bad,” and we’re not going to be able to study any sort of best practices either. This stuff is just going to be a part of the everything, and for better or worse, that means the issue will soon be settled.

And honestly, I think the “everything” of AI is going to impact, well, everything. It feels to me a lot like when “the internet” (particularly with the arrival of web browsers like Mosaic in 1993) became everything. I think the shift to AI is going to be that big, and it’s going to have as big of an impact on every aspect of our professional and technical lives– certainly every aspect that involves computers.

Who the hell knows how this is all going to turn out, but when it comes to what this means for the teaching of writing, as I’ve said before, I’m optimistic. Just as the field adjusted to word processing (and spell-checkers and grammar-checkers, and really just the whole firehose of text from the internet), I think we’ll be able to adjust to this new something-to-everything too.

As far as my scholarship goes, though: for reasons, I won’t be eligible for another release from teaching until the 2025-26 school year. I’m sure I’ll keep blogging about AI and related issues, and maybe that will turn into a scholarly project. Or maybe we’ll all be on to something entirely different in three years….

 

What Would an AI Grading App Look Like?

While a whole lot of people (academics and non-academics alike) have been losing their minds lately about the potential of students using ChatGPT to cheat on their writing assignments, I haven’t read/heard/seen much about the potential of teachers using AI software to read, grade, and comment on student writing. Maybe it’s out there in the firehose stream of stories about AI I see every day (I’m trying to keep up a list on pinboard) and I’ve just missed it.

I’ve searched and found some discussion of using ChatGPT to grade on Reddit (here and here), and I’ve seen other posts about how teachers might use the software to do things other than grading, but that’s about it. In fact, the reason I’m thinking about this again now is not because of another AI story but because I watched a South Park episode about AI called “Deep Learning.” South Park has been a pretty uneven show for several years, but if you are a fan and/or if you’re interested in AI, this is a must-see. A lot happens in this episode, but my favorite reaction to ChatGPT comes from the kids’ infamous teacher, Mr. Garrison. While Garrison is complaining about grading a stack of long and complicated essays (which the students completed with ChatGPT), Rick (his boyfriend) tells him about the software, and Mr. Garrison has far too honest a reaction: “This is gonna be amazing! I can use it to grade all my papers and no one will ever know! I’ll just type the title of the essay in, it’ll generate a comment, and I don’t even have to read the stupid thing!”

Of course, even Mr. Garrison knows that would be “wrong” and he must keep this a secret. That probably explains why I still haven’t come across much about an AI grading app. But really though: shouldn’t we be having this discussion? Doesn’t Mr. Garrison have a point?

Teacher concerns about grading/scoring writing with computers are not new, and one of the nice things about having kept a blog so long is that I can search and “recall” some of these past discussions. Back in 2005, I had a post about NCTE coming out against the SAT writing test and machine scoring of those tests. There was also a link in that post to an article about a sociologist at the University of Missouri named Edward Brent who had developed a way of giving students feedback on their writing assignments. I couldn’t find the original article, but this one from the BBC in 2005 covers the same story. It seems like it was a tool developed very specifically for the content of Brent’s courses, and I’m guessing it was quite crude by today’s standards. I do think Brent makes a good point on the value of these kinds of tools: “It makes our job more interesting because we don’t have to deal so much with the facts and concentrate more on thinking.”

About a decade ago, I also had a couple of other posts about machine grading, both of which grew out of discussions on the now mostly defunct WPA-L. There was this one from 2012, which included a link to a New York Times article about Educational Testing Service’s product “e-rater,” “Facing a Robo-Grader? Just Keep Obfuscating Mellifluously.” The article features Les Perelman, who was the director of writing at MIT, demonstrating ways to fool e-rater with nonsense and inaccuracies. At the time, I thought Perelman was correct, but also that a good argument could be made that if a student was smart enough to fool e-rater, maybe they deserved the higher score.

Then in 2013, there was another kerfuffle on WPA-L about machine grading that involved a petition drive at the website humanreaders.org against machine grading. In my post back then, I agreed with the main goal of the petition, that “Machine grading software can’t recognize things like a sense of humor or irony, it tends to favor text length over conciseness, it is fairly easy to circumvent with gibberish kinds of writing, it doesn’t work in real world settings, it fuels high stakes testing, etc., etc., etc.” But I also had some questions about all that. I made a comparison between these new tools and the initial resistance to spell checkers, and then I also wrote this:

As a teacher, my least favorite part of teaching is grading. I do not think that I am alone in that sentiment. So while I would not want to outsource my grading to someone else or to a machine (because again, I teach writing, I don’t just assign writing), I would not be against a machine that helps make grading easier. So what if a computer program provided feedback on a chunk of student writing automatically, and then I as the teacher followed behind those machine comments, deleting ones I thought were wrong or unnecessary, expanding on others I thought were useful? What if a machine printed out a report that a student writer and I could discuss in a conference? And from a WPA point of view, what if this machine helped me provide professional development support to GAs and part-timers in their commenting on students’ work?

By the way, an ironic/odd tangent about that post: the domain name humanreaders.org has clearly changed hands. In 2013, it looked like this (this link is from the Internet Archive): basically, a petition form. The domain now redirects to this page on some content farm website called we-heart.com. This page, from 2022, is a list of the “six top online college paper writing websites today.”

Anyway, let me state the obvious: I’m not suggesting an AI application that replaces all teacher feedback (as Mr. Garrison imagines) at all. Besides the fact that it wouldn’t be “right” no matter how you twist the ethics of it, I don’t think it would work well– yet. Grading/commenting on student writing is my least favorite part of the job, so I understand where Mr. Garrison is coming from. Unfortunately, though, reading/grading/commenting on student writing is essential to teaching writing. I don’t know how I can evaluate a student’s writing without reading it, and I also don’t know how to help students think about how to revise their writing (and, hopefully, learn how to apply these lessons and advice to writing they do beyond my class) without making comments.

However, this is A LOT of work that takes A LOT of time. I’ve certainly learned some things that make grading a bit easier than it was when I started. For example, I’ve learned that less is more: marking up every little mistake in the paper and then writing a really long end comment is a waste of time because it confuses and frustrates students, and it literally takes longer. But it still takes me about 15-20 minutes to read and comment on each long-ish student essay (typically a bit shorter than this blog post). So in a full writing class (25 students), it takes me 8-10 hours to completely read, comment on, and grade all of their essays; multiply that by two or three or more (since I’m teaching three writing classes a term), and it adds up pretty quickly. Plus we’re talking about student writing here. I don’t mind reading it, and students often have interesting and inspiring observations, but by definition, these are writers who are still learning and who often have a lot to learn. So this isn’t like reading The New Yorker or a long novel or something you can get “lost” in as a reader. This ain’t reading for fun– and it’s also one of the reasons why, after reading a bunch of student papers in a day, I’m much more likely to just watch TV at night.

So hypothetically, if there was a tool out there that could help me make this process faster, easier, and less unpleasant, and if this tool also helped students learn more about writing, why wouldn’t I want to use it?

I’ve experimented a bit with ChatGPT with prompts along the lines of “offer advice on how to revise and improve the following text,” followed by a pasted-in student essay. The results are a mix of (IMO) good, bad, and wrong, and mostly written in the robotic voice typical of AI writing. I think students would have a hard time sorting through these mixed messages. Plus, I don’t think there’s a way (yet) for ChatGPT to comment on specific passages in a piece of student writing: that is, it can provide an overall end comment, but it cannot comment on individual sentences and paragraphs and have those comments appear in the margins like the comment feature in Word or Google Docs. Like most writing teachers, that’s a lot of the commenting I do, so an AI that can’t do that (yet) just isn’t that useful to me.
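For what it’s worth, here’s roughly what those experiments look like if you move them from the ChatGPT window into a script. This is just a sketch of my assumptions, not a tested tool: the prompt wording is mine, and the commented-out API call reflects OpenAI’s chat completions interface as it stands in spring 2023, which could easily change.

```python
# A hedged sketch of my "offer advice on how to revise" experiments as code
# rather than copy-and-paste into the ChatGPT window. The prompt wording is
# my own invention; nothing here is an official grading product.

def build_feedback_request(essay_text: str) -> list[dict]:
    """Wrap a student essay in a prompt asking for an overall end comment."""
    system_prompt = (
        "You are a first-year writing instructor. Offer advice on how to "
        "revise and improve the following student essay. Comment on the "
        "argument, organization, and use of evidence; do not rewrite the "
        "essay for the student."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": essay_text},
    ]

# The messages would then be sent along the lines of:
#   openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                messages=build_feedback_request(essay))
# and the reply would serve as the overall "end comment." Note that nothing
# in this setup can place comments in the margins next to specific
# sentences or paragraphs, which is exactly the limitation I mean.
```

Again, this only gets you the single end comment; the margin-comment problem remains untouched.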

But the key phrase there is “yet,” and it does not take a tremendous amount of imagination to figure out how this could work in the near future. For example, what if I could train my own grading AI by feeding it a few classes’ worth of previous student essays with my comments? I don’t know how that would work logistically, but I am willing to bet that with enough training, a Krause-centric version of ChatGPT would anticipate most of the comments I would make myself on a student writing project. I’m sure it would be far from perfect, and I’d still want to do my own reading and evaluation. But I bet this would save me a lot of time.
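To make that idea a little more concrete: OpenAI’s fine-tuning feature (as of early 2023) takes JSONL files of prompt/completion pairs, so the “feed it my old comments” idea might start from training data shaped something like this. Everything here is speculative; the prompt/completion field names follow OpenAI’s fine-tuning format, but the separator and wording are made up for illustration.

```python
# A purely speculative sketch of training data for a "Krause-centric"
# grading model. Each example pairs a past student essay with the end
# comment I actually wrote on it, serialized as one JSONL line.
import json

def make_training_line(essay_text: str, instructor_comment: str) -> str:
    """Serialize one essay-plus-comment pair as a JSONL training line."""
    record = {
        # The "###" separator marking the end of the prompt is a common
        # convention from the fine-tuning docs, not anything magical.
        "prompt": f"Comment on this student essay:\n\n{essay_text}\n\n###\n\n",
        # Completions conventionally start with a leading space.
        "completion": " " + instructor_comment.strip(),
    }
    return json.dumps(record)

# A few semesters' worth of these lines, one per graded essay, would make
# up the training file; with luck, the tuned model would start anticipating
# the kinds of comments I tend to make anyway.
```

That’s the whole “logistics” in miniature: the hard part wouldn’t be the file format, it would be collecting enough essays-with-comments (and getting permission to use them).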

Maybe, some time in the future, this will be a real app. But there’s another use of ChatGPT I’ve been playing around with lately, one I hesitate to try but one that would both help some of my struggling students and save me time on grading. I mentioned this in my first post about using ChatGPT to teach way back in December. What I’ve found in my ChatGPT noodling (so far) is that if I take a piece of writing that has a ton of errors in it (incomplete sentences, punctuation in the wrong place, run-on/meandering sentences, stuff like that– all very common issues, especially for first year writing students) and prompt ChatGPT to revise the text so it is grammatically correct, it does a wonderful job. It doesn’t change the meaning or argument of the writing– just the grammar. It generally doesn’t make different word choices, and it certainly doesn’t make the student’s argument “smarter”; it just arranges everything so it’s correct.

That might not seem like much, but for a lot of students who struggle with getting these basics right, using ChatGPT like this could really help. And to paraphrase Edward Brent from way back in 2005, if students could use a tool like this to at least deal with basic issues like writing more or less grammatically correct sentences, then I might be able to spend more time concentrating on the student’s analysis, argument, use of evidence, and so forth.

And yet– I don’t know, it even feels to me like a step too far.

I have students with diagnosed learning difficulties of one sort or another who show me letters of accommodation from the campus disability resource center which specifically tell me I should allow them to use Grammarly in their writing process. I encourage students to go to the writing center all the time, in part because I want my students– especially the struggling ones– to sit down with a consultant who will help them go through their essays so they can revise and improve them. I never have a problem with students wanting to get feedback on their work from a parent or a friend who is “really good” at writing.

So why does it feel like encouraging students to try this in ChatGPT is more like cheating than it does for me to encourage students to be sure to spell check and to check out the grammar suggestions made by Google Docs? Is it too far? Maybe I’ll find out in class next week.

The Problem is Not the AI

The other day, I heard the opening of this episode of the NPR call-in show 1A, “Know It All: ChatGPT In the Classroom.” It opened with this recorded comment from a listener named Kate:

“I teach freshman English at a local university, and three of my students turned in chatbot papers written this past week. I spent my entire weekend trying to confirm they were chatbot written, then trying to figure out how to confront them, how to turn them in as plagiarists, because that is what they are, and how I’m going to penalize their grade. This is not pleasant, and this is not a good temptation. These young men’s academic careers now hang in the balance because now they’ve been caught cheating.”

Now, I didn’t listen to the show for long beyond this opener (I was driving around running errands), and based on what’s available on the website, the discussion also included information about incorporating ChatGPT into teaching. Also, I don’t want to be too hard on poor Kate; she’s obviously really flustered and I am guessing there were a lot of teachers listening to Kate’s story who could very personally relate.

But look, the problem is not the AI.

Perhaps Kate was teaching a literature class and not a composition and rhetoric class, but let’s assume whatever “freshman English” class she was teaching involved a lot of writing assignments. As I mentioned in the last post I had about AI and teaching with GPT-3 back in December, there is a difference between teaching writing and assigning writing. This is especially important in classes where the goal is to help students become better at the kind of writing skills they’ll need in other classes and “in life” in general.

Teaching writing means a series of assignments that build on each other, that involve brainstorming and prewriting activities, and that involve activities like peer reviews, discussions of revision, reflection from students on the process, and so forth. I require students in my first year comp/rhet classes to “show their work” through drafts, in a way similar to how they’d be expected to in an Algebra or Calculus course. It’s not just the final answer that counts. In contrast, assigning writing is when teachers give an assignment (often a quite formulaic one, like write a 5 paragraph essay about ‘x’) with no opportunities to talk about getting started, no consideration of audience or purpose, no interaction with the other students who are trying to do the same assignment, and no opportunity to revise or reflect.

While obviously more time-consuming and labor-intensive, teaching writing has two enormous advantages over only assigning writing. First, we know it “works” in that this approach improves student writing– or at least we know it works better than only assigning writing and hoping for the best. We know this because people in my field have been studying this for decades, despite the fact that there are still a lot of people just assigning writing, like Kate. Second, teaching writing makes it extremely difficult to cheat in the way Kate’s students have cheated– or maybe cheated. When I talk to my students about cheating and plagiarism, I always ask “why do you think I don’t worry much about you doing that in this class?” Their answer typically is “because we have to turn in all this other stuff too” and “because it would be too much work,” though I also like to believe that because of the way the assignments are structured, students become interested in their own writing in a way that makes cheating seem silly.

Let me just note that what I’m describing has been the conventional wisdom among specialists in composition and rhetoric for at least the last 30 (and probably more like 50) years. None of this is even remotely controversial in the field, nor is any of this “new.”

But back to Kate: certain that these three students turned in “chatbot papers,” she spent the “entire weekend” working to prove these students committed the crime of plagiarism and that they deserve to be punished. She thinks this is a remarkably serious offense– their “academic careers now hang in the balance”– but I don’t think she’s going through all this because of some sort of abstract academic ideal. No, this is personal. In her mind, these students did this to her, and she’s going to punish them. This is beyond a sense of justice. She’s doing this to get even.

I get that feeling, that sense that her students betrayed her. But there’s no point in making teaching about “getting even” or “winning” because as the teacher, you create the game and the rules, you are the best player and the referee, and you always win. Getting even with students is like getting even with a toddler.

Anyway, let’s just assume for a moment that Kate’s suspicions are correct and these three students handed in essays created entirely by ChatGPT. First off, anyone who teaches classes like “Freshman English” should not need an entire weekend or any special software to figure out if these essays were written by an AI. Human writers– at all levels, but especially comparatively inexperienced human writers– do not compose the kind of uniform, grammatically correct, and robotically plodding prose generated by ChatGPT. Every time I see an article with a passage of text that asks “was this written by a robot or a student,” I always guess right– well, almost always I guess right.

Second, if Kate did spend her weekend trying to find “the original” source ChatGPT used to create these essays, she certainly came up empty-handed. That was the old-school way of catching plagiarism cheats: you look for the original source the student plagiarized and confront the student with it, courtroom-drama style. But ChatGPT (and other AI tools) do not “copy” from other sources; rather, the AI creates original text every time. That’s why there have been several articles crediting an AI as a “co-author.”

Instead of wasting a weekend, what Kate should have done is called each of these students into her office or taken them aside one by one in a conference and asked them about their essays. If the students cheated, they would not be able to answer basic questions about what they handed in, and 99 times out of 100, the confronted cheating student will confess.

Because here’s the thing: despite all the alarm out there that all students are cheating constantly, my experience has been the vast majority do not cheat like this, and they don’t want to cheat like this. Oh sure, students will sometimes “cut corners” by looking over to someone else’s answers on an exam, or maybe by adding a paragraph or two from something without citing it. But in my experience, the kind of over-the-top sort of cheating Kate is worried about is extremely rare. Most students want to do the right thing by doing the work, trying to learn something, and by trying their best– plus students don’t want to get in trouble from cheating either.

Further, the kinds of students who do try to blatantly plagiarize are not “criminal masterminds.” Far from it. Rather, students blatantly plagiarize when they are failing and desperate, and they are certainly not thinking of their “academic careers.” (And as a tangent: seems to me Kate might be overestimating the importance of her “Freshman English” class a smidge).

But here’s the other issue: what if Kate actually talked to these students, and what if it turned out they either did not realize using ChatGPT was cheating, or they used ChatGPT in a way that wasn’t significantly different from getting some help from the writing center or a friend? What do you do then? Because– and again, I wrote about this in December— when I asked students to use GPT-3 (OpenAI’s software before ChatGPT) to write an essay and then reflect on that process, a lot of them described the software as a brainstorming tool, sort of like a “coach,” and not a lot different from getting help from others in peer review or from a visit to the writing center.

So like I said, I don’t want to be too hard on Kate. I know that there are a lot of teachers who are similarly freaked out about students using AI to cheat, and I’m not trying to suggest that there is nothing to worry about either. I think a lot of what is being predicted as the “next big thing” with AI is either a lot further off in the future than we might think, or it is in the same category as other famous “just around the corner” technologies like flying cars. But no question that this technology is going to continue to improve, and there’s also no question that it’s not going away. So for the Kates out there: instead of spending your weekend on the impossible task of proving that those students cheated, why not spend a little of that time playing around with ChatGPT and seeing what you find out?

The Year That Was 2022 (turning some corners?)

If 2020 was horrible and 2021 was, I don’t know, what?, then I think the best description of 2022 was “shows improvement.”

My first prediction of what was to come in 2022 (which I made in that last post of 2021) turned out to be wrong: we did not go to the MLA convention in Washington, D.C. because Covid numbers (oh hi, Omicron!) were through the roof. MLA’s approach to dealing with Covid was remarkably reasonable. As I understand it (from what my wife said, since she was the one participating), the conference organizers told folks they could still present f2f if they wanted (because it was too late for MLA to cancel the whole thing), but if people wanted to present electronically via synchronous conferencing software, they could do that instead. All the panel chairs/organizers had to do was give the MLA a link to how they were going to do it. In my opinion, that was a smart way to schedule and adjust a conference during Covid: let presenters figure out their own synchronous conferencing software instead of putting all the presentations and materials in a junky content management system behind a firewall. I wish my field’s conferences had taken this approach. Anyway, Annette did her presentation via Zoom with a typical conference audience; D.C. would have to come later.

January was the start of Annette’s and my faculty research fellowships, and for me, that meant doing a whole lot of interviews with folks who had earlier participated in my “Online Teaching and the ‘New Normal'” survey, which is about the experiences of teaching online during Covid. I ended up doing 37 or so of these interviews, and I’m still trying to figure out how I’m going to analyze the pile of transcripts I’ve got. The sun rose and I took a picture. Travel included Annette going on a trip with friends to Puerto Rico, and at about the same time, I went down to Orange Beach, Alabama, where I met up with my parents and my sisters to celebrate my father’s 80th birthday. Movies included the kind of forgettable Nightmare Alley and the rest of The Beatles documentary Get Back!

February was work stuff– interviews and also some other writing, but also working off and on on my CCCCs presentation. I had been very much looking forward to going to the f2f conference in Chicago in March 2022, but that was (prematurely and wrongly, IMO) cancelled. I continued to make bread. Did more interviews. Saw (among many other things) Licorice Pizza and The Big Lebowski for about the 90th time.

March was the CCCCs Online, which was, um, unpleasant. I think this post from Mike Edwards (where he does quote me, actually) sums up things fairly well. Here’s also a link to my first and second posts about the conference. I won’t be attending this year because (for like the fifth time in a row) the theme for the conference has nothing to do with the kind of research and scholarship I do. But that’s okay. Maybe I’ll go again someday, maybe I won’t.

March also took us on the road to the Charleston, South Carolina area to get out of the cold for at least a while. We stopped in Charleston, West Virginia on the way (gross) and then spent a night in Durham, North Carolina to catch up with Rachel and Collin and a lovely meal out at a French restaurant they like. Then we spent a week at a condo on Seabrook Island. It was a pretty good get-away: we got some work done (we both did a lot of reading and writing things), went into Charleston a couple times (meh, it was nice I guess), went on a cool plantation tour, I attended (via Zoom) a department meeting while walking on the beach one nice day, and we did have some good food here and there too. It was all nice enough and I don’t rule out going again, but it wasn’t quite our thing, I don’t think. I started working on this Computers and Composition Online article based on my online teaching survey (more on that later too). Among other things, watched Painting With John on HBO, another season of Survivor, rewatched The French Dispatch.

April and more interviewing, more working on the CCO piece, and starting to work on the Computers and Writing Conference session. I was originally going to go to that (it was in Greenville, NC), but life/home plans got in the way. So once again I was online, and also once again, it was “on demand,” which is to say that I also ended up presenting to the online equivalent of an empty room– not the first time I’ve done that, but still, a group like computers and writing should do better. I posted my “talk” here. I’m afraid I will probably not be able to be there face to face for the 2023 CWCON at UC-Irvine; that trip is still TBA, though those organizers seem more committed to hosting a viable online experience. In April, I saw probably the best movie I’ve seen this year, Everything Everywhere All At Once, and listened to (or started listening to) a book by Johann Hari called Stolen Focus which I’m going to assign in WRTG 121 this coming winter term. Started doing yard stuff, Annette got a kayak, I baked still more bread. Oh, also saw a movie called Jesus Shows You the Way to the Highway that was bonkers.

May and more interviewing, more working on the CCO piece, the CWCON 22 happened (I wasn’t as involved as I should have been, but I did poke around at some other “on demand” materials that were interesting), started planting stuff in the garden, started golfing some, ate a fair amount of asparagus, etc. And then at the end of the month, we went up north to stay at a fantastic house on Big Glen Lake. We were planning on going back there in 2023, but after a series of events I don’t understand (was the house sold? is there a problem with the rental company? something else?), we’re staying someplace different. Stay tuned for early June 2023. Among other things, we watched Gog.

By June, I started having some “interesting” discussions with the editors of the CCO about my article. Let’s just say that the reviewer involved in the process was “problematic” and leave it at that. Eventually, I think the editors were able to give me some good direction that helped me make this into a good piece (IMO), but it wasn’t easy. More interviews, but that was the last of them. There was more gardening, more going out for lunch while Will was visiting, more of “the work,” seeing movies, etc.

July was a lot of travel. We went to D.C.– I suppose because the trip in January was scrubbed– and then to New Haven to see Will, then to New York City via train for a couple of nights (saw our friend Annette, a kind of off-Broadway production of Little Shop of Horrors, and went walking on the High Line and to the Whitney Museum), then to Portland, Maine (only for a night– I’d go back for sure), and then to Bar Harbor and Acadia National Park. It was a really lovely trip. I think I am more fond of the grand “road trip” than Annette is, but she played along. After the cruise (see below), I believe I have two states left on my “having at least passed through” list: Rhode Island (which I figure we can tick off the next time we go out to visit Will) and North Dakota, which might require a more purposeful trip. Among other things this month, watched at least one Vincent Price movie.

August was more travel– and getting ready to teach too. We went to Iowa to celebrate my mother’s 80th birthday party, and then (of all things!) we went on a cruise to Alaska. Among the lessons learned from that trip: if you are going to take a cruise to Alaska, you need to go for longer than just the 5 nights we went. Highlights include actually touching ground in Ketchikan, Alaska (briefly) and a stop in the delightful town of Victoria, British Columbia. Then back here and getting ready for teaching again– for the first time in eight months.

September and EMU started up again– at least for about a week. Then the faculty went on strike, which was the first time that’d happened around here since 2006. I blogged about some of this back here. It was interesting being one of the old hands around here this time around. I got here in 1998, and by 2006, I think we had been on strike or close to it twice before, and the 2006 strike was “the big one.” So 16 years between strikes was a long time. It was disruptive and chaotic and frustrating, but also necessary and probably the most justifiable strike I’ve experienced, and we did end up getting a better deal than we would have otherwise. Oh, and I need to note this here (since I will someday look back at this post and go “oh yeah, that’s right!”): One of the things that really seemed to make the administration want to settle things up is that Michael Tew, who was a vice provost and one of the four or five people who run stuff at EMU, was busted for masturbating while driving around naked in Dearborn with the doors and roof off of his Jeep. Classy. Anyway, there was teaching on either side of the few days we had off for the strike, and it was kind of a rough start of the term for me. I have said and written this elsewhere: it was like getting back on a bike after having not ridden one in a long time, in that I remembered how to do it, but I wasn’t quite confident about going too fast or turning too quickly or whatever. My students in my f2f class (first year writing) seemed to feel mostly the same way. Among other things, we watched Shakes the Clown.

A word about Covid here: by the end of the first month or so of the semester, and after a summer of travel that included a LOT of potentially infectious places like crowded museums, restaurants, planes, trains, and a cruise ship, I still hadn’t had Covid– or if I have had it, I never knew it (and that’s perhaps most likely). I’m not saying it is “over” or it’s nothing at all to worry about, and I’m fully vaxxed up (and I got a flu shot too). But for the most part, it feels like Covid is mostly over.

October was more work stuff with a trip up north in the middle of the month. It was both nice and not: “nice” because it’s always good to get away, we caught up with friends who live up there, saw some pretty leaves, had a Chubby Mary, etc., but “not” because the hot tub at the place we rented didn’t work (and look, that was the point of renting that place) and it was cold and rainy and even snowy. And as is so often the case in Michigan, it was stunningly beautiful weather for like 10 days after our trip, both up there and down here. Also, in a note of Covid not being over but us just not worrying about it a lot anymore: Halloween was back to full-on trick or treating– no delivery tubes, for example.

November started off with politics, and that turned out great in Michigan, pretty okay everywhere else. Yeah, the Republicans didn’t do as well as they should have, but they still control the House– well, they have more votes. I don’t think there’s going to be a lot of “control” in the next year or so. Lots of teaching stuff and work stuff, some pie making, and then to Iowa for the Krause Thanksgiving-Christmas get-together.

December and things got a little more interesting around here. I blogged some about ChatGPT and having my students in a class use GPT-3 for an assignment. That post got a lot of hits. If I wasn’t already kind of committed to working on the transcripts of the interviews of people teaching online during Covid, I might very well spend some time and effort on researching this stuff. It’s quite interesting, and given the completely unnecessary and goofy level of freak-out I’ve seen on social media about it, it’s also necessary work. Oh, and that Computers and Composition Online article finally came out. I’ll have to read some of the other articles in this issue, too. Then the semester was over and it was time for a trip to the in-laws, who moved into a smaller place. So new adventures for them, and for us too: we stayed at a pretty nice airbnb, actually rented a car, explored new restaurants and dressy dining rooms. And there was still a fair amount of damage from Hurricane Ian.

Well, that’s it– at least the stuff I’m willing to write down here.

AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is that my students and I spent the last month or so noodling around with the software and reading about both the potentials and dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and I thought it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And we still had to edit the different pieces together, write transitions between AI generated chunks of text, and so forth.

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool. These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true of Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”

I think this limitation (along with the fact that GPT-3 and ChatGPT are not capable of searching the internet) makes GPT-3 kind of a deal-breaker as a plagiarism tool in any kind of research writing class. It certainly would not get students far in most sections of freshman comp where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts by making writing assignments that are specific and that require an investment in things like prewriting and a series of assignments and activities that are “scaffolding” for a larger writing assignment. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Most poorly designed assigned-writing tasks read a lot like the prompts you enter into GPT-3. The results are definitely impressive, but I don’t think it’s quite useful enough to produce work a would-be cheater can pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000 word requirement, they both kind of read like… well, like a robot wrote them. I think that most instructors who received this essay from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not from any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and I then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing and it changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have only spent so much time experimenting with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” payment plan, I might learn more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

Higher Education Didn’t Cause the Rise of MAGA Conservatism and It is a Major Part of the Only Possible Solution

As a college professor who also follows politics fairly closely, I’ve been noticing a lot of commentaries about how universities are making the political divide in America worse. I think that’s ridiculous (and the tl;dr version of this post is college educated people are leaving the Republican party not because college “makes” people into Democrats, but because the party has gone crazy). I guess these ideas have been in the air for a couple years now, though it’s gotten a bit more intense lately.

The version of this most in my mind now is Will Bunch’s After the Ivory Tower Falls: How College Broke the American Dream and Blew Up Our Politics—and How to Fix It, which I finished listening to a little while ago. There’s a lot to unpack in that book about things he got right and wrong (IMO), and I completely agree with this review in The New York Times. But in broad terms, Bunch argues higher education is the primary cause of political division and the rise of “MAGA” conservatism in the United States. Universities perpetuate a rigged meritocracy, they’ve grown increasingly liberal (I guess), and they have become horrifically expensive, all of which puts college out of reach for a lot of the same working class/working poor people who show up at Trump rallies.

This kind of thing seems to be in the air nowadays. For example, there’s this recent article from New York magazine, “How the Diploma Divide Is Remaking American Politics” by Eric Levitz. There’s no question that there have been shifts in how education aligns with political parties. Levitz notes that Kennedy lost the college-educated vote by a two-to-one margin, while Biden lost the non-college-educated vote by a two-to-one margin. Levitz goes on to argue, with fairly convincing evidence, that higher education as an experience does tend to present people with similar ideas and concepts about things like science, art, ethics, and the like, and those tend to be the ideas and concepts embraced by people who identify as Democrats.

Or at least identify more as Democrats now– because as both Bunch and Levitz point out, college graduates were about equally split between the two parties until about 2004. In fact, as this 2015 article from the Pew Research Center discusses, more college graduates identified as Republicans between 1992 (where the data in that article begins) and 2004. And I’m old enough to vividly remember the presidential campaign between Al Gore and George W. Bush in 2000 and how one of the common complaints among undecided voters was Bush and Gore held the same positions on most of the major issues. How times have changed.

Anyway, U.S. universities did not tell state legislatures and voters during the Reagan administration to cut funding to what once were public universities; politicians and voters did that. Higher education did not tell corporate America that a bachelor’s degree should be the required credential to apply for an entry-level white collar position, even when there seems little need for that kind of credential. That standard was put in place by corporate America itself, and corporate America is led by the same people who said we shouldn’t support higher education with taxes. In other words, the systematic defunding of public higher education has been a double-whammy on poor people. The costs of college are putting it financially out of the reach of the kinds of students who could most benefit from a degree, and at the same time, it makes it easier for parents with plenty of money to send their kids (even the ones who did poorly in high school) to college so they can go on to a nice and secure white collar job.

I’m not saying that higher education isn’t a part of the problem. It is, and by definition, granting students credentials perpetuates a division between those who have a degree and those who do not. Universities have nothing to do with company policies that require salaried employees to have a bachelor’s degree in something, but universities are also very happy to admit all those students who have been told their entire lives that this is the only option they have.

But the main cause of the political division in this country? I’m not even sure if it’s in the top five. For starters– and Bunch acknowledges this– the lack of decent health care and insurance is at least as responsible for the divide between Americans as anything happening in higher education. A lot of Americans have student loan debt of course, but even more have crippling medical debt. Plus our still unfair and broken health care system enables/causes political division in “spin-off” ways like deaths and ruined lives from opioids and the Covid pandemic, both of which impact people who lack a college degree and who are poor at a higher rate. Plus the lack of access to both health care and higher education for so many poor people is both a symptom and a result of an even larger cause of political division in the U.S., which is the overall gap between rich and poor.

Then there are the changes in manufacturing in the U.S. A lot of good factory jobs that used to employ the people Bunch talks about– including white guys with just a high school diploma who voted for Obama twice and then Trump– moved to China, and/or disappeared because of technical innovations. One particular example from Bunch’s book is of a guy who switched from an Obama voter to a wildly enthusiastic MAGA Trump-type. Bunch wants to talk about how this voter became disillusioned with a Democratic party catering to educated and elite voters. That’s part of it, sure, but the fact that this guy used to work for a factory that made vinyl records and music CDs probably was a more significant factor in his life. I could go on, but you get the idea.

But again, I think these arguments that higher ed has caused political polarization because there are now more Democrats with college degrees than Republicans are backwards. The reason why there are fewer Republicans with college degrees now than there used to be is because the GOP, which has been moving steadily right since Bush II, has gone completely insane under Trump.

There have been numerous examples of what I’m talking about since around 2015 or so, but we don’t need to look any further than the current events of when I’m writing this post. Paul Pelosi, who is the husband of Nancy Pelosi, the Speaker of the House of Representatives, was violently attacked and nearly killed by a man who broke into the Pelosis’ San Francisco home. The intruder, who is clearly deranged in a variety of different ways, appears to have been inspired to commit this attack by a variety of conspiracy theories popular with the MAGA hardcore, including the idea that the election was fixed and that the leaders of the Democratic party in the US are intimately involved in an international child sex ring.

US Senate minority leader Mitch McConnell and House minority leader Kevin McCarthy condemned the attack after it happened on Friday, but just a few days later, Republicans started to make false claims about it. For example, one theory has it that the guy who attacked Paul Pelosi was actually a male prostitute and it was a deal gone wrong. Others said the story just “didn’t add up,” and used it as an example of how Democrats are soft on crime. Still other Republicans– including GOP candidate for governor in Arizona Kari Lake and current Virginia Governor Glenn Youngkin– made jokes on the campaign trail about what was a violent assault. And of course, Trump is fueling these wacko theories as well.

Now, I’m not saying that college graduates are “smarter” than those who don’t have college degrees, and most of us who are college graduates still have a relatively narrow amount of knowledge and expertise. But besides providing expertise that leads to professions– like being an engineer or a chemist or an elementary school teacher or a writer or whatever– higher education also provides students at least some sense of cultural norms (as Levitz argues) about things like “Democracy,” the value of science and expertise, ethics, history, and art, and it equips students with the basic critical thinking skills that allow people to be better able to spot the lies, cons, and deceptions that are at the heart of MAGA conservatism.

So right now, I think people who are registered Republicans (I’m not talking about independents who lean conservative– I’ll come back to that in a moment) basically fall into three categories. There are people who still proudly declare they are Republicans but who are also “never Trumpers,” though never Trumpers no longer have any candidates representing their views. Then there are those Republicans who actually believe all this stuff, and I think most of these people are white men (and their families) who have a high school diploma and who were working some kind of job (a factory making records, driving trucks, mining coal, etc.) that has been “taken away” from them. These people have a lot of anger and Trump taps into all that, stirs it up even more, and he enables the kind of conspiracy thinking and racism that makes people loyal not to the Republican party but to Trump as a charismatic leader. It’s essentially a cult, and the cult leaders are a whole lot more culpable than the followers they brainwashed.

Then there are Republicans who know all the conspiracies about the 2020 election and everything else are just bullshit but they just “go along with it,” maybe because they still agree with most of the conservative policies and/or maybe they’re just too attached to the party to leave. But at the same time, it’s hard to know what these people actually believe. Does Trump believe his own bullshit? Hard to say. How about Rudy Giuliani or Lindsey Graham or Kevin McCarthy? Sometimes, I think they know it’s all a con, and sometimes I don’t.

Either way, that’s why college grads aren’t joining the Republican party– and actually, why membership in the Republican party as a whole has gone down, even among people without a college degree. It certainly isn’t because people like me, Democrat-voting college professors, have “indoctrinated” college students or something. Hell, as many academic-types have said long before me, I can’t even get my students to routinely read the syllabus and complete assignments correctly; you think that I have the power to convince them that the Democrats are always right? I wish!

In other words, these would-be Republicans are not becoming Democrats; rather, they are contributing to the growing number of independent voters, though ones who tend to vote for Republican candidates. I’ve seen this shift in my extended family as my once-Republican in-laws and such talk about how they are no longer in the party. My more conservative relatives didn’t vote for Trump in 2020 and probably won’t in 2024 either, but that doesn’t mean they are going to vote for Biden.

One last thing: I’m not going to pretend to have the answer for how we get out of the political polarization that’s going on in this country, and I have no idea how we can possibly “un-brainwash” the hardcore MAGA and Qanon-types. I think these people are a lost cause, and I don’t think any of this division is going away as long as Trump is a factor. But there is no way we are ever going to get back to something that seems like “normal” without more education, and part of that means college.

On the Eve of a (Possible) Strike, Thinking Back on the Strike of 2006

We started classes here at EMU on Monday, August 29, and we might be halting them– at least all the ones taught by faculty– on Thursday, September 1, because that’s when the EMU-AAUP faculty union contract expires. Here’s a link to a story about all this on the Detroit NBC affiliate’s web site which kind of gets it right, but not quite.

I think the main sticking point right now is trying to figure out a way to give everyone a modest raise while also covering a steep increase in health insurance. That is not an easy problem to solve at all because there are so many variables in play. For example, our only son is turning 25 and thus just about done with being eligible for our insurance anyway, and both my wife and I are in the “senior faculty” category and thus a lot more secure and settled in our positions. So for me, a contract that pays 3-4% a year plus some money to offset the increase in insurance premiums is fine. But for someone without that level of seniority (and the pay raises that accompany it), or with many more dependents– especially if some of those children, spouses, or other insured family members have a condition that requires more elaborate (and expensive) insurance– the deal the EMU administration is proposing, even as they characterize it as an “up to 8% raise for most faculty,” really could be a pay cut for a lot of folks.

Anyway, I was thinking about some of that on my first day of teaching Tuesday and as I explained to my students that I might be on strike on Thursday, and I realized that the last time the EMU faculty went on strike was way back in the fall of 2006. This was before things like Facebook or Twitter were much of a thing, and I spent most of the energy I now spend on social media just on blogging here. And back during the strike, I blogged about it A LOT.

I don’t even know how many posts I wrote about all this and labeled The Strike of 2006— maybe 40? Maybe more? The chronology is a bit wonky here, so the “beginning” (back in August 2006) starts on the bottom of page 5 of this archive. It’s not worth rehashing all of it, but there are some interesting things. Once again, healthcare costs were the sticking point, which also once again reminds me that if we had a version of the kind of universal/government run health care program that’s available in most of the other countries in the world, or if we could just extend Medicare to everyone and not just people over 65, we probably would not have gone on strike back then, and we certainly wouldn’t go on strike now. But I digress.

More problematically perhaps, the other similarity between then and now seems to be the approach to negotiations taken by the administration. They have once again hired Dykema’s James P. Greene, who was known around EMU as a “union busting” lawyer even before the 2006 strike. I believe he was the administration’s main negotiator before 2006 as well (I recall being on strike a couple times before 2006 when Greene was in charge), and that ended up being the ugliest strike in my time here. Back in 2006, there were complaints from both sides of the table similar to what we have now: a lack of willingness to actually negotiate, a lot of sketchy numbers being presented (mostly by the administration), a lot of “we almost have a deal” until we don’t, mediators, etc.

Hopefully, things will not turn as ugly as they did in 2006. For example, after being out on strike for four days, EMU (from then-president John Fallon and BoR chair Karen Valvo) issued an ultimatum demanding (basically) that the faculty give up their childish strike and accept the administration’s terms by 10 PM on September 6 “or else.” Here’s my blog post about that, and (thanks to the Wayback Machine) here’s the administration’s original press release on all this. Well, that move (IMO) backfired on the administration badly. Before that, a lot of faculty– including me– were starting to say to each other that maybe it’d be best to settle and get on with the school year. But that threat really pissed people off, and (a long story made much shorter) we ended up staying out on strike for about two weeks, we “suspended” the strike and went back to work while the university and the union went through a “fact finding” and arbitration process that didn’t get resolved until the following spring. We actually ended up with a deal that was closer to what the faculty was originally asking for, but like I said, I’d just as soon avoid that.

One other difference I’m noticing, at least in myself: I think the union/faculty is even more in the right this time around. As I wrote here way back when, I thought both sides of the table were playing pretty “fast and loose” with some of the facts in the name of a pissing contest that they both hoped to win. There’s still some of that going on, no question. But this time, I think the administration is the one prolonging things.

I guess we’ll see what the next 24 or so hours bring. Hopefully we’ll have a deal, because a strike is not a “win” for anyone– not for our students, of course, but not for the administration or the faculty either. And hopefully, the administration recalls what happened the last time they tried these tough guy bullshit tactics.

A few big picture takeaways from my research about teaching online during Covid

Despite not posting here all summer, I’ve been busy. I’ve been on what is called at EMU a “Faculty Research Fellowship” since January 2022, writing and researching about teaching online during Covid. These FRFs are one of the nicer perks here. FRFs are competitive awards for faculty to get released from teaching (but not service obligations), and faculty can apply every two years. Since I’m not on any committees right now, it was pretty much the same thing as a sabbatical: I had to go to some meetings and I was working with a grad student on her MA project as well, but for the most part, my time was my own. Annette also had an FRF at the same time.

I’ve had these FRFs before, but I’ve never gotten as much research stuff done as I did on this one. Oh sure, there was some vacationing and travel, usually also involving some work– anyone who follows me on Facebook or Instagram is probably already aware of that– but I’m happy with what I managed to get done. Among other things:

  • I conducted 37 interviews with folks who took my original survey about online teaching during Covid and agreed to talk. Altogether, it’s probably close to 50 hours worth of recordings and maybe 1000 pages of transcript– more on that later.
  • I “gave presentations” at the CCCCs and at the Computers and Writing conference. I use the scare quotes because both were online and “on demand” presentations, which is to say not even close to the way I would have run an online conference (not that anyone asked). On the plus side, both presentations were essentially pre-writing activities for other things, and both also count enough at EMU to justify my keeping a 3-3 teaching load.
  • Plus I have an article coming out about all this work in Computers and Composition Online. It is/will be called “The Role of Previous Online Teaching Experience During the Covid Pandemic: An Exploratory Study of Faculty Perceptions and Approaches” (which should give you a sense about what it’s about), and hopefully it will be a “live” article/ website/ publication in the next month or two.

The next steps are going to involve reviewing the transcriptions (made quite a bit easier than they used to be by a website/software called Otter.ai) and coding everything to see what I’ve got. I’m not quite sure what I mean by “code” yet– whether it is going to be something systematic that follows the advice in various manuals/textbooks about coding and analyzing qualitative data, or something closer to what I did with the interviews I conducted for the MOOC book, where my methodology could probably best be described as journalism. Either way, I have a feeling that’s a project that is going to keep me busy for a couple of years.
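For what it’s worth, the first step of the “systematic” version of coding usually boils down to something simple enough to sketch in a few lines. This is purely a hypothetical illustration– the code labels and interview names here are invented, not from the actual study, and it doesn’t describe any particular tool:

```python
from collections import Counter

# Hypothetical sketch: once each transcript has been hand-tagged with
# thematic codes, tallying code frequencies across interviews is a
# common first step in systematic qualitative analysis.
coded_interviews = {
    "interview_01": ["synchronous", "zoom_fatigue", "institutional_pressure"],
    "interview_02": ["asynchronous", "prior_experience"],
    "interview_03": ["synchronous", "institutional_pressure"],
}

# Count how often each code appears across all interviews.
tally = Counter(code for codes in coded_interviews.values() for code in codes)
print(tally.most_common(2))  # the two most frequent codes
```

The journalism-style alternative skips the tallying and goes straight to quoting and paraphrasing; the tradeoff is rigor and comparability versus narrative richness.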

But as I reflect on the end of my research fellowship time and also as I gear up for actually teaching again this fall, I thought I’d write a bit about some of the big picture take-aways I have from all of those interviews so far.

First off, I still think it’s weird that so many people taught online synchronously during Covid. I’ve blogged here and written in other places before about how this didn’t make sense to me when we started the “natural experiment” of online teaching during Covid, and after a lot of research and interviews, it still doesn’t make sense to me.

I’m not saying that synchronous teaching with Zoom and similar tools didn’t work, though I think one pattern that will emerge when I dig more into the interviews is that faculty who taught synchronously and who also used other tools besides just Zoom (like they included asynch activities, they also used Zoom features like the chat window or breakout rooms, etc.) had better experiences than those who just used Zoom to lecture. It’s also clear that the distinction between asynchronous and synchronous online teaching was fuzzy. Still, given that 85% or so of all online courses in US higher ed prior to Covid were taught only asynchronously, it is still weird to me that so many people new to teaching online knowingly (or, more likely, unknowingly) decided to take an approach that was (and still is) at odds with what’s considered the standard and “best practice” in distance education.

Second, and very broadly speaking, I think faculty who elected to teach online synchronously during Covid did so for some combination of three reasons, listed here in rough descending order.

  • Most of the people who responded to my survey who taught online synchronously said their institution gave faculty a number of different options in terms of mode of teaching (f2f, hybrid, synch, asynch, etc.), and that seems to have been true generally speaking across the board in higher ed. But a lot of institutions– especially ones that focus on the traditional undergraduate college experience for 18-22 year olds and that offered few online courses before Covid– encouraged (and in some cases required) their faculty to teach synchronously. And a lot of faculty I interviewed did say that the synchronous experience was indeed a “better than nothing” substitute for these students for what they couldn’t do on campus.

(It’s worth noting that I think this was striking to me in part because I’ve spent my career as a professor at a university where at least half of our students commute some distance to get to campus, are attending part-time, are returning adult students, etc. Institutions like mine have been teaching a significant percentage of classes online for quite a while.)

  • They thought it’d be the easiest way to make the transition to teaching online. I think Sorel Reisman nailed it in his IEEE article when he said: “Teachers can essentially keep doing their quasi-Socratic, one-to-many lecture teaching the way they always have. In a nutshell, Zoom is the lazy person’s way to teach online.” Reisman is okay with this because, even though it is far from the approach he would prefer, it is at least getting instructors to engage with the technology. I don’t agree with him about that, but it’s hard to deny that he’s right about how Zoom enabled the far too popular (and largely ineffective) sage-on-the-stage lecture hall experience.
  • But I think the most common reason why faculty decided to teach online synchronously is that it didn’t even occur to them that the medium of delivery for a class would make any difference. In other words, it’s not so much that they decided to teach synchronously because they were encouraged to do so, or even because they thought redesigning their courses to teach online asynchronously would be too much work. Rather, I think most faculty who had no previous experience teaching online didn’t think about the method/medium of delivery at all and just delivered the same content (and activities) they always had before.

Maybe I’m splitting hairs here and these are all three sides (!) of the same coin; then again, maybe not. I read a column by Ezra Klein recently with the headline “I Didn’t Want It to Be True, but the Medium Really Is the Message.” He is not talking about online teaching at all but rather about the media landscape as it has been evolving and how his “love affair” with the internet and social media has faded in that time. Klein is a smart guy and I usually agree with and admire his columns, but this one kind of puzzles me. He writes about how he had been reading Nicholas Carr’s 2010 book The Shallows: What the Internet is Doing to Our Brains, and how he seems to have only now just discovered Marshall McLuhan, Walter Ong, and Neil Postman, and how they all wrote about the importance of the medium that carries messages and content. For example:

We’ve been told — and taught — that mediums are neutral and content is king. You can’t say anything about television. The question is whether you’re watching “The Kardashians” or “The Sopranos,” “Sesame Street” or “Paw Patrol.” To say you read books is to say nothing at all: Are you imbibing potboilers or histories of 18th-century Europe? Twitter is just the new town square; if your feed is a hellscape of infighting and outrage, it’s on you to curate your experience more tightly.

There is truth to this, of course. But there is less truth to it than to the opposite. McLuhan’s view is that mediums matter more than content; it’s the common rules that govern all creation and consumption across a medium that change people and society. Oral culture teaches us to think one way, written culture another. Television turned everything into entertainment, and social media taught us to think with the crowd.

Now, I will admit that since I studied rhetoric, I’m quite familiar with McLuhan and Ong (less so with Postman), and the concept that the medium (aka “rhetorical situation”) does indeed matter a lot is not exactly new. But, I don’t know, have normal people really been told and taught that mediums are neutral? That all that matters is the content? Really? It seems like such a strange and obvious oversight to me. Then again, maybe not.

Third, the main challenge and surprise for most faculty new to online teaching (and also for faculty not so new to it) is the preparation. I mean this in at least two ways. First off, the hardest part for me about teaching online has always been shifting material and experiences from the synchronous f2f setting to the asynchronous online one. It’s a lot easier for me to respond to student questions in real time when we’re all sitting in the same room, because I can “read the room.” Students who are confused and who have questions rarely say (f2f or online) “I’m confused and I have some questions here,” but I can usually figure out the issues when I’m f2f. In online courses– certainly in the asynch ones, but I think this was also mostly true for synch ones as well– it’s impossible to adjust in the moment like that. This is why advance/up-front preparation is so much more important for online courses. As an instructor, I have to explain things and set things up ahead of time to anticipate likely questions and points of confusion. That’s hard to do when you haven’t taught something previously, and it’s impossible to do without a fair amount of preparation.

Which leads to my second point: a lot of faculty, especially in fields like English and other disciplines in the humanities, don’t do as much ahead of time preparation to teach as they probably should. Rather, a lot of faculty I interviewed and a lot of faculty I know essentially have the pedagogical approach of structured improvisation, sometimes to the point of just “winging it.”

This can work out great f2f. I’m thinking of the kind of improvisation accomplished musicians use to interpret a song on the fly (and more than one of the people I interviewed about teaching online for the first time used an analogy like this). A lot of instructors are very good performers in f2f class settings because they are especially good lecturers, they’re especially good at building interpersonal relationships with their students, and they’re especially charismatic people. They’re prepared ahead of time, sure, and chances are they’ve done similar performances in f2f classes for a while. These are the kind of instructors who really feed off of the energy of live and in-person students. These are also the kind of instructors who, based again mostly on some of the interviews, were most unhappy about teaching online.

But this simply does not work AT ALL online, and I think it is only marginally more possible to take this approach to teaching with Zoom. If the ideal performance of an instructor in a f2f class is like jazz musicians, stand-up comedians, or a similar kind of stage performer, an online class instructor’s ideal performance has to be more like what the final product of a well-produced movie or TV show looks like: practiced, scripted, performed, and edited, and then ultimately recorded and made available for on-demand streaming.

And let’s be clear: a lot of faculty (myself included) are not at their best when they try the structured improvisation/winging it approach in f2f classrooms. I’ve done many, many teaching observations over the years, and I am here to tell you that there are a lot of instructors who think they are good at this kind of performance who aren’t. I know I’m not as good of a teacher when I try this, and I think that’s something that became clear to me when I started teaching some of my classes online (asynchronously, of course) about 15 or so years ago. So for me, I think my online teaching practices and preparations do more to shape my f2f practices and preparations than the other way around.

In any event, the FRF semester and summer are about over and the fall semester is about here. We start at EMU on Monday, and I am teaching one class f2f for the first time since Winter 2020. Here’s hoping I remember where to stand.

A lot of what Leonhardt said in ‘Not Good for Learning’ is just wrong

I usually agree with David Leonhardt’s analysis in his New York Times newsletter “The Morning” because I think he does a good job of pointing out how both the left and the right have certain beliefs about issues– Covid in particular for the last couple years, of course– that are sometimes at odds with the evidence. But I have to say that this morning’s newsletter and the section “Not Good For Learning” ticks me off.

While just about every K-12 school went online when Covid first hit in spring 2020, a lot of schools/districts resumed in-person classes in fall 2020, and a lot did not. Leonhardt said:

These differences created a huge experiment, testing how well remote learning worked during the pandemic. Academic researchers have since been studying the subject, and they have come to a consistent conclusion: Remote learning was a failure.

Now, perhaps I’m overreacting to this passage because of my research about teaching online at the college-level, but the key issue here is he’s talking about K-12 schools that had never done anything close to online/remote instruction ever before. He is not talking about post-secondary education at all, which is where the bulk of remote learning has worked just fine for 125+ years. Maybe that’s a distinction that most readers will understand anyway, but I kind of doubt it, and not bringing that up at all is inaccurate and just sloppy.

Obviously, remote learning in the vast majority of K-12 schools went poorly during Covid, and in completely predictable ways. Few of these teachers had any experience or training to teach online, and few of these school districts had the kinds of technologies and tools (like Canvas and Blackboard and other LMSes) to support these courses. This has been a challenge at the college level too, but a lot more college teachers at various levels and various types of institutions had at least some pre-Covid experience teaching online, most colleges and universities have more tech support, and a lot (most?) college teachers were already making use of an LMS and a lot more electronic tools for essays and tests (as opposed to paper) in their classes.

The students are also obviously different. When students in college take classes online, it’s a given that they will have the basic technology of a laptop and easy access to the internet. It’s also fairly clear from the research (and I’ve seen this in my own experiences teaching online) that the students who do best in these formats are more mature and more self-disciplined. Prior to Covid, online courses were primarily for “non-traditional” students who were typically older, out in the workforce, and with responsibilities like caring for children or others, paying a mortgage, and so forth. These students, who are typically juniors/seniors or grad students, have been going to college for a while, they understand the expectations of a college class, and (at least the students who are most successful) have what I guess I’d describe as the “adulting” skills to succeed in the format. I didn’t have a lot of first and second year students in online classes before Covid, but a lot of the ones I did have during the pandemic really struggled with these things. Oh sure, I did have some unusually mature and “together” first year students who did just fine, but a lot of the students we have at EMU at this level started college underprepared for the expectations, and adding on the additional challenge of the online format was too much.

So it is not even a teeny-weeny surprise that a lot of teenagers/secondary students– many of whom were struggling to learn and succeed in traditional classrooms– did not succeed in hastily thrown together and poorly supported online courses, and do not even get me started on the idea of grade school kids being forced to sit through hours of Zoom calls. I mean honestly, I think these students probably would have done better if teachers had just sent home worksheets and workbooks and other materials to the kids and the parents to study on their own.

I think a different (and perhaps more accurate) way to study the effectiveness of remote learning would be to look at what some K-12 schools were doing before Covid. Lots and lots of kids and their parents use synch and asynch technology to supplement home schooling, and programs like the Michigan Online School have been around for a while now. Obviously, home schooling or online schooling is not right for everyone, but these programs are also not “failures.”

Leonhardt goes on to argue that schools serving poor students and/or non-white students went remote for longer than other schools, and he makes two claims about this:

Why? Many of these schools are in major cities, which tend to be run by Democratic officials, and Republicans were generally quicker to reopen schools. High-poverty schools are also more likely to have unionized teachers, and some unions lobbied for remote schooling.

Second, low-income students tended to fare even worse when schools went remote. They may not have had reliable internet access, a quiet room in which to work or a parent who could take time off from work to help solve problems.

First off, what Leonhardt seems to forget is that Covid was most serious in “the major cities” in this country, and also among populations that were non-white and poor. So of course school closings were more frequent in these areas.

Second, while it is quite easy to complain about the teacher unions, let us all remember it was not nearly as clear in Fall 2020 as Leonhardt is implying that the risks of Covid in the schools were small. It did turn out that those settings weren’t as risky as we thought, but at the same time, that “not as risky” analysis primarily applies to students. A lot of teachers got sick and a few died. I wrote about some of this back in February here. I get the idea that most people who were demanding their K-12 schools open immediately only had their kids in mind (though a lot of these parents were also the same ones adamant against mask and vaccine mandates), and if I had a kid still in school, I might feel the same way. But most people (and I’d put Leonhardt in this camp in this article) didn’t think for a second about the employees, and at the end of the day, working in a public school setting is not like being in the ministry or some other job where we expect people to make huge personal sacrifices for others. Being a teacher is a white collar job. Teachers love to teach, sure, but we shouldn’t expect them to put their own health and lives at any level of risk–even if it’s small– just because a lot of parents haven’t sorted out their childcare situations.

Third, the idea that low-income students fared worse in remote classes (and I agree, they certainly did) is bad, but that has nothing to do with why they spent more time online in the first place. That just doesn’t make sense.

Leonhardt goes on:

In places where schools reopened that summer and fall, the spread of Covid was not noticeably worse than in places where schools remained closed. Schools also reopened in parts of Europe without seeming to spark outbreaks.

I wrote about this back in February: these schools didn’t reopen because they never closed! They tried the best they could and often failed, but as far as I can tell, no K-12 school in this country, public or private, just closed and told folks “we’ll reopen after Covid is over.” Second, most of the public schools (and universities as well) that went back to at least some f2f instruction in Fall 2020 were in parts of the country where being outside and/or leaving classroom windows open is a lot easier than in Michigan, and/or they had the resources to do things like create smaller classes for social distancing, install ventilation equipment, and so forth.

Third– and I cannot believe Leonhardt doesn’t mention this because I know this is an issue he has written about in the past– the comparison to what went on with schools in Europe is completely bogus. In places like Germany and France, they put a much much higher priority on opening schools– especially as compared to things like restaurants and bars and other places where Covid likes to spread. So they kept those kinds of places closed longer so the chances of a Covid outbreak in the schools was smaller. Plus Europeans are much MUCH smarter about things like mask and vaccine mandates too.

No, the pandemic was not good for learning, but it was not good for anything else, either. It wasn’t good for our work/life balances, our mental health, a lot of our household incomes, on and on and on. We have all suffered mightily for it, and I am certain that as educators of all stripes study and reflect on the last couple of years, we’ll all learn a lot about what worked and what didn’t. But after two years of trying their fucking best to do the right things, there is no reason to throw K-12 teachers under the bus now.

Country White Bread Made with Poolish

The other day, I baked some bread that turned out exceptionally well, and I posted a couple of pictures on Instagram (which also showed up on Facebook).

My friend Lisa asked about a recipe, and since I haven’t written/blogged about anything like that for a while, I thought I would procrastinate a bit (okay, procrastinate a lot) and write this.

Back in 2017, I wrote in some detail about my bread making ways as directed/guided by Ken Forkish’s excellent book Flour Water Salt Yeast. Sure, I have read other things about baking bread and have followed other recipes, but this is what I always go back to. It’s an extraordinarily detailed and well-written book, and considering the fact that the recipes in this book are all just variations of the same ingredients (thus the title) with slightly different techniques, I think that’s quite the accomplishment. And apparently, he has a new book coming out too.

I had been making mostly natural levain (aka sourdough) breads the last two or three years, but besides taking a few days to revive the starter and proofing, my results lately have been inconsistent and not great. Maybe I need to make some new starter. So I went back to Forkish’s book and gave the poolish recipe another try.

First things first (and this is stuff I kind of cover in the post from a few years ago):

  • This recipe makes two French “boule” style loaves of bread: round, ball-shaped loaves that are very crusty and the sort of thing that’s great for hearty sandwiches, toast, or just eating by the slice when it’s still warm. It’s not like a baguette (though you can use this dough to make baguettes, but that’s a different thing), and definitely not like soft sliced grocery store bread.
  • This isn’t rocket science, and if you follow the recipe closely, it will probably turn out well even if you don’t do a lot of baking. There are a lot of details here both because I had a lot of procrastinating to do, and also because I wanted to describe the steps in as much detail as possible. That said, this does take a bit of practice and your results might not be that great right out of the gate. Just keep trying.
  • The measurements matter, both in terms of ingredients but also in terms of temperatures and time. I can never get it perfect (the original recipe calls for .4 grams of yeast for the poolish, for example), but you want to get as close as you can and actually measure things. And as a tangent: that’s basically the difference between “cooking” and “baking,” as far as I can tell.
  • This does require some special equipment.
    • At a minimum, you need a kitchen scale and at least one four or five quart cast iron Dutch oven that can go into the oven at 475 degrees– so not one with a plastic knob on the top. I think the kitchen scale I’ve got cost me $10 or $20 and I use it all the time, so a very worthwhile investment. I have a fancy enameled Dutch oven I use for stews and soups and stuff, but for baking bread, I use the much less expensive, cast iron models you can get for around $50 at a hardware store (and those work just as well for stews and soups and stuff as well). Everyone who cooks regularly should have both of these things anyway. I bake bread at least once a month (and usually more), so I have two of the cast iron Dutch ovens– and as you will see with the steps below, if you bake a lot, using two instead of just one Dutch oven helps speed things up A LOT.
    • It’s helpful to have a couple of large food storage containers, too; here’s a link to what I’ve got on amazon, though I bought mine at the local Gordon Food Service store. You can just use a couple of really big bowls and some plastic wrap to cover them, but besides being great for baking, these containers are also useful for things like brining a chicken or a turkey.
    • While not essential (and probably not something you want to spend money on unless you plan to regularly bake bread like this), a couple of wicker proofing baskets. Besides helping to create the cool texture of the finished bread, they also allow the dough to proof properly– and it’s what professional bakers use. Here’s a link to the kind I have (also on amazon); I’d recommend just getting the baskets and none of the other baking doodads like a “lame” (which is a French knife used to score the bread– I just use a razor blade or a sharp knife) or weird pattern molds or anything else.
    • Finally (and also all stuff in the category of things you probably already have if you cook at all regularly): a bowl large enough to hold all the ingredients (or the large food storage containers), two medium-sized bowls lined with clean tea towels for proofing each loaf (or the proofing baskets), a dough knife/board scraper, a razor blade or very sharp knife, an instant read thermometer to measure the water temperature, some very heavy-duty oven mitts or grill gloves (which is what I use) to handle the smoking hot Dutch ovens, and a cooling rack for the finished bread. Oh, also: two plastic shopping bags, or a couple of small plastic garbage bags.

Okay, with all that out of the way:

Ingredients:

For the poolish:

  • 450 grams white flour
  • 50 grams whole wheat flour
  • 1/8th teaspoon of instant dried yeast
  • 500 grams of water (a bit warm, at about 80 degrees or so)

For the final dough:

  • 450 grams white flour
  • 50 grams whole wheat flour
  • 3/4 teaspoon of instant dried yeast
  • 1 tablespoon plus 1 teaspoon salt
  • 250 grams of water (quite warm, at about 105 degrees)

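A side note for anyone who likes to scale recipes up or down: the quantities above work out to classic baker’s math, where every ingredient is expressed as a percentage of the total flour weight. This little sketch isn’t part of the recipe– it just crunches the numbers from the ingredient lists above:

```python
# Baker's math for the recipe above: total flour across the poolish
# and the final dough, and hydration (water as a % of flour weight).

FLOUR = 450 + 50 + 450 + 50   # grams: white + whole wheat, both stages
WATER = 500 + 250             # grams: poolish water + final dough water

def bakers_percent(grams, total_flour=FLOUR):
    """Express an ingredient weight as a percentage of total flour."""
    return 100 * grams / total_flour

print(f"Total flour: {FLOUR} g")
print(f"Total water: {WATER} g")
print(f"Hydration:   {bakers_percent(WATER):.0f}%")  # prints 75%
```

To scale the recipe, hold those percentages steady: pick a new total flour weight and multiply everything else by the same ratio.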
Steps:

  • You want to start with high quality flour. My go-to is King Arthur, though I also sometimes splurge on some kind of artisanal, stone-ground, small batch flours once in a while too. You can also make this with all white flour or try adding more wheat or maybe a little rye flour, but be careful about using too high a ratio of non-white flour, because it can throw things off in terms of the amounts of water, yeast, time, etc.
  • At about 6 pm the evening before you plan to finish and bake the bread, make the poolish. In a 6 quart tub (or a very large bowl), whisk together the flours and the yeast, and then mix in 500 grams (by weight, though volume is the same) of somewhat warm (80 degrees) water. Mix thoroughly so there are no pockets of dry flour left at all. Snap on the lid or cover snugly in plastic wrap, and leave it out on the kitchen counter overnight.
  • At about 8 am the next morning, start to make the final dough. You have a little bit of “wiggle room” on when to start this step– a bit earlier, a bit later, etc.– but you don’t want to start much earlier than 12 hours after you started making the poolish, and not much later than about 14 hours.
  • In another larger bowl (or a 12 quart tub), whisk together the final dough flour, yeast, and salt until well-combined.
  • Measure out 250 grams of very warm/bordering on hot water, around 105 degrees. Uncover your poolish, which by now should be quite bubbly and tripled in size. Carefully pour the water around the edges of the poolish to loosen it from the container, and then pour the whole thing into the larger container where you mixed the other dry ingredients.
  • Mix this dough thoroughly. Now, Forkish goes into surprising detail about “the best” method for doing this by hand with large pinching motions, but I honestly don’t usually want to get my hands that goopy with the dough. So I just use a big metal spoon, which keeps my hands a bit cleaner and gets all the dry flour bits out of the corners of the container. Mix until there are no dry parts left and cover it back up.
  • This first proofing/resting lasts about 2 hours, though you do need to fold the dough at least twice. Again, Forkish goes into a lot of detail about what “folding” means, but what I do is lightly flour my hands and then scoop underneath the dough, folding it back over onto itself. I go all around the tub so that I’m folding/turning over the whole mess of dough so what was on the bottom is on the top. I try to do this the first time after it’s been proofing/resting for about 30-45 minutes and then the second time about another 45 minutes later. After those 2 hours or so, the dough should be more than doubled in size.
  • Next, it’s time to make the loaves. You’ll want to start this at about 10 or 10:30 am; again, there’s some wiggle room here, but it should be ready in about 2 hours and you don’t want to wait longer than 3 hours. You’ll need about 2 feet of cleared off and squeaky-clean counter space to deal with the dough; once you have that, spread a light dusting of flour onto the counter. If you don’t have wicker bread baskets, you’ll need two bowls that are each about 8 or 9 inches wide and a couple of clean tea towels. Set up your bowls/baskets first by liberally flouring the inside of them. This helps the dough to not stick, and it also gives that cool color/texture to the finished bread. Set the bowls/baskets nearby.
  • Take the lid off of the now proofed dough, flour your hands, and dump the dough out of the container and on to the floured work surface. You don’t want to add too much more flour to the dough, but you also don’t want to make it into loaves while it’s sticky. So what I tend to do is flatten the dough out into roughly a rectangle shape, add a little more flour to the top of the dough, flip it all over, and flatten it out again. You don’t really have to knead the dough much, but you do want to work it so you squeeze out some of the bigger air bubbles that will have developed.
  • Using a dough knife/bench scraper, divide the flattened out dough in half. You don’t need to obsess over it or anything, but you want to shoot for more or less equal halves. Bring the corners of each half of dough up together and form the dough into a tight, smooth ball. Put the rougher/seam side of the ball in the bottom of the basket/bowl.
  • Put each basket/bowl inside a large plastic bag, making sure that the opening of the bag is bunched up/closed at the bottom. The best thing for this is the sort of plastic shopping bag you get from the drugstore or grocery store, though an (obviously clean and never used) small garbage bag works as well. These loaves will be ready for baking in about an hour.
  • Right after you bag up your bread for the final proof, put your Dutch oven(s) on the middle rack of the oven and pre-heat it to 475 degrees. You want to have the lids on too because you are preheating both the larger oven and the smaller, baking Dutch oven(s).
  • If you only have one Dutch oven, you’ll have to bake in stages. So after about 40 minutes of the oven pre-heating and the loaves sitting out on the counter for their final rise, put one of your proofing loaves into the refrigerator, still contained in that plastic bag. You’ll take it out of the fridge again after the first loaf bakes. Of course, if you have two Dutch ovens, you can bake both loaves at the same time.
  • Either way, about an hour to 90 minutes after you divided the bread up into two loaves and after the oven has been preheating with one or two Dutch ovens for at least 30 minutes and after it is indeed at 475, you’re ready to bake. This step moves kind of quickly and can be a little nerve-racking because the dough can be a little tricky to handle, and of course, the pots you’re going to cook this in are dangerously hot. But here’s what I do:
    • Put on those grill gloves or heavy-duty oven mitts, take the Dutch oven(s) out of the oven, place them on top of the stove, and remove the lids. Take off the grill gloves.
    • Turning to the bread, take the loaves out of their plastic bags and carefully invert the dough onto the floured counter. Using either a single razor blade or a very sharp knife, make a few scoring cuts on the top of each loaf. You can get super fancy with this or you can skip this step entirely, but I like to make two or three gashes in the top because it helps release some steam and it looks cool at the end.
    • With floured and otherwise bare hands, carefully scoop under the dough to pick up the entire loaf and then gently lower it into the waiting and ripping hot Dutch oven. Now, three important things to note. First, the dough at this point can be kind of tricky to pick up; it’s sort of like handling a two-pound blob of jello, so you kind of have to get your fingers under the loaf and cup it with your hands. Second, that pot is super-duper hot, so be careful to lower the dough into the pot while not touching the pot with your bare hands! Third, don’t worry too much if the dough ends up being kind of uneven or whatever when you put it into the Dutch oven(s), because as long as it is proofed properly, it will still bake fine.
    • Put those grill gloves or oven mitts back on, put on the lid(s), and put the Dutch oven(s) back into the oven at 475. Don’t peek! Keeping the Dutch oven(s) closed for this first 30 minutes is key to a crunchy crust, and it’s also what enables the “oven spring” that causes the bread to rise further and– unless you really fumble getting the bread into the Dutch oven(s) (it happens)– round out the shape of your loaf.
  • Bake for 30 minutes– again, no looking and no opening the oven, either.
  • After 30 minutes, get out those grill gloves/oven mitts again, open the oven, take off the lids and briefly admire your now lovely but not quite browned bread, and close up the oven again. Set up a cooling rack on the counter.
  • Reduce the heat to 450 and continue baking for about 30 more minutes without the lids, checking it again after about 20 minutes to make sure it’s not getting too dark on top. How dark (burnt?) is too dark/too much is probably a matter of personal tastes, but I’d encourage you to let it get really dark brown even to the point of a few burnt-looking spots for the best crusty flavor. If it looks like it is getting just too dark too quickly, you can always turn the oven off and let the bread continue to bake, or, after about 20 minutes, take the Dutch oven(s) out of the oven and leave it on top of the stove to bake through for another 10 minutes.
  • For one last time, put on those grill gloves/oven mitts and tip your now complete bread onto the cooling rack. The best (and most satisfying) sign that you have succeeded in making a lovely and crusty bread is the cracking sound it makes as it cools.
  • Leave the bread alone at least an hour before you cut into it! This is a “discussion” I have all the time with my wife, who always wants to cut immediately into the steaming hot bread. I understand that, but the bread is still basically baking as it cools, and if you cut into it too early and while it’s still really hot, you’ll release a ton of heat and steam and the inside of the bread (the “crumb”) will be more sticky than ideal. It’s hard to resist, but it’s worth it.
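If it helps to see the whole two-day process at a glance, here’s the rough timeline the steps above describe, sketched as a little script. The clock times are just the ones I use– everything has the wiggle room noted in the steps:

```python
from datetime import datetime, timedelta

# Rough timeline for the steps above; every time has wiggle room.
start = datetime(2024, 1, 1, 18, 0)  # day one, ~6 pm: mix the poolish
milestones = [
    (timedelta(hours=0), "Mix the poolish, cover, leave on the counter"),
    (timedelta(hours=14), "Mix the final dough (12-14 hours after the poolish)"),
    (timedelta(hours=14, minutes=40), "First fold"),
    (timedelta(hours=15, minutes=25), "Second fold"),
    (timedelta(hours=16, minutes=30), "Shape the loaves; start preheating to 475"),
    (timedelta(hours=17, minutes=45), "Bake, lids on, 30 minutes"),
    (timedelta(hours=18, minutes=15), "Lids off, reduce to 450, ~30 more minutes"),
    (timedelta(hours=18, minutes=45), "Cool on a rack for at least an hour"),
]
for offset, step in milestones:
    print(f"{start + offset:%a %I:%M %p} - {step}")
```

So roughly: poolish at 6 pm, final dough at 8 am the next day, shaping around 10:30 am, and bread cooling on the rack by early afternoon.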