My Talk About AI at Hope College (or why I still post things on a blog)

I gave a talk at Hope College last week about AI. Here’s a link to my slides, which also has all my notes and links. Right after I got invited to do this in January, I made it clear that I am far from an expert with AI. I’m just someone who had an AI writing assignment last fall (which was mostly based on previous teaching experiments by others), who has done a lot of reading and talking about it on Facebook/Twitter, and who blogged about it in December. So as I promised then, my angle was to stay in my lane and focus on how AI might impact the teaching of writing.

I think the talk went reasonably well. Over the last few months, I’ve watched parts of a couple of different ChatGPT/AI presentations via Zoom or as previously recorded, and my own take-away from them all has been a mix of “yep, I know that and I agree with you” and “oh, I didn’t know that, that’s cool.” That’s what this felt like to me: I talked about a lot of things that most of the folks attending knew about and agreed with, along with a few things that were new to them. And vice versa: I learned a lot too. It probably would have been a little more contentious had this taken place back when the freakout over ChatGPT was in full force. Maybe there still are some folks there who are freaked out by AI and cheating who didn’t show up. Instead, most of the people there had played around with the software and realized that it’s not quite the “cheating machine” being overhyped in the media. So it was a good conversation.

But that’s not really what I wanted to write about right now. Rather, I just wanted to point out that this is why I continue to post here, on a blog/this site, which I have maintained now for almost 20 years. Every once in a while, something I post “lands,” so to speak.

So for example: I posted about teaching a writing assignment involving AI at about the same time the MSM was freaking out about ChatGPT. Some folks at Hope read that post (which has now been viewed over 3000 times), and they invited me to give this talk. Back in fall 2020, I blogged about how weird I thought it was that all of these people were going to teach online synchronously over Zoom. Someone involved with the Media & Learning Association, a European/Belgian organization, read it, invited me to write a short article based on that post, and also invited me to be on a Zoom panel that was part of a conference they were having. And of course all of this was the beginning of the research and writing I’ve been doing about teaching online during Covid.

Back in April 2020, I wrote a post “No One Should Fail a Class Because of a Fucking Pandemic;” so far, it’s gotten over 10,000 views, it’s been quoted in a variety of places, and it was why I was interviewed by someone at CHE in the fall. (BTW, I think I’m going to write an update to that post, which will be about why it’s time to return to some pre-Covid requirements). I started blogging about MOOCs in 2012, which led to a short article in College Composition and Communication and numerous other articles and presentations, a few invited speaking gigs (including TWO conferences sponsored by the University of Naples on the Isle of Capri), an edited collection, and a book.

Now, most of the people I know in the field who once blogged have stopped (or mostly stopped) for one reason or another. I certainly do not post here nearly as often as I did before the arrival of Facebook and Twitter, and it makes sense for people to move on to other things. I’ve thought about giving it up, and there have been times when I didn’t post anything for months. Even the extremely prolific and smart local blogger Mark Maynard gave it all up, I suspect because of a combination of burn-out, Trump being voted out, and the additional work/responsibility of the excellent restaurant he co-owns/operates, Bellflower.

Plus if you do a search for “academic blogging is bad,” you’ll find all sorts of warnings about the dangers of it– all back in the day, of course. Deborah Brandt seemed to think it was mostly a bad idea (2014), and The Guardian suggested it was too risky (2013), especially for grad students posting work in progress. There were lots of warnings like this back then. None of them ever made any sense to me, though I didn’t start blogging until after I was on the tenure-track here. And no one at EMU has ever said anything negative to me about doing this, and that includes administrators even back in the old days of EMUTalk.

Anyway, I guess I’m just reflecting/musing now about why this very old-timey practice from the olde days of the Intertubes still matters, at least to me. About 95% of the posts I’ve written are barely read or noticed at all, and that’s fine. But every once in a while, I’ll post something, promote it a bit on social media, and it catches on. And then sometimes, a post becomes something else– an invited talk, a conference presentation, an article. So yeah, it’s still worth it.

Is AI Going to be “Something” or “Everything?”

Way back in January, I applied for release time from teaching for one semester next year– either a sabbatical or what’s called here a “faculty research fellowship” (FRF)– in order to continue the research I’ve been doing about teaching online during Covid. This is work I’ve been doing since fall 2020, including a Zoom talk at a conference in Europe and a survey I ran for about six months, and from that survey, I was able to recruit and interview a bunch of faculty about their experiences. I’ve gotten a lot out of this work already: a couple conference presentations (albeit in the kind of useless “online/on-demand” format), a website (which I had to code myself!), an article, and, just last year, I was on one of those FRFs.

Well, a couple weeks ago, I found out that I will not be on sabbatical or FRF next year. My proposal, which was about seeking time to code and analyze all of the interview transcripts I collected last year, got turned down. I am not complaining about that: these awards are competitive, and I’ve been fortunate enough to receive several of these before, including one for this research. But not getting release time is making me rethink how much I want to continue this work, or if it is time for something else.

I think studying how Covid impacted faculty attitudes about online courses is definitely something important worth doing. But it is also looking backwards, and it feels a bit like an autopsy or one of those commissioned reports. And let’s be honest: how many of us want to think deeply about what happened during the pandemic, recalling the mistakes that everyone already knows they made? A couple years after the worst of it, I think we all have a better understanding now of why people wanted to forget the 1918 pandemic.

It’s 20/20 hindsight, but I should have put together a sabbatical/research leave proposal about AI. With good reason, the committee that decides on these release time awards tends to favor proposals that are for things that are “cutting edge.” They also like to fund releases for faculty who have book contracts who are finishing things up, which is why I have been lucky enough to secure these awards both at the beginning and end of my MOOC research.

I’ve obviously been blogging about AI a lot lately, and I have casually started amassing quite a number of links to news stories and other resources related to Artificial Intelligence in general, ChatGPT and OpenAI in particular. As I type this entry in April 2023, I already have over 150 different links to things without even trying– I mean, this is all stuff that just shows up in my regular diet of social media and news. I even have a small invited speaking gig about writing and AI, which came about because of a blog post I wrote back in December— more on that in a future post, I’m sure.

But when it comes to me pursuing AI as my next “something” to research, I feel like I have two problems. First, it might already be too late for me to catch up. Sure, I’ve been getting some attention by blogging about it, and I had a “writing with GPT-3” assignment in a class I taught last fall, which I guess kind of puts me at least closer to being current with this stuff in terms of writing studies. But I also know there are already folks in the field (and I know some of these people quite well) who have been working on this for years longer than me.

Plus a ton of folks are clearly rushing into AI research at full speed. Just the other day, the CWCON at Davis organizers sent around a draft of the program for the conference in June. The Call For Proposals they released last summer describes the theme of this year’s event, “hybrid practices of engagement and equity.” I skimmed the program to get an idea of the overall schedule and some of what people were going to talk about, and there were a lot of mentions of ChatGPT and AI, which makes me think a lot of people are probably not going to talk about the CFP theme at all.

This brings me to the bigger problem I see with researching and writing about AI: it looks to me like this stuff is moving very quickly from being “something” to “everything.” Here’s what I mean:

A research agenda/focus needs to be “something” that has some boundaries. MOOCs were a good example of this. MOOCs were definitely “hot” from around 2012 to 2015 or so, and there was a moment back then when folks in comp/rhet thought we were all going to be dealing with MOOCs for first year writing. But even then, MOOCs were just a “something”  in the sense that you could be a perfectly successful writing studies scholar (even someone specializing in writing and technology) and completely ignore MOOCs.

Right now, AI is a myriad of “somethings,” but this is moving very quickly toward “everything.” It feels to me like very soon (five years, tops), anyone who wants to do scholarship in writing studies is going to have to engage with AI. Successful (and even mediocre) scholars in writing studies (especially those specializing in writing and technology) are not going to be able to ignore AI.

This all reminds me a bit of what happened with word processing technology. Yes, this really was something people studied and debated way back when. In the 1980s and early 1990s, there were hundreds of articles and presentations about whether or not to use word processing to teach writing— for example, “The Word Processor as an Instructional Tool: A Meta-Analysis of Word Processing in Writing Instruction” by Robert L. Bangert-Drowns, or “The Effects of Word Processing on Students’ Writing Quality and Revision Strategies” by Ronald D. Owston, Sharon Murphy, and Herbert H. Wideman. These articles were both published in the early 1990s in major journals, and both try to answer the question of which approach is “better.” (By the way, most, but far from all, of these studies concluded that word processing is better in the sense that it helped students generate more text and revise more frequently. It’s also worth mentioning that a lot of this research overlaps with studies about the role of spell-checking and grammar-checking in writing pedagogy).

Yet in my recollection of those times, this comparison between word processing and writing by hand was rendered irrelevant because everyone– teachers, students, professional writers (at least all but the most stubborn, as Wendell Berry declares in his now cringy and hopelessly dated short essay “Why I Am not Going to Buy a Computer”)– switched to word processing software on computers to write. When I started teaching as a grad student in 1988, I required students to hand in typed papers, and I strongly encouraged them to write at least one of their essays with a word processing program. Some students complained because they had never been asked to type anything in high school. By the time I started my PhD program five years later in 1993, students all knew they needed to type their essays on a computer, generally with MS Word.

Was this shift a result of some research consensus that using a computer to type texts was better than writing texts out by hand? Not really, and obviously, there are lots of reasons why people still write some things by hand– a lot of personal writing (poems, diaries, stories, that kind of thing) and a lot of note-taking. No, everyone switched because everyone realized word processing made writing easier (but not necessarily better) in lots and lots of different ways, and that was that. Even in the midst of this panicky moment about plagiarism and AI, I have yet to read anyone seriously suggest that we make our students give up Word or Google Docs and require them to turn in handwritten assignments. So, as a researchable “something,” word processing disappeared because (of course) everyone everywhere who writes obviously uses some version of word processing, which means the issue is settled.

One of the other reasons why I’m using word processing scholarship as my example here is because both Microsoft and Google have made it clear that they plan on integrating their versions of AI into their suites of software– and that would include MS Word and Google Docs. This could be rolling out just in time for the start of the fall 2023 semester, maybe earlier. Assuming this is the case, people who teach any kind of writing at any kind of level are not going to have time to debate if AI tools will be “good” or “bad,” and we’re not going to be able to study any sorts of best practices either. This stuff is just going to be a part of the everything, and for better or worse, that means the issue will soon be settled.

And honestly, I think the “everything” of AI is going to impact, well, everything. It feels to me a lot like when “the internet” (particularly with the arrival of web browsers like Mosaic in 1993) became everything. I think the shift to AI is going to be that big, and it’s going to have as big of an impact on every aspect of our professional and technical lives– certainly every aspect that involves computers.

Who the hell knows how this is all going to turn out, but when it comes to what this means for the teaching of writing, as I’ve said before, I’m optimistic. Just as the field adjusted to word processing (and spell-checkers and grammar-checkers, and really just the whole firehose of text from the internet), I think we’ll be able to adjust to this new something to everything too.

As far as my scholarship goes, though: for reasons, I won’t be eligible for another release from teaching until the 2025-26 school year. I’m sure I’ll keep blogging about AI and related issues, and maybe that will turn into a scholarly project. Or maybe we’ll all be on to something entirely different in three years….

 

AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is my students and I have spent the last month or so noodling around with software and reading about both the potentials and dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, arguing that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And we still had to edit the different pieces together, write transitions between AI-generated chunks of text, and so forth.

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool.  These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true of Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”
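The statistical idea Vincent describes can be sketched with a toy example: a “model” that only counts which word follows which in a sample of text, and then “predicts” the most frequent follower. This is a drastic simplification, of course (real LLMs like GPT-3 use neural networks over subword tokens, not raw word counts), but it illustrates why such systems can produce fluent text without any grounding in facts:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny sample corpus.
# (Toy illustration only -- NOT how GPT-3 actually works.)
corpus = "the essay is due the essay is long the deadline is near".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common next word, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("essay"))  # "is" -- it follows "essay" twice in the corpus
print(predict_next("the"))    # "essay" -- the most frequent follower of "the"
```

Notice that the model has no idea what an essay or a deadline *is*; it only knows what tends to come next. Scale that up by a few billion parameters and you get fluent prose with the same blind spot about facts and citations.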

I think this limitation (along with the limitation that GPT-3 and ChatGPT are not capable of searching the internet) makes using GPT-3 as a plagiarism tool in any kind of research writing class kind of a deal-breaker. It certainly would not get students far in most sections of freshman comp where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts by making writing assignments that are specific and that require an investment in things like prewriting and a series of assignments and activities that are “scaffolding” for a larger writing assignment. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Poorly designed writing assignments make good examples of the kinds of prompts you can enter into GPT-3. The results are definitely impressive, but I don’t think they’re quite good enough to produce work a would-be cheater can pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000-word requirement, they both kind of read like… well, like a robot wrote them. I think most instructors who received this essay from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well-researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and I then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing and it changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have only spent so much time experimenting with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” payment plan, I might learn more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

A lot of what Leonhardt said in ‘Not Good for Learning’ is just wrong

I usually agree with David Leonhardt’s analysis in his New York Times newsletter “The Morning” because I think he does a good job of pointing out how both the left and the right have certain beliefs about issues– Covid in particular for the last couple years, of course– that are sometimes at odds with the evidence. But I have to say that this morning’s newsletter and the section “Not Good For Learning” ticks me off.

While just about every K-12 school went online when Covid first hit in spring 2020, a lot of schools/districts resumed in-person classes in fall 2020, and a lot did not. Leonhardt said:

These differences created a huge experiment, testing how well remote learning worked during the pandemic. Academic researchers have since been studying the subject, and they have come to a consistent conclusion: Remote learning was a failure.

Now, perhaps I’m overreacting to this passage because of my research about teaching online at the college level, but the key issue here is that he’s talking about K-12 schools that had never done anything close to online/remote instruction before. He is not talking about post-secondary education at all, which is where remote learning has, for the most part, worked just fine for 125+ years. Maybe that’s a distinction most readers will understand anyway, but I kind of doubt it, and not bringing it up at all is inaccurate and just sloppy.

Obviously, remote learning in the vast majority of K-12 schools went poorly during Covid, and in completely predictable ways. Few of these teachers had any experience or training to teach online, and few of these school districts had the kinds of technologies and tools (like Canvas and Blackboard and other LMSes) to support these courses. This has been a challenge at the college level too, but a lot more college teachers at various levels and various types of institutions had at least some pre-Covid experience teaching online, most colleges and universities have more tech support, and a lot (most?) college teachers were already making use of an LMS tool and using a lot more electronic tools for essays and tests (as opposed to paper) in their classes.

The students are also obviously different. When students in college take classes online, it’s a given that they will have the basic technology of a laptop and easy access to the internet. It’s also fairly clear from the research (and I’ve seen this in my own experiences teaching online) that the students who do best in these formats are more mature and more self-disciplined. Prior to Covid, online courses were primarily for “non-traditional” students who were typically older, out in the workforce, and with responsibilities like caring for children or others, paying a mortgage, and so forth. These students, who are typically juniors/seniors or grad students, have been going to college for a while, they understand the expectations of a college class, and (at least the students who are most successful) have what I guess I’d describe as the “adulting” skills to succeed in the format. I didn’t have a lot of first and second year students in online classes before Covid, but a lot of the ones I did have during the pandemic really struggled with these things. Oh sure, I did have some unusually mature and “together” first year students who did just fine, but a lot of the students we have at EMU at this level started college underprepared for the expectations, and adding on the additional challenge of the online format was too much.

So it is not even a teeny-weeny surprise that a lot of teenagers/secondary students– many of whom were struggling to learn and succeed in traditional classrooms– did not succeed in hastily thrown together and poorly supported online courses, and do not even get me started on the idea of grade school kids being forced to sit through hours of Zoom calls. I mean honestly, I think these students probably would have done better if teachers had just sent home worksheets and workbooks and other materials to the kids and the parents to study on their own.

I think a different (and perhaps more accurate) way to study the effectiveness of remote learning would be to look at what some K-12 schools were doing before Covid. Lots and lots of kids and their parents use synch and asynch technology to supplement home schooling, and programs like the Michigan Online School have been around for a while now. Obviously, home schooling or online schooling is not right for everyone, but these programs are also not “failures.”

Leonhardt goes on to argue that schools serving poor students and/or non-white students went remote for longer than other schools. Leonhardt claims there were two reasons for this:

Why? Many of these schools are in major cities, which tend to be run by Democratic officials, and Republicans were generally quicker to reopen schools. High-poverty schools are also more likely to have unionized teachers, and some unions lobbied for remote schooling.

Second, low-income students tended to fare even worse when schools went remote. They may not have had reliable internet access, a quiet room in which to work or a parent who could take time off from work to help solve problems.

First off, what Leonhardt seems to forget is that Covid was most serious in “the major cities” in this country, and also among populations that were non-white and poor. So of course school closings were more frequent in these areas because of Covid.

Second, while it is quite easy to complain about the teacher unions, let us all remember that in fall 2020 it was not nearly as clear as Leonhardt implies that the risks of Covid in the schools were small. It did turn out that those settings weren’t as risky as we thought, but that “not as risky” analysis primarily applies to students. A lot of teachers got sick, and a few died. I wrote about some of this back in February here. I get the idea that most people who were demanding their K-12 schools open immediately only had their kids in mind (though a lot of these parents were also the same ones adamant against mask and vaccine mandates), and if I had a kid still in school, I might feel the same way. But most people (and I’d put Leonhardt in this camp in this article) didn’t think for a second about the employees, and at the end of the day, working in a public school setting is not like being in the ministry or some other job where we expect people to make huge personal sacrifices for others. Being a teacher is a white-collar job. Teachers love to teach, sure, but we shouldn’t expect them to put their own health and lives at any level of risk– even if it’s small– just because a lot of parents haven’t sorted out their childcare situations.

Third, the idea that low-income students fared worse in remote classes (and I agree, they certainly did) is bad, but that has nothing to do with why they spent more time online in the first place. That just doesn’t make sense.

Leonhardt goes on:

In places where schools reopened that summer and fall, the spread of Covid was not noticeably worse than in places where schools remained closed. Schools also reopened in parts of Europe without seeming to spark outbreaks.

I wrote about this back in February: these schools didn’t reopen because they never closed! They tried the best they could and often failed, but as far as I can tell, no K-12 school in this country, public or private, just closed and told folks “we’ll reopen after Covid is over.” Second, most of the public schools (and universities as well) that went back to at least some f2f instruction in Fall 2020 were in parts of the country where being outside and/or leaving classroom windows open is a lot easier than it is in Michigan, and/or were schools that had the resources to do things like create smaller classes for social distancing, install ventilation equipment, and so forth.

Third– and I cannot believe Leonhardt doesn’t mention this because I know this is an issue he has written about in the past– the comparison to what went on with schools in Europe is completely bogus. In places like Germany and France, they put a much much higher priority on opening schools– especially as compared to things like restaurants and bars and other places where Covid likes to spread. So they kept those kinds of places closed longer, and the chances of a Covid outbreak in the schools were smaller. Plus Europeans are much MUCH smarter about things like mask and vaccine mandates too.

No, the pandemic was not good for learning, but it was not good for anything else, either. It wasn’t good for our work/life balances, our mental health, a lot of our household incomes, on and on and on. We have all suffered mightily through it, and I am certain that as educators of all stripes study and reflect on the last year and a half, we’ll all learn a lot about what worked and what didn’t. But after two years of trying their fucking best to do the right things, there is no reason to throw K-12 teachers under the bus now.

My CCCCs 2022

Here’s a follow-up (of sorts) on my CCCCs 2022 experiences– minus the complaining, critiques, and ideas on how it could have been better. Oh, I have some thoughts, but to be honest, I don’t think anyone is particularly interested in those thoughts. So I’ll keep that to myself and instead focus on the good things, more or less.

When the CCCCs went online for 2022 and I was put in the “on demand” sessions, my travel plans changed. Instead of going to Chicago on my own to enjoy conferencing, my wife and I decided to rent a house on a place called Seabrook Island in South Carolina near Charleston. We both wanted to get out of Michigan to someplace at least kind of warm, and the timing on the rental and other things was such that we were on the road for all the live sessions, so I missed out on all of that. But I did take advantage of looking at some of the other on demand sessions to see what was there.

Now, I have never been a particularly devout conference attendee. Even at the beginning of my career attending that first CCCCs in 1995 in Washington, DC, when everything was new to me, I was not the kind of person who got up at dawn for the WPA breakfast or even for the 9 am keynote address, the kind of conference goer who would then attend panels until the end of the day. More typical for me is to go to about two or three other panels (besides my own, of course), depending on what’s interesting and, especially at this point of my life, depending on where it is. I usually spend the rest of the time basically hanging out. Had I actually gone to Chicago, I probably would have spent at least half a day doing tourist stuff, for example.

The other thing that has always been true about the CCCCs is that even though there are probably over 1000 presentations, the theme of the conference and the chair who puts it together definitely shape what folks end up presenting about. Sometimes that means there are fewer presentations that connect to my own interests in writing and technology– and as of late, that has specifically been about teaching online. That was the case this year. Don’t get me wrong, I think the theme(s) of identity, race, and gender invoked in the call are completely legitimate and important topics of concern, and I’m interested in them both as a scholar and just as a human being. But at the same time, that’s not the “work” I do, if that makes sense.

That said, there’s always a bit of something for everyone. Plus the one (and only, IMO) advantage of the on demand format is the materials are still accessible through the CCCCs conference portal. So while enjoying some so-so weather in a beach house, I spent some time poking around the online program.

First off, for most of the links below to work, you have to be registered for and signed into the CCCCs portal, which is here:

https://app.forj.ai/en?t=/tradeshow/index&page=lobby&id=1639160915376

If you never registered for the conference at all, you won’t be able to access the sessions, though the program of on-demand sessions is available to anyone here. As I understand it, the portal will remain open/accessible for the month of March (though I’m not positive about that). Second, the search feature for the portal is… let’s just say “limited.” There’s no connection between the portal and the conference on-demand program, so you have to look through the program and then do a separate search of the portal opened in a different browser tab. The search engine doesn’t work at all if you include any punctuation, and for the most part, it only returns results when you enter in a few words and not an entire title. My experience has been it seems to work best if you enter in the first three words of the session title. Again, I’m not going to complain….

So obviously, the first thing I found/went to was my own panel:

OD-301 Researching Communication in Practice

There’s not much there. One of the risks of proposing an individual paper for the CCCCs rather than as part of a panel or roundtable discussion is how you get grouped with other individual submissions. Sometimes this all ends up working out really well, and sometimes it doesn’t. This was in the category of “doesn’t.” Plus it looks to me like three of the other five people on the program for this session essentially bailed out and didn’t post anything.

Of course, my presentation materials are all available here as Google documents, slides, and a YouTube video.

To find other things I was interested in, I did a search for the key terms “distance” (as in distance education– zero results) and “online,” which had 54 results. A lot of those sessions– a surprising number to me, actually– involved online writing centers, both in terms of adapting to Covid and in terms of shifting more work in writing centers to online spaces. Interesting, but not quite what I was looking for.

So these are the sessions I dug into a bit more and I’ll probably be going back to them in the next weeks as I keep working on my “online and the new normal” research:

OD-45 So that just happened…Where does OWI go from here?: Access, Enrollment, and Relevance

Really nice talk that sums up some of the history and talks in broad ways about some of the experiences of teaching online in Covid. Of course, I’m also always partial to presentations that agree with what I’m finding in my own research, and this talk definitely does that.

OD-211 Access and Community in Online Learning– specifically, Ashley Barry, University of New Hampshire, “Inequities in Digital Literacies and Innovations in Writing Pedagogies during COVID-19 Learning.”

Here’s a link to her video in the CCCCs site, and here’s a Google Slides link. At some point, I think I might have to send this PhD student at New Hampshire an email because it seems like Barry’s dissertation research is similar to what I am (kinda/sorta) trying to do with my own research about teaching online during Covid. She is working with a team of researchers from across the disciplines on what is likely a more robust albeit local study than mine, but again, with some similar kinds of conclusions.

OD-295 Prospects for Online Writing Instruction after the Pandemic Lockdown— specifically, Alexander Evans, Cincinnati State Technical and Community College, “Only Out of Necessity: The Future of Online Developmental First-Year Writing Courses in Post-Pandemic Society.”

Here’s a link to his video and his slides (which I think are accessible outside of the CCCCs portal). What I liked about Evans’ talk is that it comes from someone very new to teaching at the college level in general, new to community college work, and (I think) new to online teaching as well. A lot of it is about the wonkiness of what happens (as is, I think, not uncommon at a lot of community colleges for classes like developmental writing) when instructors more or less get handed a fully designed course and are told “teach this.” I would find that incredibly difficult, and part of Evans’ argument here is that if his institution is really going to give people access to higher education, then it needs to offer this class in an online format– and not just during the pandemic.

So that was pretty much my CCCCs experience for 2022. I’m not sure when (or if) I’ll be back.

CCCCs 2022 (part 1?)

Here is a link (bit.ly/krause4c22) to my “on demand” presentation materials for this year’s annual Conference on College Composition and Communication. It’s a “talk” called “When ‘You’ Cannot be ‘Here:’ What Shifting Teaching Online Teaches Us About Access, Diversity, Inclusion, and Opportunity.” As I wrote in the abstract/description of my session:

My presentation is about a research project I began during the 2020-21 school year titled “Online Teaching and the ‘New Normal.’” After discussing broadly some assumptions about online teaching, I discuss my survey of instructors teaching online during Covid, particularly the choice to teach synchronously versus asynchronously. I end by returning to the question of my subtitle.

I am saying this is “part 1?” because I might or might not write a recap post about the whole experience. On the one hand, I have a lot of thoughts about how this is going so far, how the online experience could have been better. On the other hand (and I’ve already learned this directly and indirectly on social media), the folks at NCTE generally seem pretty stressed out and overwhelmed and everything else, and it kind of feels like any kind of criticism, constructive or otherwise, will be taken as piling on. I don’t want to do that.

I’m also not sure there will be a part 2 because I’m not sure how much conferencing I’ll actually be able to do. When the conference went all online, my travel plans changed. Now I’m going to be on the road during most of the live sessions, so most of my engagement will have to be in the on demand space. Though hopefully, there will be some recordings of events available for a while, things like Anita Hill’s keynote speech.

The thing I’ll mention for now is my reasons for sharing my materials in the online/on demand format outside the walled garden of the conference website itself. I found out that I was assigned to present in the “on demand” format of the conference– if I do write a part 2 to this post, I’ll come back to that decision process then. In any event, the instructions the CCCCs provided asked presenters to upload materials– PDFs, PPT slides, videos, etc.– to the server space for the conference. I emailed “ccccevents” and asked if that was a requirement. This was their response:

We do suggest that you load materials directly into the platform through the Speaker Ready Room for content security purposes (once anyone has the link outside of the platform, they could share it with anyone). However, if you really don’t want to do that, you could upload a PDF or a PPT slide that directs attendees to the link with your materials.

The “Speaker Ready Room” is just what they call the portal page for uploading stuff. The phrase I puzzled over was “content security purposes” and trying to prevent the possibility that anyone anywhere could share a link to my presentation materials. Maybe I’m missing something, but isn’t that kind of the point of scholarship? That we present materials (presentations, articles, keynote speeches, whatever) in the hopes that those ideas and thoughts and arguments are made available to (potential) readers who are anyone and anywhere?

I’ve been posting web-based versions of conference talks for a long time now– sometimes as blog posts, as videos, as Google Slides with notes, etc. I do it mainly because it’s easy for me to do, I believe in as much open access to scholarship as possible, and I’m trying to give some kind of life to this work beyond 15 minutes of me talking to (typically) fewer than a dozen people. I wouldn’t say any of my self-published conference materials have made much difference in the scholarly trajectory of the field, but I can tell from some of the tracking stats that these web-based versions of talks get many times more “hits” than the size of the audience at the conference itself. Of course, that does not really mean that the 60 or 100 or so people who clicked on a link to a slide deck are nearly as engaged an audience as the 10 people (plus other presenters) who were actually sitting in the room when I read my script, followed by a discussion. But it’s better than not making it available at all.

Anyway, we’ll see how this turns out.

“Synch Video is Bad,” perhaps a new research project?

As Facebook has been reminding me far too often lately, things were quite different last year. Last fall, Annette and I both had “faculty research fellowships,” which meant that neither of us was teaching because we were working on research projects. (It also meant we did A LOT of travel, but that’s a different post). I was working on a project that was officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” a project I always referred to as the “Classroom Tech Bans are Bullshit” project.

It was going along well, albeit slowly. I gave a conference presentation about it all at the Great Lakes Writing and Rhetoric Conference in September, and by early October, I was circulating a snowball sampling survey to students and instructors (via mailing lists, social media, etc.) about their attitudes toward laptops and devices in classes. I blogged about it some in December, and while I wasn’t making as much progress as quickly as I would have preferred, I was getting together a presentation for the CCCCs and ready to ramp up the next steps: sorting through the results of the survey and contacting individuals for follow-up case study interviews.

Then Covid.

Then the mad dash to shove students and faculty into the emergency lifeboats of makeshift online classes, kicking students out of the dorms with little notice, and a long and troubling summer of trying to plan ahead for the fall without knowing exactly what universities were going to do about where/in what mode/how to hold classes. Millions of people got sick, hundreds of thousands died, the world economy descended into chaos. And Black Lives Matter protests, Trump descending further into madness, forest fires, etc., etc.

It all makes the debate about laptops and cell phones in classes seem kind of quaint and old-fashioned and irrelevant, doesn’t it? So now I’m mulling over starting a different but similar project about faculty (and perhaps student) attitudes about online courses– specifically, synchronous video-conference online classes (mostly Zoom or Google Meetings).

Just to back up a step: after teaching online since about 2005, after doing a lot of research on best practices for online teaching, after doing a lot of writing and research about MOOCs, I’ve learned at least two things about teaching online:

  • Asynchronous instruction works better than synchronous instruction because of the affordances (and limitations) of the medium.
  • Video– particularly videos of professors just lecturing into a webcam while students (supposedly) sit and pay attention– is not very effective.

Now, conventional wisdom often turns out to be wrong, and I’ll get to that. Nonetheless, for folks who have been teaching online for a while, I don’t think either of these statements is remotely controversial or in dispute.

And yet, judging from what I see on social media, a lot of my colleagues who are teaching online this fall for the first time are completely ignoring these best practices: they’re teaching synchronous classes during the originally scheduled time of the course and they are relying heavily on Zoom. In many cases (again, based on what I’ve seen on the internets), instructors have no choice: that is, the institution is requiring that what were originally scheduled f2f classes be taught with synch video regardless of what the instructor wants to do, what the class is, and if it makes any sense. But a lot of instructors are doing this to themselves (which, in a lot of ways, is even worse). In my department at EMU, all but a few classes are online this fall, and as far as I can tell, many (most?) of my colleagues have decided on their own to teach their classes with Zoom and synchronously.

It doesn’t make sense to me at all. It feels like a lot of people are trying to reinvent the wheel, which in some ways is not that surprising because that’s exactly what happened with MOOCs. When the big for-profit MOOC companies like Coursera and Udacity and EdX and many others got started, they didn’t reach out to universities that were already experienced with online teaching. Instead, they reached out to themselves and peer institutions– Stanford, Harvard, UC-Berkeley, Michigan, Duke, Georgia Tech, and lots of other high profile flagships. In those early TED talks (like this one from Daphne Koller and this one from Peter Norvig), it really really seems like these people sincerely believe that they were the first ones to ever actually think about teaching online, that they had stumbled across an undiscovered country. But I digress.

I think requiring students to meet online but synchronously for a class via Zoom simply is putting a round peg into a square hole. Imagine the logical opposite situation: say I was scheduled to teach an asynchronous online class that was suddenly changed into a traditional f2f class, something that meets Tuesdays and Thursdays from 10 am to 11:45 am. Instead of changing my approach to this now different mode/medium, I decided I was going to teach the class as an asynch online class anyway. I’d require everyone to physically show up to the class on Tuesdays and Thursdays at 10 am (I have no choice about that), but instead of taking advantage of the mode of teaching f2f, I did everything all asynch and online. There’d be no conversation or acknowledgement that we were sitting in the same room. Students would only be allowed to interact with each other in the class LMS. No one would be allowed to actually talk to each other, though texting would be okay. Students would sit there for 75 minutes, silently doing their work but never allowed to speak with each other, and as the instructor, I would sit in the front of the room and do the same. We’d repeat this at all meetings the entire semester.

A ridiculous hypothetical, right? Well, because I’m pretty used to teaching online, that’s what an all-Zoom class looks like to me.

The other problem I have with Zoom is its part in policing and surveilling both students and teachers. Inside Higher Ed and the Chronicle of Higher Education both published inadvertently hilarious op-eds written to an audience of faculty about how they should maintain their own appearances and their “Zoom backgrounds” to project professionalism and respect. And consider this post on Twitter:


I can’t verify the accuracy of these rules, but it certainly sounds like it could be true. When online teaching came up in the first department meeting of the year (held on Zoom, of course), the main concern voiced by my colleagues who had never taught online before was dealing with students who misbehave in these online forums. I’ve seen similar kinds of discussions about how to surveil students from other folks on social media. And what could possibly motivate a teacher’s need to have bodily control over what their students do in their own homes to the point of requiring them to wear fucking shoes?

This kind of “soft surveillance” is bad enough, but as I understand it, one of the features Zoom sells to institutions is robust data on what users do with it: who is logged in, when, for how long, etc. I need to do a little more research on this, but as I was discussing on Facebook with my friend Bill Hart-Davidson (who is in a position to know more about this both as an administrator and as someone who has done the scholarship), this is clearly data that can be used to effectively police both teachers’ and students’ behavior. The overlords might have the power to make us wear shoes at all times on Zoom after all.

On the other hand…

The conventional wisdom about teaching online asynchronously and without Zoom might be wrong, and that makes it potentially interesting to study. For example, the main reason online classes are almost always asynchronous is scheduling: that flexibility is what helps students take classes in the first place. But if you could have a class that was mostly asynchronous but with some previously scheduled synchronous meetings as a part of the mix, well, that might be a good thing. I’ve tried to teach hybrid classes in the past that approach this, though I think Zoom might make this a lot easier in all kinds of ways.

And I’m not a complete Zoom hater. I started using it (or Google Meetings) last semester in my online classes for one-on-one conferences, and I think it worked well for that. I actually prefer our department meetings on Zoom because it cuts down on the number of faculty who just want to pontificate about something for no good reason (and I should note I am very very much one of those kinds of faculty members, at least once in a while). I’ve read faculty justifying their use of Zoom based on what they think students want, and maybe that turns out to be true too.

So, what I’m imagining here is another snowball sample survey of faculty (maybe students as well) about their use of Zoom. I’d probably continue to focus on small writing classes, both because it’s my field and because of different ideas about what teaching means in different disciplines. As was the case with the laptop bans are bullshit project, I think I’d want to continue to focus on attitudes about online teaching generally and Zoom in particular, mainly because I don’t have the resources or skills as a researcher to do something like an experimental design that compares the effectiveness of a Zoom lecture versus a f2f one versus an asynchronous discussion on a topic– though as I type that, I think that could be a pretty interesting experiment. Assuming I could get folks to respond, I’d also want to use the survey to recruit participants for one-on-one interviews, which I think would yield more revealing and relevant data, at least for the basic questions I have now:

  • Why did you decide to use a lot of Zoom and do things synchronously?
  • What would you do differently next time?

What do you think, is this an idea worth pursuing?

What We Learned in the “MOOC Moment” Matters Right Now

I tried to share a link to this post, which is on a web site I set up for my book More Than a Moment, but for some reason, Facebook is blocking that– though not this site. Odd. So to get this out there, I’m posting it here as well. –Steve

I received an email from Utah State University Press the other day inviting me to record a brief video to introduce More Than a Moment to the kinds of colleagues who would have otherwise seen the book on display in the press’ booth at the now cancelled CCCCs in Milwaukee. USUP is going to be hosting a “virtual booth” on their web site in an effort to get the word out about books they’ve published recently, including my own.

So that is where this is coming from. Along with recording a bit of video, I decided I’d also write about how I think what I wrote about MOOCs matters right now, when higher education is now suddenly shifting everything online.

I don’t want to oversell this here. MOOCs weren’t a result of an unprecedented global crisis, and MOOCs are not the same thing as online teaching. Plus what faculty are being asked to do right now is more akin to getting into a lifeboat than it is to actual online teaching, a point I write about in some detail here.

That said, I do think there are some lessons learned from the “MOOC Moment” that are applicable to this moment.

Continue reading “What We Learned in the “MOOC Moment” Matters Right Now”

A Bit of Brainstorming About Holding The CCCCs (and other academic conferences) F2F and Online

I’m not that worried about catching Covid-19 and dying from it (though I don’t know, maybe I should be), but I can understand why people are concerned both for themselves and for others, and I can understand why there have been travel restrictions and school closures and all the rest. So while it’s probably too late to contain coronavirus and perhaps we’ve all already been exposed to it anyway, I do get why events are getting cancelled and why potentially sick people are self-quarantining and the like.

Which brings me to this year’s annual Conference on College Composition and Communication, scheduled to take place March 25-28: perfect timing for Covid-19 to have everything cancelled and all of us home and alone and constantly washing our hands, and not conferencing in Milwaukee. Well, potentially; and if the conference goes on as planned, I’m still planning to go. But that’s all still a big “if.”

Now, one of the things that’s come up a lot on Facebook and Twitter and the like is the idea of “just move it online.” I’ve been saying a version of that myself, though long before coronavirus. I know first hand that “just move it online” is not something that just happens magically, quickly, easily, and for free. But I also have some ideas on how this might work, and because it came up on Facebook (Julie Lindquist, who is chair of the conference this year, asked me to share my thoughts) and because I’m procrastinating from grading, I thought I’d write about that.

The TL;DR version: the conference should have a web site and allow online participants to share links to their online presentations on that web site.

A few disclaimers. First, I don’t have much of a dog in this fight because while I’ve been going to the CCCCs off and on my entire career, it’s just not that important of an event for me any more. Second, I have systematically avoided getting involved in some kind of CCCC or NCTE service and I’m not planning on starting now. Maybe that is a mistake on my part, but it is what it is. And third, I’m not talking about doing away with the face to face conference. I think that’d be a bad idea. Rather, I’m just talking about giving people the chance to participate while not actually being there physically, and I’m talking about a way of preserving and sharing presentations beyond the moment of reading a paper and pointing at a slide show in a nearly empty room at a conference hotel.

Fourth– and this is an important one– the CCCCs can’t “just move it online” in less than three weeks. It is simply not enough time. Yeah, it sucks and it sucks a lot, and maybe participants could try to use Google Hangout on their own (see below), but I think it’s too late for the CCCCs organizers to systematically create an official online presentation mode. What I’m talking about here are ideas to think about for next year and beyond because there are lots of reasons to make academic conferences more accessible beyond a pandemic.

With that, some brainstorming/ideas: Continue reading “A Bit of Brainstorming About Holding The CCCCs (and other academic conferences) F2F and Online”

Still more on the “Classroom Tech Bans are Bullshit (or not)” project, in which I go down the tangent of note-taking

I spent most of my Thanksgiving break back in Iowa, and along the way, I chatted with my side of the family about my faculty research fellowship project, “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” aka “Classroom Tech Bans are Bullshit.” It’s always interesting talking to my non-academic family-types about academic things like this.

“So, you’re on sabbatical right now?” Not exactly. I’m not teaching so I can spend more time on research, but I’m still expected to go to meetings and things like that. Though honestly, I’ve skipped some of that stuff too, and it’s generally okay.

“Is there some kind of expectation for what you are supposed to be researching? What happens if you don’t do it?” Well, it is a competitive process for getting these fellowships in the first place, and there’s of course an expectation that I’ll do what I proposed. And I have done that, more or less, and I will have to write a report about that soon. But the implications (consequences?) of not doing all of what I originally proposed are vague at best.

“So, you’re not really working right now?” No no no, that’s not true. I’m working quite a bit, actually. But I’m doing this work because I want to, though I’m doing this work mostly at home and often in pajamas and I have an extremely flexible schedule right now (which is why we’re going to Morocco in a few days, but that’s another story for later), so I can understand why you might ask that.

“Being a professor is kind of a weird job, isn’t it?” Yes, yes it is.

Anyway, since I last blogged about this project back in September, I’ve been a bit distracted by department politics (don’t ask) and by prepping for teaching in the Winter term, which for me involves some new twists on old courses and also a completely new prep. But the research continues.

Back in October, I put together and conducted a survey for students and faculty about their attitudes/beliefs on the use of laptops and cell phones in classes. Taking the advice I often give my grad students in situations like this, I did not reinvent the wheel and instead based this survey on similar work by Elena Neiterman and Christine Zaza who are both at the University of Waterloo in Ontario and who both (I think) work in that school’s Public Health program. They published two articles right up my alley for this project: “A Mixed Blessing? Students’ and Instructors’ Perspectives about Off-Task Technology Use in the Academic Classroom” and “Does Size Matter? Instructors’ and Students’ Perceptions of Students’ Use of Technology in the Classroom.” I emailed to ask if they would be willing to share their survey questions and they generously agreed, so thanks again!

I’ll be sorting through and presenting about the results of this at the CCCCs this year and hopefully in an article (or articles) eventually. But basically, I asked for participants on social media, the WPA-L mailing list (had to briefly rejoin that!), and at EMU. I ended up with 168 respondents, 57% students and 43% instructors, most of whom aren’t at EMU. The results are in the ballpark of/consistent with Neiterman and Zaza (based just on percentages– I have no idea if there’s a way to legitimately claim any kind of statistically significant comparison), though I think it’s fair to say both students and instructors in my survey are more tolerant and even embracing of laptops and cellphones in the classroom. I think that’s both because these are all smaller classes (Neiterman and Zaza found that size does indeed matter and devices are more accepted in smaller classes) and because they’re writing classes. Besides the fact that writing classes tend to be activity-heavy and lecture-light (and laptops and cell phones are important tools for writing), I think our field is a lot more accepting of these technologies and frankly a lot more progressive in its pedagogy: not “sage on the stage” but “guide on the side,” the student-centered classroom, that sort of thing. I also was able to recruit a lot of potential interview subjects from this survey, though I think I’m going to hold off on putting together that part of the project until the new year.
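(A side note on that parenthetical about statistical significance: I’m no statistician, but there is at least one standard way to compare proportions from two independent surveys– a two-proportion z-test. To be clear, the counts below are made up for illustration, not numbers from my survey or from Neiterman and Zaza, and the helper function is just a sketch using Python’s standard library.)

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions.

    x1/n1 and x2/n2 are successes/sample-size for each group.
    """
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under the null hypothesis that p1 == p2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: say 101 of my 168 respondents (about 60%) found
# laptops acceptable in class, versus 90 of 200 (45%) in another survey.
z, p = two_proportion_ztest(101, 168, 90, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With made-up counts like these, the test says a 60% versus 45% split across samples of this size would be unlikely to happen by chance (p well under 0.05). Whether my survey and theirs sampled comparable enough populations to justify any such test is, of course, a different and harder question.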

And I've been thinking again about note-taking, though not so much as it relates to technology. As I've mentioned here before, there are two basic reasons in the scholarship for banning or limiting the use of devices (particularly laptops) in college classrooms, especially lecture halls. One reason is the problem of distraction and multitasking, and I do think there is some legitimacy to that. The other reason (as discussed in the widely cited Mueller and Oppenheimer study) is that it's better to take notes in longhand than on a laptop. I think that's complete bullshit, so I kind of set that aside.

But now I'm starting to rethink/reconsider the significance of note-taking again because of the presidential impeachment hearings. Those hearings featured a series of poised, intelligent, and dedicated diplomats and career federal professionals explaining how Trump essentially tried to blackmail the Ukrainian government into investigating Biden. One of the key things that made these people so credible was their repeated reference to the detailed notes they took when they witnessed this impeachable behavior. In contrast, EU ambassador Gordon "The Problem" Sondland seemed oddly proud that he has never been a note-taker. As a result, a lot of Sondland's testimony included him saying stuff like "I don't remember the details because I don't take notes, but if it was in that person's notes, I have no reason to doubt it." I thought this detail (and other things about his testimony) made Sondland look at once like an extremely credible witness to events and like a complete boob.

Anyway, this made me wonder: what exactly is the definition of "good note-taking"? How do we know someone takes good (or bad) notes, and what's the protocol for teaching/training people to take good notes?

The claim that taking notes by hand beats taking them on a laptop is shaky and (IMO) quite effectively refuted by the Kayla Morehead, John Dunlosky, and Katherine A. Rawson study, "How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014)." But while that study does poke at the concept of note-taking a bit (for example, they have one group of participants not take notes at all and just pay close attention to the TED talk lecture), everything else I've read seems to take note-taking as a given. There's broad consensus in the psych/education scholarship that taking notes is an effective way to recall information later on, be it for a test or for testimony before Congress, and there also seems to be consensus that trying to write everything down is a bad note-taking strategy. But I have yet to read anything about a method or criteria for evaluating the quality of notes, nor have I read anything about a pedagogy or protocol for teaching people how to take good notes.

I find that odd. I mean, if the basic claim that Mueller and Oppenheimer (and similar studies) are trying to make is that students take "better notes" by hand than by laptop, and if the basic claim that Morehead, Dunlosky, and Rawson (and similar studies) are trying to make is that students don't take "better notes" by hand than by laptop, shouldn't there be at least some minimal definition of "better notes"? Without that definition, can we really say that study participants who scored higher on the test measuring success did so because they took "better notes" rather than because of some other factor (e.g., they were smarter, they paid better attention, they had more knowledge about the subject of the lecture before the test, etc.)?

I posted about this on Facebook and tagged a few friends who work for the federal government, asking if there was any official protocol or procedure for taking notes; the answers I got back were kind of vague. On the way back home at one point, Annette and I got to talking about how we were taught to take notes. I don't remember any sort of instruction in school, though Annette said she remembered a teacher who actually collected and, I guess, graded student notes. There are of course some resources out there (here's what looks like a helpful collection of links and ideas from the blog Cult of Pedagogy), but most of these strategies seem geared more toward a tutoring or learning center setting. Plus, a pedagogy for teaching note-taking strategies is not the same thing as research, and it certainly is not the same thing as a method for measuring the effectiveness of notes.

But clearly, I digress.

So my plan for what's next is to do even more reading (I'm working my way back through the works cited of a couple of the key articles I've been working with so far), some sifting through and writing about the results, and eventually some interviews, probably via email. And maybe I'll take up, as a related project, more on this question of note-taking methods. But first, there's Morocco and next semester.

It’s been an interesting research fellowship semester for me. I’ve been quite fortunate in that in the last five years I’ve had two research fellowships and a one semester sabbatical. Those previous releases from teaching involved the specific project of my book about MOOCs, More Than A Moment (on sale now!), and thus had very specific goals/outcomes. My sabbatical was mostly about conducting interviews and securing a book contract; my last FRF was all about finishing the book.

In contrast, this project/semester was much less guided, a lot more "wondering" (I think blog posts like this one demonstrate that). It's been a surprisingly useful time for me as a scholar, especially at a point in my career, after the intensity of getting the MOOC book done, when I was feeling pretty "done" with scholarship. I've got to give a lot of credit to EMU for the opportunity, and I hope they keep funding these fellowships, too.