No, an AI could not pass “freshman year” in college

I am fond of the phrase/quote/mantra/cliché “Ninety percent of success in life is just showing up,” which is usually attributed to Woody Allen. I don’t know if Woody was “the first” person to make this observation (probably not, and I’d prefer if it was someone else), but in my experience, this is very true.

This is why AIs can’t actually pass a college course or their freshman year or law school or whatever: they can’t show up. And it’s going to stay that way, at least until we’re dealing with advanced AI robots.

This is on my mind because my friend and colleague in the field, Seth Kahn, posted the other day on Facebook about this recent article from The Chronicle of Higher Education by Maya Bodnick, “GPT-4 Can Already Pass Freshman Year at Harvard.” (Bodnick is an undergraduate student at Harvard.) It is yet another piece claiming that the AI is smart enough to do just fine on its own at one of the most prestigious universities in the world.

I agreed with all the other comments I saw on Seth’s post. In my comment (which I wrote before I actually read this CHE article), I repeated three points I’ve written about here or on social media before. First, ChatGPT and similar AIs can’t evaluate and cite academic research at even the modest levels I expect in a first year writing class. Second, while OpenAI proudly lists all the “simulated exams” where ChatGPT has excelled (LSAT, SAT, GRE, AP Art History, etc.), you have to click the “show more exams” button on that page to see that none of the versions of their AI has managed better than a “2” on the AP English Language and Composition exam (or, for that matter, on AP English Literature and Composition). It takes a “3” on this exam to get any credit at EMU, and probably a “4” at a lot of other universities.

Third, I think mainstream media and all the rest of us really need to question these claims of AIs passing whatever tests and classes and whatnot much MUCH more carefully than most of us have to date. What I was thinking about when I made that last comment was another article published in CHE in early July, “A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ,” by Tom Bartlett. In this article, Bartlett discusses a study (which I don’t completely understand because it’s too much math and details) conducted by three MIT students (class of 2024) who researched the claim that an AI could “ace” MIT classes. The students determined this was bullshit. What were the students’ findings (at least the ones I could understand)? In some of the classes where the AI supposedly had a perfect score, the exams included unsolvable problems, so a perfect score wasn’t even possible. In other examples, the exam questions the AI supposedly answered correctly did not provide enough information for that to be possible either. The students posted their results online, and at least some of the MIT professors who originally made the claims agreed and backtracked.

But then I read this Bodnick article, and holy-moly, this is even more bullshitty than I originally thought. Let me quote at length Bodnick describing her “methodology”:

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received:

  • Microeconomics and Macroeconomics (Jason Furman and David Laibson): Explain an economic concept creatively. (300-500 words for Micro and 800-1000 for Macro). Grade: A-
  • Latin American Politics (Steven Levitsky): What has caused the many presidential crises in Latin America in recent decades? (5-7 pages) Grade: B-
  • The American Presidency (Roger Porter): Pick a modern president and identify his three greatest successes and three greatest failures. (6-8 pages) Grade: A
  • Conflict Resolution (Daniel Shapiro): Describe a conflict in your life and give recommendations for how to negotiate it. (7-9 pages). Grade: A
  • Intermediate Spanish (Adriana Gutiérrez): Write a letter to activist Rigoberta Menchú. (550-600 words) Grade: B
  • Freshman Seminar on Proust (Virginie Greene): Close read a passage from In Search of Lost Time. (3-4 pages) Grade: Pass

I told these instructors that each essay might have been written by me or the AI in order to minimize response bias, although in fact they were all written by GPT-4, the recently updated version of the chatbot from OpenAI.

In order to generate these essays, I inputted the prompts (which were much more detailed than the summaries above) word for word into GPT-4. I submitted exactly the text GPT-4 produced, except that I asked the AI to expand on a couple of its ideas and sequenced its responses in order to meet the word count (GPT-4 only writes about 750 words at a time). Finally, I told the professors and TAs to grade these essays normally, except to ignore citations, which I didn’t include.

Not only can GPT-4 pass a typical social science and humanities-focused freshman year at Harvard, but it can get pretty good grades. As shown in the list above, GPT-4 got all A’s and B’s and one Pass.

JFC. Okay, let’s just think about this for a second:

  • We’re talking about three “essays” that are less than 1,000 words and another three that are slightly longer, and based on this work alone, GPT-4 “passed” a year of college at Harvard. That’s all it takes? Really– really?! That’s it?
  • I would like to know more about what Bodnick means when she says that the writing prompts were “much more detailed than the summaries above” because those details matter a lot. But as summarized, these are terrible assignments. They aren’t connected to the context of the class or anything else. It would be easy to try to answer any of these questions with a minimal amount of Google searching and some educated guessing. I might be going out on a limb here, but I don’t think most writing assignments at Harvard or any other college– even badly assigned ones– are as simplistic as these.
  • It wasn’t just ChatGPT: she had to do some significant editing to put together ChatGPT’s short responses into longer essays. I don’t think the AI could have done that on its own. Unless it hired a tutor.
  • Asking instructors to not pay any attention to the lack of citation (and, I am going to guess, the need for sources to back up claims in the writing) is giving the AI way WAAAAYYY too much credit, especially since ChatGPT (and other AIs) usually make shit up– er, “hallucinate”– when citing evidence. I’m going to guess that even at Harvard, handing in hallucinations would result in a failing grade. And if the assignment required properly cited sources and the student didn’t do that, then that student would also probably fail.
  • It’s interesting (and Bodnick points this out too) that the texts that received the lowest grades are ones that ask students to “analyze” or to provide their opinions/thoughts, as opposed to assignments that were asking for an “information dump.” Again, I’m going to guess that, even at Harvard, there is a higher value placed on students demonstrating with their writing that they thought about something.

I could go on, but you get the idea. This article is nonsense. It proves literally nothing.

But I also want to return to where I started, the idea that a lot of what it means to succeed in anything (perhaps especially education) is showing up and doing the work. Because after what seems like the zillionth click-bait headline about how ChatGPT could graduate from college or be a lawyer or whatever because it passed a test (supposedly), it finally dawned on me what has been bothering me the most about these kinds of articles: that’s just not how it works! To be a college graduate or a lawyer or damn near anything else takes more than passing a test; it takes the work of showing up.

Granted, there has been a lot more interest and willingness in the last few decades to consider “life experience” credit as part of degrees, and some of these places are kind of legitimate institutions– Southern New Hampshire and the University of Phoenix immediately come to mind. But “life experience” credit is still considered mostly bullshit and the approach taken by a whole lot of diploma mills, and real online universities (like SNHU and Phoenix) still require students to mostly take actual courses, and that requires doing more than writing a couple papers and/or taking a couple of tests.

And sure, it is possible to become a lawyer in California, Vermont, Virginia and Washington without a law degree, and it is also possible to become a lawyer in New York or Maine with just a couple years of law school or an internship. But even these states still require some kind of experience with a law office, most states do require attorneys to have law degrees, and it’s not exactly easy to pass the bar without the experience you get from earning a law degree. Ask Kim Kardashian. 

Bodnick did not ask any of the faculty who evaluated her AI writing examples if it would be possible for a student to pass that professor’s class based solely on this writing sample because she already knew the answer: of course not.

Part of the grade in the courses I teach is based on attendance, participation in the class discussions and peer review, short responses to readings, and so forth. I think this is pretty standard– at least in the humanities. So if some eager ChatGPT enthusiast came to one of my classes– especially one like first year writing, where I post all of the assignments at the beginning of the semester (mainly because I’ve taught this course at least 100 times at this point)– and said to me “Hey Krause, I finished and handed in all the assignments! Does that mean I get an A and go home now?” Um, NO! THAT IS NOT HOW IT WORKS! And of course anyone familiar with how school works knows this.

Oh, and before anyone says “yeah, but what about in an online class?” Same thing! Most of the folks I know who teach online have a structure where students have to regularly participate and interact with assignments, discussions, and so forth. My attendance and participation policies for online courses are only slightly different from my f2f courses.

So please, CHE and MSM in general: stop. Just stop. ChatGPT can (sort of) pass a lot of tests and classes (with A LOT of prompting from the researchers who really really want ChatGPT to pass), but until that AI robot walks/rolls into a class or sets up its profile on Canvas all on its own, it can’t go to college.

The Problem is Not the AI

The other day, I heard the opening of this episode of the NPR call-in show 1A, “Know It All: ChatGPT In the Classroom.” It opened with this recorded comment from a listener named Kate:

“I teach freshman English at a local university, and three of my students turned in chatbot papers written this past week. I spent my entire weekend trying to confirm they were chatbot written, then trying to figure out how to confront them, to turn them in as plagiarists, because that is what they are, and how I’m going to penalize their grade. This is not pleasant, and this is not a good temptation. These young men’s academic careers now hang in the balance because now they’ve been caught cheating.”

Now, I didn’t listen to the show for long beyond this opener (I was driving around running errands), and based on what’s available on the website, the discussion also included information about incorporating ChatGPT into teaching. Also, I don’t want to be too hard on poor Kate; she’s obviously really flustered, and I am guessing there were a lot of teachers listening to Kate’s story who could very personally relate.

But look, the problem is not the AI.

Perhaps Kate was teaching a literature class and not a composition and rhetoric class, but let’s assume whatever “freshman English” class she was teaching involved a lot of writing assignments. As I mentioned in the last post I had about AI and teaching with GPT-3 back in December, there is a difference between teaching writing and assigning writing. This is especially important in classes where the goal is to help students become better at the kind of writing skills they’ll need in other classes and “in life” in general.

Teaching writing means a series of assignments that build on each other, that involve brainstorming and prewriting activities, and that involve activities like peer reviews, discussions of revision, reflection from students on the process, and so forth. I require students in my first year comp/rhet classes to “show their work” through drafts, in a way similar to what they’d be expected to do in an Algebra or Calculus course. It’s not just the final answer that counts. In contrast, assigning writing is when teachers give an assignment (often a quite formulaic one, like write a 5 paragraph essay about ‘x’) with no opportunities to talk about getting started, no consideration of audience or purpose, no interaction with the other students who are trying to do the same assignment, and no opportunity to revise or reflect.

While obviously more time-consuming and labor-intensive, teaching writing has two enormous advantages over only assigning writing. First, we know it “works” in that this approach improves student writing– or at least we know it works better than only assigning writing and hoping for the best. We know this because people in my field have been studying this for decades, despite the fact that there are still a lot of people just assigning writing, like Kate. Second, teaching writing makes it extremely difficult to cheat in the way Kate’s students have cheated– or maybe cheated. When I talk to my students about cheating and plagiarism, I always ask “why do you think I don’t worry much about you doing that in this class?” Their answer typically is “because we have to turn in all this other stuff too” and “because it would be too much work,” though I also like to believe that because of the way the assignments are structured, students become interested in their own writing in a way that makes cheating seem silly.

Let me just note that what I’m describing has been the conventional wisdom among specialists in composition and rhetoric for at least the last 30 (and probably more like 50) years. None of this is even remotely controversial in the field, nor is any of this “new.”

But back to Kate: certain that these three students turned in “chatbot papers,” she spent the “entire weekend” working to prove that these students committed the crime of plagiarism and deserved to be punished. She thinks this is a remarkably serious offense– their “academic careers now hang in the balance”– but I don’t think she’s going through all this because of some sort of abstract and academic ideal. No, this is personal. In her mind, these students did this to her and she’s going to punish them. This is beyond a sense of justice. She’s doing this to get even.

I get that feeling, that sense that her students betrayed her. But there’s no point in making teaching about “getting even” or “winning” because as the teacher, you create the game and the rules, you are the best player and the referee, and you always win. Getting even with students is like getting even with a toddler.

Anyway, let’s just assume for a moment that Kate’s suspicions are correct and these three students handed in essays created entirely by ChatGPT. First off, anyone who teaches classes like “Freshman English” should not need an entire weekend or any special software to figure out if these essays were written by an AI. Human writers– at all levels, but especially comparatively inexperienced human writers– do not compose the kind of uniform, grammatically correct, and robotically plodding prose generated by ChatGPT. Every time I see an article with a passage of text that asks “was this written by a robot or a student,” I always guess right– well, almost always I guess right.

Second, if Kate did spend her weekend trying to find “the original” source ChatGPT used to create these essays, she certainly came up empty handed. That was the old school way of catching plagiarism cheats: you look for the original source the student plagiarized and confront the student with it, court room drama style. But ChatGPT (and other AI tools) do not “copy” from other sources; rather, the AI creates original text every time. That’s why there have been several different articles crediting an AI as a “co-author.”

Instead of wasting a weekend, what Kate should have done is call each of these students into her office or take them aside one by one in a conference and ask them about their essays. If the students cheated, they would not be able to answer basic questions about what they handed in, and 99 times out of 100, the confronted cheating student will confess.

Because here’s the thing: despite all the alarm out there that all students are cheating constantly, my experience has been that the vast majority do not cheat like this, and they don’t want to cheat like this. Oh sure, students will sometimes “cut corners” by looking over at someone else’s answers on an exam, or maybe by adding a paragraph or two from something without citing it. But in my experience, the kind of over-the-top sort of cheating Kate is worried about is extremely rare. Most students want to do the right thing by doing the work, trying to learn something, and by trying their best– plus students don’t want to get in trouble for cheating, either.

Further, the kinds of students who do try to blatantly plagiarize are not “criminal masterminds.” Far from it. Rather, students blatantly plagiarize when they are failing and desperate, and they are certainly not thinking of their “academic careers.” (And as a tangent: seems to me Kate might be overestimating the importance of her “Freshman English” class a smidge).

But here’s the other issue: what if Kate actually talked to these students, and what if it turned out they did not realize using ChatGPT was cheating, and/or they used ChatGPT in a way that wasn’t significantly different from getting some help from the writing center or a friend? What do you do then? Because– and again, I wrote about this in December– when I asked students to use GPT-3 (OpenAI’s software before ChatGPT) to write an essay and to then reflect on that process, a lot of them described the software as being a brainstorming tool, sort of like a “coach,” and not a lot different from getting help from others in peer review or from a visit to the writing center.

So like I said, I don’t want to be too hard on Kate. I know that there are a lot of teachers who are similarly freaked out about students using AI to cheat, and I’m not trying to suggest that there is nothing to worry about either. I think a lot of what is being predicted as the “next big thing” with AI is either a lot further off in the future than we might think, or it is in the same category as other famous “just around the corner” technologies like flying cars. But no question that this technology is going to continue to improve, and there’s also no question that it’s not going away. So for the Kates out there: instead of spending your weekend on the impossible task of proving that those students cheated, why not spend a little of that time playing around with ChatGPT and seeing what you find out?

Online Teaching and ‘The New Normal’: A Survey of Faculty in the Midst of an Unprecedented ‘Natural Experiment’ (or, my presentation for CWCON2022)

This blog entry/page is my online/on demand presentation for the 2022 Computers and Writing Conference at East Carolina University.

I’m disappointed that I’m not at this year’s Computers and Writing Conference in person. I haven’t been to C&W since 2018, and of course there was no conference in 2020 or 2021. So after the CCCCs prematurely pulled the plug on the face to face conference a few months ago, I was looking forward to the road trip to Greenville. Alas, my own schedule conflicts and life mean that I’ll have to participate in the online/on-demand format this time around. I don’t know if that means anyone (other than me) will actually read this, so as much as anything else, this presentation/blog post– which is too long, full of not completely substantiated/documented claims, speculative, fuzzy, and so forth– is a bit of note taking and freewriting meant mostly for myself as I think about how to present this research in future articles, maybe even a book. If a few conference goers and my own blog readers find this interesting, all the better.

Because of the nature of these on-demand/online presentations generally and also because of the perhaps too long/freewriting feel of what I’m getting at here, let me start with a few “too long, didn’t read” bullet points. I’m not even going to write anything else here to explain this, but it might help you decide if it’s worth continuing to read. (Hopefully it is…)

The research I’m continuing is a project I have been calling “Online Teaching and ‘The New Normal,’” which I started in early fall 2020. Back then, I wrote a brief article and was an invited speaker at an online conference held by a group in Belgium– this after someone there saw a post I had written about Zoom on my blog, which is one of the reasons why I keep blogging after all these years. I gave a presentation (that got shuffled away into the “on demand” format) at the most recent CCCCs where I introduced some of my broad assumptions about teaching online, especially about the affordances of asynchronously versus synchronously, and where I offered a few highlights of the survey results. I also wrote an article-slash-website for Computers and Composition Online which goes into much more detail about the results of the survey. That piece is in progress, though it will be available soon. If you have the time and/or interest, I’d encourage you to check out the links to those pieces as well.

I started this project in early fall 2020 for two reasons. First, there was the “natural experiment” created by Covid. Numerous studies have claimed online courses can be just as effective as face to face courses, but one of the main criticisms of these studies is the problem of self selection: that is, because students and teachers engage in the format voluntarily, it’s not possible to have subjects randomly assigned to either a face to face course or an online course, and that kind of randomized study is the gold standard in the social sciences. The natural experiment of Covid enabled a version of that study because millions of college students and instructors had no choice but to take and teach their classes online. 

Second, I was surprised by the large number of my colleagues around the country who said on social media and other platforms that they were going to teach their online classes synchronously via a platform like Zoom rather than asynchronously. I thought this choice– made by at least 60% of college faculty across the board during the 2020-21 school year– was weird. 

Based both on my own experiences teaching some of my classes online since 2005 and the modest amount of research comparing synchronous and asynchronous modes for online courses, I think that asynchronous online courses are probably more effective than synchronous online courses. But that’s kind of beside the point, actually. The main reason why at least 90% of online courses prior to Covid were taught asynchronously is scheduling and the imperative of providing access. Prior to Covid, the primary audience for online courses and programs was non-traditional students. Ever since the days of correspondence courses, the goal of distance ed has been to help “distanced” students– that is, people who live far away from the brick and mortar campus– but also people who are past the traditional undergraduate age, who have “adult” obligations like mortgages and dependents and careers, and people who are returning to college either to finish the degree they started some years before, or to retool and retrain after having finished a degree earlier. Asynchronous online courses are much easier to fit into busy and changing life-slash-work schedules than synchronous courses– either online or f2f. Sure, traditional and on-campus students often take asynchronous courses for similar scheduling reasons, but again, prior to Covid, non-traditional students were the primary audience for online courses. In fact, most institutions that primarily serve traditional students– that is, 18-22 year olds right out of high school who live on or near campus and who attend college full-time (and perhaps work part-time to pay some of the bills)– did not offer many online courses, nor was there much of a demand for online courses from students at these institutions. I’ll come back to this point later.

I conducted my IRB approved survey from December 2020 to June 2021. The survey was limited to college level instructors in the U.S. who taught at least one class completely online (that is, not in some hybrid format that included f2f instruction) during the 2020-21 school year. Using a very crude snowball sampling method, I distributed the survey via social media and urged participants to share the survey with others. I had 104 participants complete this survey, and while I was hoping to recruit participants from a wide variety of disciplines, most were from a discipline related to English studies. This survey was also my tool for recruiting interview subjects: the last question of the survey asked if participants would be interested in a follow-up interview, and 75 indicated that they would be.

One of the findings from the survey that I discussed in my CCCCs talk was that those survey participants who had no previous experience teaching online were over three times more likely to have elected to teach their online classes synchronously during Covid than those who had previous online teaching experience. As this pie chart shows, almost two-thirds of faculty with no prior experience teaching online elected to teach synchronously, and only about 12% of survey participants who had no previous experience teaching online elected to teach asynchronously.

In contrast, about a third of faculty who had had previous online experience elected to teach online asynchronously and less than 18% decided to teach online synchronously. Interestingly, the amount of previous experience with teaching online didn’t seem to make much difference– that is, those who said that prior to Covid they had taught over 20 sections online were about as likely to have taught asynchronously or to use both synchronous and asynchronous approaches as those who had only taught 1 to 5 sections online prior to the 2020-21 school year.

For the forthcoming Computers and Composition Online article, I go into more detail about the results of the survey along with incorporating some of the initial impressions and feedback I’ve received from the surveys to date. 

But for the rest of this presentation, I’ll focus on the interviews I have been conducting. I started interviewing participants in January 2022, and these interviews are still in progress. Since this is the kind of conference where people do often care about the technical details: I’m using Zoom to record the interviews and then software called Otter.ai to create a transcription. Otter.ai isn’t free– in fact, at $13 a month for the month-to-month, unlimited-usage plan, it isn’t especially cheap– and there are of course other options for doing this. But this is the best and easiest approach I’ve found so far. Most of the interviews I’ve conducted so far run between 45 and 90 minutes, and what’s amazing is Otter.ai can change the Zoom audio file into a transcript that’s about 85% correct in less than 15 minutes. Again, nerdy and technical details, but for changing audio recordings into mostly correct transcripts, I cannot say enough good things about it.
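(One more nerdy aside: this audio-to-transcript step can also be scripted rather than bought as a subscription. Here’s a minimal sketch using the open-source Whisper speech-recognition model as a stand-in for Otter.ai– the file names are hypothetical, and I’m not making any claims about how its speed or accuracy compares.)

```python
# Minimal sketch: turn a recorded Zoom interview into a rough transcript with
# the open-source Whisper model (pip install openai-whisper; ffmpeg required).
# "interview_01.m4a" is a hypothetical file name.
import whisper

model = whisper.load_model("base")  # larger models are more accurate but slower
result = model.transcribe("interview_01.m4a")

# Save the rough transcript for hand-correction later; like Otter.ai's output,
# it will be mostly right but will still need cleanup.
with open("interview_01_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```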

To date, I’ve conducted 24 interviews, and I am guessing that I will be able to conduct between 15 and 30 more, depending on how many of the folks who originally volunteered to be interviewed are still willing.

This means I already have about 240,000 words of transcripts, and I have to say I am at something of a loss as to what to “do” with all of this text in terms of coding, analysis, and the like. The sorts of advice and processes offered by books like Geisler’s and Swarts’ Coding Streams of Language and Saldaña’s The Coding Manual for Qualitative Researchers seem more fitting for analyzing sets of texts in different genres– say an archive for an organization that consists of a mix of memos, emails, newsletters, academic essays, reports, etc.– or a collection of ethnographic observations. So for me, this feels less like collecting a lot of qualitative data meant to be coded and analyzed based on particular word choices or sentence structures or what-have-you, and more like good old-fashioned journalism. If I had been at this conference in person or if there were a more interactive component to this online presentation, this is something I would have wanted to talk more about with the kind of scholars and colleagues involved with computers and writing because I can certainly use some thoughts on how to handle my growing collection of interviews. In any event, my current focus– probably through the end of this summer– is to keep collecting the interviews from willing participants and to figure out what to do with all of this transcript data later. Perhaps that’s what I can talk about at the computers and writing conference at UC Davis next year.
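(In the meantime, even a crude first pass over that pile of text is easy enough to script– simple keyword spotting, nothing like the systematic coding Geisler and Swarts or Saldaña describe. A minimal sketch, where the “transcripts” folder and the term list are hypothetical:)

```python
# A crude first pass over interview transcripts: count a few terms of interest
# and show a little surrounding context. This is keyword spotting, not
# systematic qualitative coding; the folder and term list are hypothetical.
import glob
import re

TERMS = ["synchronous", "asynchronous", "zoom", "flexibility"]

for path in sorted(glob.glob("transcripts/*.txt")):
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    print(path)
    for term in TERMS:
        # \b keeps "synchronous" from also matching inside "asynchronous"
        hits = [m.start() for m in re.finditer(rf"\b{re.escape(term)}\b", text)]
        print(f"  {term}: {len(hits)} mention(s)")
        if hits:  # show some context around the first hit
            start = max(0, hits[0] - 60)
            snippet = text[start:hits[0] + 60].replace("\n", " ")
            print(f"    ...{snippet}...")
```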

But just to give a glimpse of what I’ve found so far, I thought I’d focus on answers to two of the dozen or so questions I have been sure to ask each interviewee:

  • Why did you decide to teach synchronously (or asynchronously)?
  • Knowing what you know now and after your experience teaching online during the 2020-21 school year, would you teach online again– voluntarily– and would you prefer to do it synchronously or asynchronously?

In my survey, participants had to answer a close-ended question to indicate if they were teaching online synchronously, asynchronously, or some classes synchronously and some asynchronously. There was no “other” option for supplying a different answer. This essentially divided survey participants into two groups because I counted those who were teaching in both formats as synchronous for the other questions on the survey. Also, I excluded from the survey faculty who were teaching with a mix of online and face to face modes because I wanted to keep this as simple as possible. But early on, the interviews made it clear that the mix of modes at most universities was far more complex. One interviewee said that prior to Covid, the choices faculty had for teaching (and the choices students saw in the catalog) were simply online or on campus. Beginning in Fall 2020 though, faculty could choose “fully online asynchronous, fully online synchronous, high flex synchronous, so (the instructor) is stand(ing) in the classroom and everyone else is in WebEx or Teams, and fully in the classroom and no option… you need to be in the classroom.”

I was also surprised at the extent to which most of my interviewees reported that their institution provided faculty a great deal of autonomy in selecting the teaching mode that worked best for their circumstances. So far, I have only interviewed two or three people who said they had no choice but to teach in the mode assigned by the institution. A number of folks said that their institution strongly encouraged faculty to teach synchronously to replicate the f2f experience, but even under those circumstances, it seems most faculty had a fair amount of flexibility to teach in a mode that best fit into the rest of their lives. As one person, a non-tenure-track but full-time instructor, said, “basically, the university said ‘we don’t care that much, especially if you’re… a parent and your kids aren’t going to school and you have to physically be home.’” This person’s impression was that while most of their colleagues were teaching synchronous courses with Zoom, there were “a lot of individual class sessions that were moved asynchronous, and maybe even a few classes that essentially went asynchronous.”

A number of interviewees mentioned that this level of flexibility offered to faculty by their institutions was unusual; one interviewee described the flexibility offered to faculty about their preferred teaching mode as a “rare win” against the administration. After all, during the summer of 2020, when a lot of the plans for going forward with the next school year were up in the air, there were a lot of rumors at my institution (and, judging from Facebook, other institutions as well) that individual faculty who wanted to continue to teach online in Fall of 2020 because of the risks of Covid were going to have to go through a process involving the Americans with Disabilities Act. So the fact that just about everyone I talked to was allowed to teach online and in the mode that they preferred was both surprising and refreshing.

As to why faculty elected to teach in one mode or the other: I think there were basically three reasons. First, as the quote just above suggests, many faculty said concerns about how Covid was impacting their own home lives shaped the decision to teach either synchronously or asynchronously. Though again, most of my survey and interview subjects who hadn’t taught online before taught synchronously, and, not surprisingly, some of those interviewees told stories about how their pets, children, and other family members would become regular visitors in the class Zoom sessions. In any event, the risks and dangers of Covid– especially in Fall 2020 and early in 2021, when the data on the risks of transmission in f2f classrooms was unclear and before there was a vaccine– were of course the reason why so many of us were forced into online classes during the pandemic. And while it did indeed create a natural experiment for testing the effectiveness of online courses, I wonder if Covid ended up being such an enormous factor in all of our lives that it essentially skewed or trumped the experiences of teaching online. After all, it is kind of hard for teachers and students alike to reflect too carefully on the advantages and disadvantages of online learning when the issue dominating their lives was a virus that was sickening and killing millions of people and disrupting pretty much every aspect of modern society as we know it.

Second– and perhaps this is just obvious– people did what they already knew how to do, or they did what they thought would be the path of least resistance. Most faculty who decided to teach asynchronously had previous experience teaching asynchronously– or they were already teaching online asynchronously. As one interviewee put it, “Spring 2020, I taught all three classes online. And then COVID showed up, and I was already set up for that because, I was like ‘okay, I’m already teaching online,’ and I’m already teaching asynchronously, so…” That was the situation I was in when we first went into Covid lockdown in March 2020– though in my experience, that didn’t mean that Covid was a non-factor in those already online classes.

Most faculty who decided to teach synchronously– particularly those who had not taught online before– thought teaching synchronously via Zoom would require the least amount of work to adjust from the face to face format, though few interviewees put it quite so directly. I spoke with one Communications professor who, just prior to Covid, was part of the launch of an online graduate program at her institution, so she had already spent some time thinking about and developing online courses. She also had online teaching experience from a previous position, but at her current institution, she said “I saw a lot of senior faculty”– and she was careful in explaining she meant faculty at her new institution who weren’t necessarily a lot older but who had not taught online previously– “try to take the classroom and put it online, and that doesn’t work. Because online is a different medium and it requires different teaching practices and approaches.” She went on to explain that her current institution sees itself as a “residential university” and the online graduate courses were “targeted towards veterans, working adults, that kind of thing.”

I think what this interviewee was implying is that it did not occur to her colleagues who decided to teach synchronously to do it any other way. As a different interviewee put it, inserting a lot of pauses along the way during our discussion, “I opted for the synchronous, just because… I thought it would be more… I don’t know, better suited to my own comfort levels, I suppose.” Though to be fair, this interviewee had previously taught online asynchronously (albeit some time ago), and he said “what I anticipated– wrongly I’ll add– that what doing it synchronously would allow me to do is set boundaries on it.” This is certainly a problem since teaching asynchronously can easily expand such that it feels like you’re teaching 24/7. There are ways to address those problems, but that’s a different presentation.

Now, a lot of my interviewees altered their teaching modes as the online experience went on. Many– I might even go so far as saying the majority– of those who started out teaching 100% synchronously with Zoom and holding these classes for the same amount of time as they would a f2f version of the same class did make adjustments. A lot of my interviewees, particularly those who teach things like first year writing, shifted activities like peer review into asynchronous modes; others described the adjustments they made to being like a “flipped classroom” where the synchronous time was devoted to specific student questions and problems with the assigned work and the other materials (videos of lectures, readings, and so forth) were all shifted to asynchronous delivery. And for at least one interviewee, the experience of teaching synchronously drove her to all asynch:

“So, my first go around with online teaching was really what we call here remote teaching. It’s what everybody was kind of forced into, and I chose to do synchronous, I guess, because I didn’t, I hadn’t really thought about the differences. I did that for one quarter. And I realized, this is terrible. I, I don’t like this, but I can see the potential for a really awesome online course, so now I only teach asynchronous and I love it.”

The third reason for the choice between synchronous versus asynchronous is what I’d describe as “for the students,” though what that meant depends entirely on the type of students the interviewee was already working with. For example, here’s a quote from a faculty member who taught a large lecture class in communications at a regional university that puts a high priority on the residential experience for undergraduates:

“A lot of our students were asking for the synchronous class. I mean… when I look back at my student feedback, people that I literally wouldn’t know if they walked in the room because all I had (from them) was a black (Zoom) screen with their name on it, (these students said) ‘really enjoyed your enthusiasm, it made it easy to get out of bed every morning,’ you know, those kind of things. So I think they were wanting punctuation– just not an endless sea of due dates, but an actual event to go to.”

Of course, the faculty who had already been teaching online and were teaching asynchronously said something similar: that is, they explained that one of the reasons why they kept teaching asynchronously was because they had students all over the world and it was not possible to find a time where everyone could meet synchronously, that the students were working adults who needed the flexibility of the asynchronous format, and so forth. I did have an interviewee– one who was experienced at teaching online asynchronously– comment on the challenge students had in adjusting to what was for them a new format:

“What I found the following semester (that is, fall 2020 and after the emergency remote teaching of spring 2020) was I was getting a lot of students in my class who probably wouldn’t have picked online, or chosen it as a way of learning. This has continued. I’ve found that the students I’m getting now are not as comfortable online as the students I was getting before Covid…. It’s not that they’re not comfortable with technology…. But they’re not comfortable interacting in an online way, maybe especially in an asynchronous way… so I had some struggles with that last year that were really weird. I had the best class I’ve had online, probably ever. And the other (section) was absolutely the worst, but I run them with the same assignments and stuff.”

Let me turn to the second question I wanted to discuss here: “Knowing what you know now and after your experience teaching online during the 2020-21 school year, would you teach online again– voluntarily– and would you prefer to do it synchronously or asynchronously?” It’s an interesting example of how the raw survey results become more nuanced as a result of both parsing those results a bit and conducting these interviews. Taken as a whole, about 58% of all respondents agreed or strongly agreed with the statement “In the future and after the pandemic, I am interested in continuing to teach at least some of my courses online.” My sense– and it is mostly just a sense– is that prior to Covid, a much smaller percentage of faculty would have had any interest in online teaching. But clearly, Covid has changed some minds. As one interviewee said about talking to faculty new to online teaching at her institution:

“A lot of them said, ‘you know,  this isn’t as onerous as I thought, this isn’t as challenging as I thought.’ There is one faculty member who started teaching college in 1975, so she’s been around for a while. And she picked it up and she’s like ‘You know, it took a little time to get used to everything, but I like it. I can do the same things, I can reach students and feel comfortable.’ And in some ways, that’s good because it will prolong some people’s careers. And in some ways, it’s not good because it will prolong some people’s careers. It’s a double-edged sword, right?”

My interviewee who I quoted earlier about making the switch from synchronous to asynchronous was certainly sold. She said that she was nearing a point in her career “where I thought I’m just gonna quit teaching and find another job, I don’t know, go back to trade school and become a plumber.” Now, she is an enthusiastic advocate of online courses at her institution, describing herself as a “convert.”

“I use that word intentionally. I gave a presentation to some graduate students in our teacher training class, I was invited as a guest speaker, and I had big emoji that said ‘hallelujah’ and there were doves, and I’m like this is literally me with online teaching. The scales have fallen from my eyes, I am reborn. I mean, I was raised Catholic so I’m probably relying too much on these religious metaphors, but that’s how it feels. It really feels like a rebirth as an instructor.”

Needless to say, not everyone is quite that enthusiastic about the prospect of teaching online again. This chart, which is part of the article I’m writing for Computers and Composition Online, indicates the different answers to this question based on previous experience. While almost 70% of faculty who had online teaching experience prior to Covid strongly agreed or agreed about teaching online again after the pandemic, only about 40% of faculty with no online teaching experience prior to Covid felt the same way. If anything, I think this chart indicates mostly ambivalent feelings among folks new to online teaching during Covid about teaching online again: while more positive than negative, it’s worth noting that most faculty who had no prior online teaching experience neither agreed nor disagreed about wanting to teach online in the future.

For example, here are a couple of responses that I think suggest that ambivalence: 

“Um, I would do it again… even though I would imagine a lot of students would say they didn’t have very positive experiences for all different kinds of reasons over the last two years, but now that we have integrated this kind of experience into our lives in a way that, you know, will evolve, but I don’t think it will go away…. I’d have to be motivated (to teach online again), you know, more than just do it for the heck of it. Like if I could just as well teach the class on campus, I still feel like face to face in person conversation is a better modality. I mean, maybe it will evolve and we’ll learn how to do this better.”

And this response:

“The synchronous teaching online is far more exhausting than in person synchronous teaching, and… I don’t think we cover as much material. So my tendency is to say for my current classes, I would be hesitant to teach them online at my institution, because of a whole bunch of different factors. So I would tend to be in the probably not category, if the pandemic was gone. If the pandemic is ongoing, then no, please let’s stay online.”

And finally this passage, which is also closer to where I personally am with a lot of this:

“If people know what they’re getting into, and their expectations are met, then asynchronous or synchronous online instruction, whether delivery or dialogic, it can work, so long as there is a set of shared expectations. And I think that was the hardest thing about the transition: people who did not want to do distance education on both sides, students and instructors.”

That issue of expectations is critical, and I don’t think it’s a point that a lot of people thought a lot about during the shift online. Again, this research is ongoing, and I feel like I am still in the note-taking/evidence-gathering phase, but I am beginning to think that this issue of expectations is really what’s critical here.

Ten or so years ago, when I would have discussions with colleagues skeptical about teaching online, the main barrier or learning curve issue for most seemed to be the technology itself. Nowadays, that no longer seems to be the problem for most. At my institution (and I think this is common at other colleges as well), almost all instructors now use the Learning Management System (for us, that’s Canvas) to manage the bureaucracy of assignments, grades, tests, collecting and distributing student work, and so forth. We all use the institution’s websites to handle turning in grades for our students and checking on employment things like benefits and paychecks. And of course we also all use the library’s websites and databases, not to mention Google. I wouldn’t want to suggest there is no technological learning curve at all, but that doesn’t seem to me to be the main problem faculty have had with teaching online during the 2020-21 school year.

Rather, I think the challenges have been more conceptual than that. I think a lot of faculty have a difficult time understanding how it is possible to teach a class where students aren’t all paying attention to them, the teacher, at the same time– where instead, students participate in the class at different times and in different places, and don’t really pay attention to the teacher much at all. I think a lot of faculty– especially those new to online teaching– define the act of teaching as standing in front of a classroom and leading students through some activity or by lecturing to them, and of course, this is not how asynchronous courses work at all. So I would agree that the expectations of both students and teachers need to better align with the mode of delivery for online courses to work, particularly asynchronous ones.

The other issue, though, is the assumptions we make about the kinds of students we have at different institutions. When I first started this project, the idea of teaching an online class synchronously seemed crazy to me– and I still think asynchronous delivery does a better job of taking advantage of the affordances of the mode– but that was also because of the students I have been working with. Faculty who were used to working almost exclusively with traditional college students tended to put a high emphasis on replicating as best as possible the f2f college experience of classes scheduled and held at a specific time (and many did this at the urging of their institutions and of their students). Faculty like me, who had been teaching online classes designed for nontraditional students for several years before Covid, were actively trying to avoid replicating the f2f experience of synchronous classes. Those rigidly scheduled and synchronous courses are one of the barriers most of the students in my online courses are trying to circumvent so they can finish their college degrees while also working to support themselves and often a family. In effect, I think Covid revealed more of a difference between the needs and practices of these different types of students and how we as faculty try to reach them. Synchronous courses delivered via Zoom to traditional students were simply not the same kind of online course experience as the asynchronous courses delivered to nontraditional students.

Well, this has gone on for long enough, and if you actually got to this last slide after reading through all that came before, I thank you. Just to sum up and repeat my “too long, didn’t read” slide: 

I think the claims I make here about why faculty decided to teach synchronously or asynchronously during Covid are going to turn out to be consistent with some of the larger surveys and studies about the remarkable (in both terrible and good ways) 2020-21 school year now appearing in the scholarship. I think the experience most faculty had teaching online convinced many (but not all) of the skeptics that an online course can work as well as a f2f course– but only if the course is designed for the format, and only if students and faculty understand the expectations and how the mode of an online class is different from a f2f class. In a sense, I think the “natural experiment” of online teaching during Covid suggests that there is some undeniable self-selection bias in terms of measuring the effectiveness of online delivery compared to f2f delivery. What remains to be seen is how significant that self-selection bias is. Is the bias so significant that online courses for those who do not prefer the mode are demonstrably worse than a similar f2f course experience? Or is the bias more along the lines of my own “bias” against taking or teaching a class at 8 am, or a class that meets on a Friday? I don’t know, but I suspect there will be more research emerging out of this time that attempts to dig into that.

Finally, I think the previous point of resistance to teaching online– the complexities of the technology– has largely disappeared as the tools have become easier to use and also as faculty and students have become more familiar with those tools in their traditional face to face courses. As a result, I suspect that we will continue to see more of a melding of synchronous and asynchronous tools in all courses, be they traditional and on-campus courses or non-traditional and distance education courses.


My talk at the Media and Learning Conference (plus a post-talk update)

After the break and this recap is the text of my talk for the panel “Maximising the learning potential for students and academics” at the Media and Learning Conference. Before the panel happened, I thought I would be the “odd man out” in the sense that I think teaching with video is overrated, and the other people on the panel (notably Michael Wesch and Maha Bali) do not.

Now that it’s over, I can report my first Zoom academic conference talk is in the books. As I mention in the script of my talk, I was invited to participate in this because of a blog post I wrote back in early September about why I thought synchronous Zoom teaching online was a bad idea. An organizer of this conference somehow came across that post and invited me to be on the panel. So once again, I posted something on my blog because it was on my mind, it caught someone’s attention, and it turned into a couple of (small) CV entries. So yeah, there’s a reason why I still blog.

Anyway, I thought it was a good discussion/panel, with a few minor hiccups along the way. I don’t know if I ended up being at odds with my fellow panelists so much as we were all talking in different ways about the issues of reaching out to students and how video can be a part of online teaching.

The first two speakers, Sian Hammlett and Phillip Seargeant, were filmmakers in the UK who talked about making videos for Open University courses. These are professionally produced videos made with the intention of being used repeatedly for years in courses; the example the speakers and a lot of the participants mentioned in the comments was “The Language of Lying,” which looks quite interesting. Impressive stuff.

Then Michael Wesch talked. Now, I don’t know if the mostly European audience was aware of this (I assume so), but Wesch is about as close as you can get to being a “famous academic” after years of high-profile work with video, digital ethnography, and YouTube culture. So he of course gave a great talk featuring all kinds of video and neat slide effects and everything. Super interesting and slick.

And then there was me. Wesch was a tough act to follow, let me tell ya.

I think it went okay, but I had basically three problems that folks might or might not have noticed. First, because this was a session happening at 1:30 in the afternoon in Europe, it started at 7:30 am for me. Sure, I’m usually up by then, and it’s not like I had far to go to get to my computer to participate, but I think it’s fair to say that I haven’t had to be “presentable” this early in the morning in months, possibly years. Second, when I was preparing my talk, I decided not to do any slides or video, mainly because I didn’t know how well it would work on Zoom to begin with– I didn’t want to be fiddling with slides and Zoom at the same time– and because it was a short talk. Turns out I was the only one who didn’t have slides, so that didn’t look great. And third, I was originally told 12 minutes, so I wrote up a script (below) that took me almost exactly 12 minutes to read. Then the moderator began by saying we had 10 minutes each. These things happen, but it did mean I did a lot of skimming over what I wrote.

And finally Maha Bali talked. She’s a professor at the American University in Cairo whom I had heard of before through the things I’ve read on Hybrid Pedagogy and her Twitter feed. I think the other talks were more technical than hers, but what Maha was talking about– how to foster equity and caring in education in the midst of Covid– was arguably more important than to video or not to video. She made her slideshow available here.

This was all via the “webinar” version of Zoom, which I suspect is what most conferences that are going to happen online this year will end up using. I thought it worked well for hosting the presentations, and it seemed like it was easy to moderate. One of the things that happens at too many f2f conference panels is a moderator who is unwilling/unable to stop someone from going over time. Credit to the moderator of this panel, Zac Woolfitt, for not allowing that to happen, but I’d also argue that’s one of the advantages of Zoom: it’s easy for the moderator to stop people. And none of the speakers had any serious technical problems.

But I do wish Zoom had a few better features for facilitating these things. There was a text chat running along with our talks, but there was no way to go back to respond to a specific comment. That was annoying. The only way for folks in the audience to ask questions was via a text box. Perhaps it would have been possible for the moderators to set it up so that someone who wanted to ask a question could get audio/video access– kind of like someone stepping up to the microphone to ask their question. I also found it a bit disembodying because we couldn’t see anyone in the audience; rather, all I could see was a fluctuating number of participants (between about 90 and 110, so a pretty good sized crowd for this sort of thing) and a stream of texts.

Anyway, Zoom was okay, Zoom could have been better, and it felt like a reasonably good substitution for a face to face conference session. Though as I blogged about back in early March, I don't think synchronous video should be the only alternative to a f2f academic conference presentation, and covid or not, higher education needs to think a lot harder about how to embrace hybrid conference formats that could include a mix of f2f sessions broadcast online, synch video discussions like what I just participated in, and asynch discussions/posters that can be made available beyond a particular session time.

As I wrote back then, the problem with moving academic conferences at least partially online during and after Covid is not the technology. The problems are all about the difficulties institutions and people have with trying and doing “new things.”

After the break is the script for my talk.


Still more on the “Classroom Tech Bans are Bullshit (or not)” project, in which I go down the tangent of note-taking

I spent most of my Thanksgiving break  back in Iowa, and along the way, I chatted with my side of the family about my faculty research fellowship project, “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” aka “Classroom Tech Bans are Bullshit.” It’s always interesting talking to my non-academic family-types about academic things like this.

“So, you’re on sabbatical right now?” Not exactly. I’m not teaching so I can spend more time on research, but I’m still expected to go to meetings and things like that. Though honestly, I’ve skipped some of that stuff too, and it’s generally okay.

“Is there some kind of expectation for what you are supposed to be researching? What happens if you don’t do it?” Well, it is a competitive process for getting these fellowships in the first place, and there’s of course an expectation that I’ll do what I proposed. And I have done that, more or less, and I will have to write a report about that soon. But the implications (consequences?) of not doing all of what I originally proposed are vague at best.

“So, you’re not really working right now?” No no no, that’s not true. I’m working quite a bit, actually. But I’m doing this work because I want to, though I’m doing this work mostly at home and often in pajamas and I have an extremely flexible schedule right now (which is why we’re going to Morocco in a few days, but that’s another story for later), so I can understand why you might ask that.

“Being a professor is kind of a weird job, isn’t it?” Yes, yes it is.

Anyway, since I last blogged about this project back in September, I’ve been a bit distracted by department politics (don’t ask) and by prepping for teaching in the Winter term, which for me involves some new twists on old courses and also a completely new prep. But the research continues.

Back in October, I put together and conducted a survey for students and faculty about their attitudes/beliefs on the use of laptops and cell phones in classes. Taking the advice I often give my grad students in situations like this, I did not reinvent the wheel and instead based this survey on similar work by Elena Neiterman and Christine Zaza who are both at the University of Waterloo in Ontario and who both (I think) work in that school’s Public Health program. They published two articles right up my alley for this project: “A Mixed Blessing? Students’ and Instructors’ Perspectives about Off-Task Technology Use in the Academic Classroom” and “Does Size Matter? Instructors’ and Students’ Perceptions of Students’ Use of Technology in the Classroom.” I emailed to ask if they would be willing to share their survey questions and they generously agreed, so thanks again!

I'll be sorting through and presenting about the results of this at the CCCCs this year and hopefully in an article (or articles) eventually. But basically, I asked for participants on social media, the WPA-L mailing list (had to briefly rejoin that!), and at EMU. I ended up with 168 respondents, 57% students and 43% instructors, most of whom aren't at EMU. The results are in the ballpark of/consistent with Neiterman and Zaza (based just on percentages– I have no idea if there's a way to legitimately claim any kind of statistically significant comparison), though I think it's fair to say both students and instructors in my survey are more tolerant and even embracing of laptops and cellphones in the classroom. I think that's both because these are all smaller classes (Neiterman and Zaza found that size does indeed matter and devices are more accepted in smaller classes), and also because they're writing classes. Besides the fact that writing classes tend to be activity-heavy and lecture-light (and laptops and cell phones are important tools for writing), I think our field is a lot more accepting of these technologies and frankly a lot more progressive in its pedagogy: not "sage on the stage" but "guide on the side," the student-centered classroom, that sort of thing. I was also able to recruit a lot of potential interview subjects from this survey, though I think I'm going to hold off on putting together that part of the project until the new year.
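(A tangent for the methods-curious: if I did want to make a slightly more legitimate comparison of two surveys' percentages, my understanding is the standard move is something like a chi-squared test on the underlying counts. Here's a minimal sketch in Python– and let me be clear that the counts below are completely made up for illustration, not my actual data or Neiterman and Zaza's.)

    # A minimal sketch of comparing two surveys' proportions.
    # All counts below are made-up placeholders, NOT real data.
    from scipy.stats import chi2_contingency

    #        "devices are fine"  "devices are a problem"
    table = [[120, 48],    # pretend counts from my 168 respondents
             [300, 200]]   # pretend counts from a comparison survey

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
    # A small p-value would suggest the gap between the surveys probably
    # isn't chance– though different populations and different survey
    # instruments mean it still wouldn't be an apples-to-apples comparison.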

And I’ve been thinking again about note-taking, though not so much as it relates to technology. As I’ve mentioned here before, there are two basic reasons in the scholarship for banning or limiting the use of devices– particularly laptops– in college classrooms, particularly lecture halls. One reason is about the problems of distraction and multitasking, and I do think there is some legitimacy to that. The other reason (as discussed in the widely cited Mueller and Oppenheimer study) is that it’s better to take notes by longhand than by a laptop.  I think that’s complete bullshit, so I kind of set that aside.

But now I'm starting to rethink/reconsider the significance of note-taking again because of the presidential impeachment hearings. Those hearings featured a series of poised, intelligent, and dedicated diplomats and career federal professionals explaining how Trump essentially tried to blackmail the Ukrainian government into investigating Biden. One of the key things that made these people so credible was their continued reference to the detailed notes they took when they witnessed this impeachable behavior. In contrast, EU ambassador Gordon "The Problem" Sondland seemed oddly proud that he's never been a note-taker. As a result, a lot of Sondland's testimony included him saying stuff like "I don't remember the details because I don't take notes, but if it was in that person's notes, I have no reason to doubt it." I thought this detail (and other things about his testimony) made Sondland look simultaneously like an extremely credible witness to events and also like a complete boob.

Anyway, this made me wonder: what exactly is the definition of "good note-taking?" How do we know someone takes good (or bad) notes, and what's the protocol for teaching/training people to take good notes?

The taking notes by hand versus on a laptop claim is shaky and (IMO) quite effectively refuted by the Kayla Morehead, John Dunlosky, and Katherine A. Rawson study, “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014).” But while that study does poke at the concept of note-taking a bit (for example, they have one group of participants not take notes at all and just closely pay attention to the TED talk lecture), everything else I’ve read seems to just take note-taking as a given. There’s broad consensus in the psych/education scholarship that taking notes is an effective way to recall information later on, be it for a test or testimony before Congress, and there also seems to be consensus that trying to write everything down is a bad note-taking strategy. But I have yet to read anything about a method or criteria for evaluating the quality of notes, nor have I read anything about a pedagogy or a protocol for teaching people how to take good notes.

I find that odd. I mean, if the basic claim that Mueller and Oppenheimer (and similar studies) are trying to make is that students take "better notes" by hand than by laptop, and if the basic claim that Morehead, Dunlosky, and Rawson (and similar studies) are trying to make is students don't take "better notes" by hand than by laptop, shouldn't there be at least some minimal definition of "better notes?" Without that definition, can we really say that study participants who scored higher on the test measuring success did so because they took "better notes" rather than some other factor (e.g., they were smarter, they paid better attention, they had more knowledge about the subject of the lecture before the test, etc., etc.)?
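Just to show that even a crude working definition is possible– and this is entirely my invention, not anything proposed in the studies I've read– here's a toy way to "score" notes by how many of a lecture's key terms they capture:

    # A toy, invented metric for "note quality": what fraction of a
    # lecture's key terms actually show up in a student's notes?
    def note_coverage(notes, key_terms):
        noted = {word.strip(".,;:!?").lower() for word in notes.split()}
        return len(key_terms & noted) / len(key_terms)

    lecture_terms = {"encoding", "retrieval", "rehearsal", "schema"}
    student_notes = "Memory = encoding then retrieval; rehearsal helps a lot."
    print(f"{note_coverage(student_notes, lecture_terms):.0%}")  # 75%

Obviously this ignores organization, paraphrase, accuracy, and everything else that probably matters– which is kind of my point: somebody would have to argue for a definition in the first place.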

I posted about this on Facebook and tagged a few friends I have who work for the federal government, asking if there was any particular official protocol or procedure for taking notes; the answers I got back were kind of vague. On the way back home at one point, Annette and I got to talking about how we were taught to take notes. I don’t remember any sort of instruction in school, though Annette said she remembered a teacher who actually collected and I guess graded student notes. There are of course some resources out there– here’s what looks like a helpful collection of links and ideas from the blog Cult of Pedagogy— but most of these strategies seem more geared for a tutoring or learning center setting. Plus a pedagogy for teaching note taking strategies is not the same thing as research, and it certainly is not the same thing as a method for measuring the effectiveness of notes.

But clearly, I digress.

So my plan for what’s next is to do even more reading (I’m working my way back through the works cited of a couple of the key articles I’ve been working with so far), some sifting through/writing about the results, and eventually some interviews, probably via email. And maybe I’ll take up as a related project more on this question of note-taking methods. But first, there’s Morocco and next semester.

It’s been an interesting research fellowship semester for me. I’ve been quite fortunate in that in the last five years I’ve had two research fellowships and a one semester sabbatical. Those previous releases from teaching involved the specific project of my book about MOOCs, More Than A Moment (on sale now!), and thus had very specific goals/outcomes. My sabbatical was mostly about conducting interviews and securing a book contract; my last FRF was all about finishing the book.

In contrast, this project/semester was much less guided, a lot more "wondering" (I think blog posts like this one demonstrate that). It's been a surprisingly useful time for me as a scholar, especially at a point in my career– after the intensity of getting the MOOC book done– when I was feeling pretty "done" with scholarship. I've got to give a lot of credit to EMU for the opportunity, and I hope they keep funding these fellowships, too.


A post about an admittedly not thought out idea: very low-bar access

The other day, I came across this post on Twitter from Derek Krissoff, who is the director of the West Virginia University Press:

I replied to Derek’s Tweet “Really good point and reminds me of a blog post I’ve been pondering for a long time on not ‘Open Access’ but something like ‘Very Low Bar Access,'” and he replied to my reply “Thanks, and I’d love to see a post along those lines. It’s always seemed to me access is best approached as a continuum instead of a binary.” (By the way, click on that embedded Twitter link and you’ll see there are lots of interesting replies to his post).

So, that’s why I’m writing this now.

Let me say three things at the outset: first, while I think I have some expertise and experience in this area, I'm not a scholar studying copyright or "Open Educational Resources" (OER) or similar things. Second, this should in no way be interpreted as me saying bad things about Parlor Press or Utah State University Press. Both publishers have been delightful to work with and I'd recommend them to any academic looking for a home for a manuscript– albeit different kinds of homes. And third, my basic idea here is so simple it perhaps already exists in academic publishing and I just don't know better, and I know this exists outside of academia with the many different self-publishing options out there.

Here’s my simple idea: instead of making OER/open-sourced publications completely free and open to anyone (or any ‘bot) with an internet connection, why not publish materials for a low cost, say somewhere between $3 and $5?

The goal is not to come up with a way for writers and publishers to “make money” exactly, though I am not against people being paid for their work nor am I against publishers and other entities being compensated for the costs of distributing “free” books. Rather, the idea is to make access easy for likely interested readers while maintaining a modest amount of control as to how a text travels and is repurposed on the internet.

I’ve been kicking this idea around ever since the book I co-edited Invasion of the MOOCs was published in 2014.  My co-editor (Charlie Lowe) and I wanted to simultaneously publish the collection in traditional print and as a free PDF, both because we believed (still do, I think) in the principles of open access academic publishing and because we frankly thought it would sell books. We also knew the force behind Parlor Press, David Blakesley (this Amazon author page has the most extensive bio, so that’s why I’m linking to that), was committed to the concept of OER and alternatives to “traditional” publishing– which is one of the reasons he started Parlor Press in the first place.

It's also important to recognize that Invasion of the MOOCs was a quasi-DIY project. Among other things, I (along with the co-authors) managed most of the editing work of the book, and Charlie managed most of the production aspects of the book, paying a modest price for the cover art and doing the typesetting and indexing himself thanks to his knowledge of Adobe's InDesign. In other words, the up-front costs of producing this book from Parlor Press' point of view were small, so there was little to lose in making it available for free.

Besides being about a timely topic when it came out, I think distributing it free electronically helped sell the print version of the book. I don’t know exactly how many copies it has sold, but I know it has ended up in libraries all over the world. I’m pretty sure a lot (if not most) of the people/libraries who went ahead and bought the print book did so after checking out the free PDF. So giving away the book did help, well, sell books.

But in hindsight, I think there were two problems with the "completely free" download approach. First, when a publisher/writer puts something like a PDF up on the web for any person or any web crawling 'bot to download, they get a skewed perspective on readership. Like I said, Invasion of the MOOCs has been downloaded thousands of times– which is great, since I can now say I edited a book that's been downloaded thousands of times (aren't you impressed?). But the vast majority of those downloads just sat on a user's hard drive and then ended up in the (electronic) trash after never being read at all. (Full disclosure: I have done this many times.) I don't know if this is irony or what, but it's worth pointing out this is exactly what happened with MOOCs: tens of thousands of would-be students signed up and then never once returned to the course.

Second and more important, putting the PDF up there as a free download means the publisher/writer loses control over how the text is redistributed. I still have a “Google alert” that sends me an email whenever it comes across a new reference to Invasion of the MOOCs on the web, and most of the alerts I have gotten over the years are harmless enough. The book gets redistributed by other OER sites, linked to on bookmarking sites like Pinterest, and embedded into SlideShare slide shows.

But sometimes the re-publishing/redistribution goes beyond the harmless and odd. I've gotten Google alerts to the book linked to/embedded in web sites like this page from Ebook Unlimited, which (as far as I can tell) is a very sketchy site where you can sign up for a "free trial" to their book service. In the last couple years, most of the Google alert notices I've received are links to broken links, paper mill sites, "congratulations you won" pop-up/virus sites, and similarly weirdo sites decidedly not about the book I edited or anything about MOOCs (despite what the Google alert says).

In contrast, the book I have coming out very soon, More Than A Moment, is being published by Utah State University Press and will not be available as a free download– at least not for a while. On the positive side of things, working with USUP (which is an imprint of University Press of Colorado) means this book has had a more thorough (and traditional) editorial review, and the copyediting, indexing, and typesetting/jacket design have all been done by professionals. On the downside, the lack of a free-to-download version means this book will probably end up having fewer readers (thus less reach and fewer sales), and, as is the case with most academic books, I've had to pay for some of the production costs with grant money from EMU and/or out of my own pocket.

These two choices put writers/publishers in academia in a no-win situation. Open access publishing is a great idea, but besides the fact that nothing is "free" in the sense of having no financial costs associated with it (even maintaining a web site for distributing open access texts costs some money), it becomes problematic when a free text is repurposed by a bad actor to sell a bad service or to get users to click on a bad link. Traditional print publishing costs money and necessarily means fewer potential readers. At the same time, the money spent on publishing these more traditional print publications does show up in a "better" product, and it does offer a bit more control over how the book travels. Maybe I'm kidding myself, but I do not expect to see a Google alert for the More than a Moment MOOC book lead me to a web site where clicking on the link will sign me up for some service I don't want or download a virus.

So this is where I think "very low-bar access" publishing could split the difference between the "completely free and online" and the "completely not free and in print" options in academic publishing. Let's say publishers charged as small a fee as possible for downloading a PDF of the book. I don't know exactly how much, but to pay the costs of running a web site capable of selling PDFs in the first place and for the publisher/writer to make at least a little bit of money for their labor, I'd guess around $3 to $5.

The disadvantage of this is (obviously) any amount of money charged is going to be more than “free,” and it is also going to require a would-be reader to pass through an additional step to pay before downloading the text. That’s going to cut down on downloads A LOT. On the other hand, I think it’s fair to say that if someone bothers to fill out the necessary online form and plunks down $5, there’s a pretty good chance that person is going to at least take a look at it. And honestly, 25-100 readers/book skimmers is worth more to me than 5,000 people who just download the PDF. It’s especially worth it if this low-bar access proves to be too much for the dubious redirect sites, virus makers, and paper mill sites.
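To put some numbers on that intuition– and I want to be clear these read-rates are pure guesses on my part, not data from Parlor Press or anybody else:

    # Back-of-envelope sketch of free vs. "very low-bar" PDF distribution.
    # Every rate below is an invented guess, not measured data.
    free_downloads = 5000
    free_read_rate = 0.02     # guess: 2% of free downloads ever get read
    paid_downloads = 100      # guess: a $5 paywall cuts downloads ~98%
    paid_read_rate = 0.50     # guess: half the people who pay actually read
    price = 5.00

    print(f"free: ~{free_downloads * free_read_rate:.0f} actual readers, $0")
    print(f"paid: ~{paid_downloads * paid_read_rate:.0f} actual readers, "
          f"${paid_downloads * price:.0f} toward hosting and labor")
    # Under these made-up rates, the $5 PDF reaches the same order of
    # magnitude of real readers while covering at least some costs– and
    # maybe raising the bar enough to shake off the sketchy reposting sites.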

I suppose another disadvantage of this model is if someone can download a PDF version of an academic book for $5 to avoid spending $20-30 (or, in some cases, a lot more than that) for the paper version, then that means the publisher will sell fewer paper books. That is entirely possible. The opposite is also possible though: the reader spends $5 on the PDF, finds the book useful/interesting, and then that reader opts to buy the print book. I do this often enough, especially with texts I want/need for teaching and scholarship.

So, there you have it, very low-bar access. It’s an idea– maybe not a particularly original one, maybe even not a viable one. But it’s an idea.

More on the “Classroom Tech Bans Are Bullshit (or not)” Project Before Corridors

This post is both notes on my research so far (for myself and anyone else who cares), and also a “teaser” for Corridors: the 2019 Great Lakes Writing and Rhetoric Conference.  I’m looking forward to this year’s event for a couple of different reasons, including the fact that I’ve never been on campus at Oakland University.

Here’s a link to my slides— nothing fancy.

Anyway: as I wrote about back in June, I am on leave right now to get started on a brand-new research project officially called "Investigating Classroom Technology Bans Through the Lens of Writing Studies," but which is more informally known as the "Classroom Tech Bans Are Bullshit" project. I give a little more detail in that June post, but basically, I have been reading a variety of studies about the impact of devices– mostly laptops, but also cellphones– in classrooms (mostly lecture halls) and how they negatively impact students (mostly on tests). I've always thought these studies seemed kind of bullshitty, but I don't know of a lot of research in composition and rhetoric that refutes these arguments. So I wanted to read that scholarship and then try to apply and replicate it in writing classrooms.

So far, I've mostly just been reading academic articles in psychology and education journals. It's always challenging to step just a little outside my comfort zone and do some reading in a field that is not my own. If nothing else, it reminds me why it's important to be empathetic with undergraduates who complain about reading academic articles: it's hard to try to figure out what's going on in that Burkean parlor when pretty much all you can do is look through the window instead of being in the room. For me, that's most evident in the descriptions of the statistics. I look at the explanations and squiggly lines of various formulas and just mutter "I'm gonna have to trust you on that." And as a slight but important tangent: one of the reasons why we don't do this kind of research in writing studies is because most people in the field feel the same about math and stats.

The other thing that has been quite striking for me is the assumptions in these articles on how the whole enterprise of higher education works. Almost all of these studies take it as a completely unproblematic given that education means a lecture hall with a professor delivering knowledge to students who are expected to (and who know how to) pay attention and who also are expected to (and who know how to) take notes on the content delivered by the lecturer. Success is measured by an end of the course (or end of the experiment) test. That’s that. In other words, most of this research assumes an approach to education that is more or less the opposite of what we assume in writing studies.

I have also figured out there are some important and subtle differences to the arguments about why laptops and cell phones ought to be banned (or at least limited) in classrooms. As I wrote back in June, the thing that perhaps motivated me the most to do this research is the argument that laptops ought to be banned from lecture halls because handwritten notes are "better." This is the argument in the frequently cited Pam Mueller and Daniel Oppenheimer "The Pen is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking." I think this is complete bullshit. This is a version of the question that used to circulate in the computers and writing world, whether it was "better" for students to write by hand or to type, a question that's been dismissed as irrelevant for a long time. But as someone who is so bad at writing things by hand, I personally resent the implication that people who have good handwriting are somehow "better." Fortunately, I think Kayla Morehead, John Dunlosky, and Katherine A. Rawson's replication of that study, "How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014)," does an excellent job refuting this "handwriting is better" bullshit.

Then there's the issue of "distraction" that results when students trying to do things right are disturbed/put off by other students fiddling around with their laptops or cellphones. This is the argument in Faria Sana, Tina Weston, and Nicholas J. Cepeda's "Laptop multitasking hinders classroom learning for both users and nearby peers." They outline a clever and complicated methodology that involved arranging students so a laptop was (or wasn't) in their line of sight and also by having some of those students act as "confederates" in the study by purposefully doing stuff that is distracting. One issue I have with this research is it is a little dated, having been published in 2013. Maybe it's just me, but I think laptops in classes were a little more novel (and thus distracting) a few years ago than they are now. Regardless though, one of the concluding points these folks make is that laptops shouldn't be banned because the benefits outweigh the problems.

There are a lot of studies focusing on the multitasking and divided attention issues: that is, devices and the things students look at on those devices distract them from the class, which again typically means paying attention to the lecture. I find the subtly different degrees of multitasking kind of interesting, and there is a long history in psychology of research about attention, distraction, and multitasking. For example, Arnold L. Glass and Mengxue Kang in “Dividing attention in the classroom reduces exam performance” argue (among other things) that there’s a kind of delayed effect with students multitasking/dividing attention in a lecture hall setting. Students seem to be able to comprehend a lecture or whatever in the midst of their multitasking, but they don’t perform as well on tests at the end of the semester. 

Interestingly– and I have a feeling this is more because of what I haven’t read/studied yet– most of these studies I’ve seen on the multitasking/dividing attention angle don’t separate tasks like email or texting from social media apps. That’s something I want to read about/study more because it seems to me that there is a qualitative difference in how applications like Facebook and Twitter distract since these platforms are specifically designed to grab attention from other tasks.

And then there's the category of research I wasn't even aware was happening, and I guess I'd describe that as the different perceptions/attitudes about classroom technology. This is mostly based on surveys and interviews, and (maybe not surprisingly) students tend to believe the use of devices is no big deal and/or "a matter of personal autonomy," while instructors have a more complex view. Interestingly, the recommendation a lot of these studies make is students and teachers ought to talk about this as a way of addressing the problem.

So, that’s what I “know” so far. Where I’m going next, I think:

  • I think the first tangible (not just reading) research part of this project is going to be to design a survey of both students and instructors– probably just for first year writing, but maybe beyond that– about their attitudes on using these devices. If I dig a bit, I might be able to use some of the same questions that come up in the research I've read.
  • We’ll see what kind of feedback/participation I get from those surveys, but my hope is also to use a survey as a way of recruiting some instructors to participate in something a little more case study/observational in the winter term, maybe even trying to replicate some of the “experimental” research on note taking in a small class setting. That would happen in Winter 2020.
  • I need to keep reading, especially about the ways in which social media specifically functions here. It's one thing for a student (or really anyone) to be bored in a badly run lecture hall and thus allow themselves to drift into checking their messages, email, working on homework for other classes, checking sports, etc. I think it's a different thing for a student/any user to feel the need to check Facebook or Twitter or Instagram or whatever.
  • I can see a need to dive more deeply into thinking/writing about the ways in which this research circulates in MSM and then back into the classroom. As I wrote in my proposal and back in June, I think there are a lot of studies– done with lecture hall students in very specific experimental settings– that get badly translated into MSM articles about why people should put their laptops and cell phones away in classrooms or meetings. Those MSM articles get read by well-meaning faculty who then apply the MSM's misunderstanding of the original study as a justification for banning devices even though the original research doesn't support that. Oh, and perhaps not surprisingly, the vast majority of the MSM pieces I've seen on tech bans basically reinforce the very worn theme of "the problem with the kids today."
  • I also wonder about this attitude difference and maybe students have a point: maybe these technologies are a matter of personal autonomy and personal choice. This was an idea put into my head while chatting about all this with Derek Mueller over not very good Chinese food this summer, and I still haven’t thought it through yet, but if students have a right to their own language use in writing classrooms, do they also have a right to their own technology use? When and when not?
  • And even though this is kind of where I began this project (so I guess I'm once again showing my bias here), the solution to a lot of what motivates faculty to ban laptops and devices from their classrooms in the first place really comes back to better pedagogy. Teaching students how to take notes with a laptop immediately comes to mind. I'm also reading (slowly but surely) James M. Lang's Small Teaching: Everyday Lessons From the Science of Learning right now, and there's a clear connection between his advice and this project too. So much of the complaining about students being distracted by their devices really comes back to bad teaching.

Classroom Tech Bans Are Bullshit (or are they?): My next/current project

I was away from work stuff this past May– too busy with Will's graduation from U of M followed quickly by China, plus I'm not teaching or involved in any quasi-administrative work this summer. As I have written about before, I am no longer apologetic for taking the summer off, so mostly that's what I've been doing. But now I need to get back to "the work"– at least a leisurely summer schedule of "the work."

Along with waiting for the next step in the MOOC book (proofreading and indexing, for example), I’m also getting started on a new project. The proposal I submitted for funding (I have a “faculty research fellowship” for the fall term, which means I’m not teaching though I’m still supposed to do service and go to meetings and such) is officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies.” Unofficially, it’s called “Classroom Tech Bans are Bullshit.” 

To paraphrase: there have been a lot of studies (mostly in Education and/or Psychology) on the student use of mobile devices in learning settings (mostly lecture halls– more on that in a moment). Broadly speaking, most of these studies have concluded these technologies are bad because students take worse notes than they would with just paper and pen, and these tools make it difficult for students to pay attention.  Many of these studies have been picked up in mainstream media articles, and the conclusions of these studies are inevitably simplified with headlines like “Students are Better Off Without a Laptop In the Classroom.”

I think there are a couple of different problems with this– beyond the fact that MSM misinterprets academic studies all the time. First, these simplifications trickle back into academia when those faculty who do not want these devices in their classrooms use these articles to support laptop/mobile device bans. Second, the methodologies and assumptions behind these studies are very different from the methodologies and assumptions in writing studies. We tend to study writing– particularly pedagogy– with observational, non-experimental, and mixed-method research designs, things like case studies, ethnographies, interviews, observations, etc., and also with text-based work that actually looks at what a writer did.

Now, I think it’s fair to say that those of us in Composition and Rhetoric generally and in the “subfield/specialization” of Computers and Writing (or Digital Humanities, or whatever we’re calling this nowadays) think tech bans are bad pedagogy. At the same time, I’m not aware of any scholarship that directly challenges the premise of the Education/Psychology scholarship calling for bans or restrictions on laptops and mobile devices in classrooms. There is scholarship that’s more descriptive about how students use technologies in their writing process, though not necessarily in classrooms– I’m thinking of the essay by Jessie Moore and a ton of other people called “Revisualizing Composition” and the chapter by Brian McNely and Christa Teston “Tactical and Strategic: Qualitative approaches to the digital humanities” (in Bill Hart-Davidson and Jim Ridolfo’s collection Rhetoric and the Digital Humanities.) But I’m not aware of any study that researches why it is better (or worse) for students to use things like laptops and cell phones while actually in the midst of a writing class.

So, my proposal is to spend this fall (or so) developing a study that would attempt to do this– not exactly a replication of one or more of the experimentally-driven studies done about devices and their impact on note taking, retention, and distraction, but a study that is designed to examine similar questions in writing courses using methodologies more appropriate for studying writing. For this summer and fall, my plan is to read up on the studies that have been done so far (particularly in Education and Psych), use those to design a study that’s more qualitative and observational, and recruit subjects and deal with the IRB paperwork. I’ll begin some version of a study in earnest beginning in the winter term, January 2020.

I have no idea how this is going to work out.

For one thing, I feel like I have a lot of reading to do. I think I’m right about the lack of good scholarship within the computers and writing world about this, but maybe not. As I typed that sentence in fact, I recalled a distant memory of a book Mike Palmquist, Kate Kiefer, Jake Hartvigsen, and Barbara Godlew wrote called Transitions: Teaching Writing in Computer-Supported and Traditional Classrooms. It’s been a long time since I read that (it was written in 1998), but I recall it as being a comparison between writing classes taught in a computer lab and not. Beyond reading in my own field of course, I am slowly making my way through these studies in Education and Psych, which present their own kinds of problems. For example, my math ignorance means I have to slip into  “I’m just going to have to trust you on that one” mode in the discussions about statistical significance.

One article I came across and read (thanks to this post from the Tattooed Prof, Kevin Gannon) was "How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014)." As the title suggests, this study by Kayla Morehead, John Dunlosky, and Katherine A. Rawson replicates the 2014 study by Pam Mueller and Daniel Oppenheimer, "The Pen is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking" (which is kind of the "gold standard" of the ban-laptops genre). The gist of these two articles is all in the titles: Mueller and Oppenheimer's conclusions were that it was much better to take notes by hand, while Morehead, Dunlosky, and Rawson's conclusions were not so much. Interestingly enough, the more recent study also questioned the premise of the value of note taking generally since one of their control groups didn't take notes and did about as well on the post-test of the study.

Reading these two studies has been a quite useful way for me to start this work. Maybe I should have already known this, but there are actually two fundamentally different issues at stake with these classroom tech bans (setting aside assumptions about the lecture hall format and the value of taking notes as a way of learning). Mueller and Oppenheimer claimed with their study that handwriting was simply "better." That's a claim that I have always thought was complete and utter bullshit, and it's one that I think was debunked a long time ago. Way back in the 1990s when I first got into this work, there were serious people in English and in writing studies pondering what was "better," a writing class equipped with computers or not, students writing by hand or on computers. We don't ask that question anymore because it doesn't really matter which is "better;" writers use computers to write and that's that. Happily, I think Morehead, Dunlosky, and Rawson counter Mueller and Oppenheimer's study rather persuasively. It's worth noting that so far, MSM hasn't quite gotten the word out on this.

But the other major argument for classroom tech bans– which neither of these studies addresses– is about distraction, and that’s where the “or are they?” part of my post title comes from. I still have a lot more reading to do on this (see above!), but it’s clear to me that the distraction issue deserves more attention since social media applications are specifically designed to distract and demand attention from their users. They’re like slot machines, and it’s clear that “the kids today” are not the only ones easily taken in. When I sit in the back of the room during a faculty meeting and I glance at the screens of my colleagues’ laptops in front of me, it’s pretty typical to see Facebook or Twitter or Instagram open, along with a window for checking email, grading papers– or, on rare occasion, taking notes.

Anyway, it’s a start. And if you’ve read this far and you’ve got any ideas on more research/reading or how to design a study into this, feel free to comment or email or what-have-you.

Three thoughts on the “Essay,” assessing, and using “robo-grading” for good

NPR had a story on Weekend Edition last week, "More States Opting to 'Robo-Grade' Student Essays By Computer," that got some attention from other comp/rhet folks, though not as much as I thought it might. Essentially, the story is about the use of computers to "assess" (really "rate," but I'll get to that in a second) student writing on standardized tests. Most composition and rhetoric scholars think this software is a bad idea. I think this is not not true, though I do have three thoughts.

First, I agree with what my friend and colleague Bill Hart-Davidson writes here about essays, though this is not what most people think “essay” means. Bill draws on the classic French origins of the word, noting that an essay is supposed to be a “try,” an attempt and often a wandering one at that. Read any of the quite old classics (de Montaigne comes to mind, though I don’t know his work as well as I should) or even the more modern ones (E.B. White or Joan Didion or the very contemporary David Sedaris) and you get more of a sense of this classic meaning. Sure, these writers’ essays are organized and have a point, but they wander to them and they are presented (presumably after much revision) as if the writer was discovering their point along with the reader.

In my own teaching, I tend to use the term project to describe what I assign students to do because I think it's a term that can include a variety of different kinds of texts (including essays) and other deliverables. I hate the far too common term paper because it suggests writing that is static, boring, routine, uninteresting, and bureaucratic. It's policing, as in "show me your papers" when trying to pass through a border. No one likes completing "paperwork," but it is one of those necessary things grown-ups have to do.

Nonetheless, for most people– including most writing teachers– the terms "essay" and "paper" are synonymous. The original meaning of essay has been replaced by the school meaning of essay (or paper– same thing). Thus we have the five paragraph form, or even this comparably enlightened advice from the Bow Valley College Library and Learning Commons, one of the first links that came up in a simple Google search. It's a list (five steps, too!) for creating an essay (or paper) driven by a thesis and research. For most college students, papers (or essays) are training for white-collar careers: learning how to complete required office paperwork.

Second, while it is true that robo-grading standardized tests does not help anyone learn how to write, the most visible aspect of writing pedagogy to people who have no expertise in teaching (beyond experience as a student, of course) is not the teaching but the assessment. So in that sense, it’s not surprising this article focuses on assessment at the expense of teaching.

Besides, composition and rhetoric as a field is very into assessment, sometimes (IMO) at the expense of teaching and learning about writing. Much of the work of Writing Program Administration and scholarship in the field is tied to assessment– and a lot of (most?) comp/rhet specialists end up involved in WPA work at some point in their careers. WPAs have to consider large-scale assessment issues to measure outcomes across many different sections of first year writing, and they usually have to mentor instructors on small-scale assessment– that is, how to grade and comment on all these student papers in a way that is both useful to students and that does not take an enormous amount of time. There is a ton of scholarship on assessment– how to do it, what works or doesn't, the pros and cons of portfolios, etc. There are books and journals and conferences devoted to assessment. Plenty of comp/rhet types have had very good careers as assessment specialists. Our field loves this stuff.

Don’t get me wrong– I think assessment is important, too. There is stuff to be learned (and to be shown to administrators) from these large scale program assessments, and while the grades we give to students aren’t always an accurate measure of what they learned or how well they can write, grades are critical to making the system of higher education work. Plus students themselves are too often a major part of the problem of over-assessing. I am not one to speak about the “kids today” because I’ve been teaching long enough to know students now are not a whole lot different than they were 30 years ago. But one thing I’ve noticed in recent years– I think because of “No Child Left Behind” and similar efforts– is the extent to which students nowadays seem puzzled about embarking on almost any writing assignment without a detailed rubric to follow.

But again, assessing writing is not the same thing as fostering an environment where students can learn more about writing, and it certainly is not how writing worth reading is created. I have never read an essay that mattered to me that was written by someone closely following the guidance of a typical assignment rubric. It's really easy as a teacher to forget that, especially while trying to make the wheels of a class continue to turn smoothly with the help of tools like rubrics. I have to remind myself about that all the time.

The third thing: as long as writing teachers believe more in essays than in papers, and as long as they are more concerned with creating learning opportunities than with sites for assessment, "robo-grader" technology of the sort described in this NPR story is kind of irrelevant– and it might even be helpful.

I blogged about this several years ago here as well, but it needs to be emphasized again: this software is actually pretty limited. As I understand it, software like this can rate/grade the response to a specific essay question– “in what ways did the cinematic techniques of Citizen Kane revolutionize the way we watch and understand movies today”– but it is not very good at more qualitative questions– “did you think Citizen Kane was a good movie?”– and it is not very good at all at rating/grading pieces of writing with almost no constraints, as in “what’s your favorite movie?”

Furthermore, as the NPR story points out, this software can be tricked. Les Perleman has been demonstrating for years how these robo-graders can be fooled, though I have to say I am a lot more impressed with the ingenuity shown by some students in Utah who found ways to "game" the system: "One year… a student who wrote a whole page of the letter "b" ended up with a good score. Other students have figured out that they could do well writing one really good paragraph and just copying that four times to make a five-paragraph essay that scores well. Others have pulled one over on the computer by padding their essays with long quotes from the text they're supposed to analyze, or from the question they're supposed to answer." The raters keep "tweaking" the code to prevent these tricks, but of course, students will keep trying new tricks.
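To make the gaming point concrete, here's a toy scorer of my own invention– emphatically not how any actual vendor's software works– that rewards the kinds of surface features (length, big words) these tricks exploit:

    # A toy "robo-grader" that rewards surface features, out of 10 points.
    # This is an illustration I made up, not any real product's algorithm.
    def naive_score(essay):
        words = essay.split()
        if not words:
            return 0.0
        length_points = min(len(words) / 400, 1.0) * 6        # longer = "better"
        avg_word_len = sum(len(w) for w in words) / len(words)
        vocab_points = min(avg_word_len / 6, 1.0) * 4         # big words = "better"
        return round(length_points + vocab_points, 1)

    sincere = "Citizen Kane changed how films use deep focus and nonlinear time. " * 6
    padded = sincere + "Consequently, cinematographic innovations proliferated. " * 50

    print(naive_score(sincere))   # modest score for a short, on-topic answer
    print(naive_score(padded))    # mindless padding with long words scores higher

(And for what it's worth, a whole page of the letter "b" beats the sincere answer on this toy scorer, too.)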

I have to say I have some sympathy with one of the arguments made in this article that if a student is smart enough to trick the software, then maybe they deserve a high rating anyway. We are living in an age in which it is an increasingly important and useful skill for humans to write texts in a way that can be “understood” both by other people and machines– or maybe just machines. So maybe mastering the robo-grader is worth something, even if it isn’t exactly what most of us mean by “writing.”

Anyway, my point is it really should not be difficult at all for composition and rhetoric folks to push back against the use of tools like this in writing classes because robo-graders can't replicate what human teachers and students can do as readers: to be an actual audience. In that sense, this technology is not really all that much different than stuff like spell-checkers and grammar-checkers. I have been doing this work long enough to know that there were plenty of writing teachers who thought those tools were the beginning of the end, too.

Or, another way of putting it: I think the kind of teaching (and teachers) that can be replaced by software like this is pretty bad teaching.

Instead of banning laptops, what if we mandated them?

Oy. Laptops are evil. Again.

This time, it comes from “Leave It in the Bag,” an article in Inside Higher Ed, reporting on a study done by Susan Payne Carter, Kyle Greenberg, and Michael S. Walker, all economists at West Point (PDF). This has shown up on the WPA-L mailing list and in my various social medias as yet another example of why technology in the classrooms is bad, but I think it’s more complicated than that.

Mind you, I only skimmed this, and all of the economics math is literally a foreign language to me. But there are a couple of passages here that I find interesting and not exactly convincing that my students and I should indeed "leave it in the bag."

For example:

Permitting laptops or computers appears to reduce multiple choice and short answer scores, but has no effect on essay scores, as seen in Panel D. Our finding of a zero effect for essay questions, which are conceptual in nature, stands in contrast to previous research by Mueller and Oppenheimer (2014), who demonstrate that laptop note-taking negatively affects performance on both factual and conceptual questions. One potential explanation for this effect could be the predominant use of graphical and analytical explanations in economics courses, which might dissuade the verbatim note-taking practices that harmed students in Mueller and Oppenheimer’s study. However, considering the substantial impact professors have on essay scores, as discussed above, the results in panel D should be interpreted with considerable caution. (page 17)

The way I’m reading this is for classes where students are expected to take multiple choice tests as a result of listening to a lecture from a sage on the stage, laptops might be bad. But in classes where students are supposed to write essays (or at least more conceptual essay questions), laptops do no harm. So if it’s a course where students are supposed to do more than take multiple choice tests….

After describing the overall effects of students performing worse when computing technology is available, Carter, Greenberg, and Walker write:

It is quite possible that these harmful effects could be magnified in settings outside of West Point. In a learning environment with lower incentives for performance, fewer disciplinary restrictions on distracting behavior, and larger class sizes, the effects of Internet-enabled technology on achievement may be larger due to professors' decreased ability to monitor and correct irrelevant usage. (page 26)

Hmmm…. nothing self-congratulatory about that passage, is there?

Besides the fact that there is no decent evidence that the students at West Point (or any other elite institution for that matter) are on the whole such special snowflakes that they are more immune to the "harm" of technology/distraction compared to the rest of us simpletons, I think one could just as easily make the exact opposite argument. It seems to me that it is "quite possible" that the harmful effects are more magnified in a setting like West Point because of the strict adherence to "THE RULES" and authority for all involved. I mean, it is the Army after all. Perhaps in settings where students have more freedom and are used to the more "real life" world of distractions, large class sizes, the need to self-regulate, etc., maybe those students are actually better able to control themselves.

And am I the only one who is noticing the extent to which laptop/tablet/technology use really seems to be about a professor’s “ability to monitor and correct” in a classroom? Is that actually “teaching?”

And then there’s this last paragraph in the text of the study:

We want to be clear that we cannot relate our results to a class where the laptop or tablet is used deliberately in classroom instruction, as these exercises may boost a student's ability to retain the material. Rather, our results relate only to classes where students have the option to use computer devices to take notes. We further cannot test whether the laptop or tablet leads to worse note taking, whether the increased availability of distractions for computer users (email, facebook, twitter, news, other classes, etc.) leads to lower grades, or whether professors teach differently when students are on their computers. Given the magnitude of our results, and the increasing emphasis of using technology in the classroom, additional research aimed at distinguishing between these channels is clearly warranted. (page 28)

First, laptops might or might not be useful for taking notes. This is at odds with a lot of these “laptops are bad” studies. And as a slight tangent, I really don’t know how easy it is to generalize about note taking and knowledge across large groups. Speaking only for myself: I’ve been experimenting lately with taking notes (sometimes) with paper and pen, and I’m not sure it makes much difference. I also have noticed that my ability to take notes on what someone else is saying — that is, as opposed to taking notes on something I want to say in a short speech or something– is now pretty poor. I suppose that’s the difference between being a student and being a teacher, and maybe I need to relearn how to do this from my students.

This paragraph also hints at another issue with all of these "laptops are bad" pieces, of "whether professors teach differently when students are on their computers." Well, maybe that is the problem, isn't it? Maybe it isn't so much that students are spending all of this time being distracted by laptops, tablets, and cell-phones– that is, students are NOT giving professors the UNDIVIDED ATTENTION they believe (nay, KNOW) they deserve. Maybe the problem is professors haven't figured out that the presence of computers in classrooms means we have to indeed "teach differently."

But the other thing this paragraph got me thinking about was the role of technology in the courses I teach, where laptops/tablets are "used deliberately in classroom instruction." This paragraph suggests that the opposite of banning laptops might also be true: in other words, what if, instead of banning laptops from a classroom, the professor mandated that students each have a laptop open at all times in order to take notes, to respond to on-the-fly quizzes from the professor, and to look up stuff that comes up in the discussions?

It's the kind of interesting mini-teaching experiment I might be able to pull off this summer. Of course, if we extend this kind of experiment to the realm of online teaching– and one of my upcoming courses will indeed be online– then we can see that in one sense, this isn't an experiment at all. We've been offering courses where the only way students communicate with the instructor and with other students has been through a computer for a long time now. But the other course I'll be teaching is a face to face section of first year writing, and thus ripe for this kind of experiment. Complicating things more (or perhaps making this experiment more justifiable?) is the likelihood that a significant percentage of the students I will have in this section are in some fashion "not typical" of first year writing at EMU– that is, almost all of them are transfer students and/or juniors or seniors. Maybe making them have those laptops open all the time could help– and bonus points if they're able to multitask with both their laptop and their cell phones!

Hmm, I see a course developing….