Well, no, not really– but I thought that post title might be provocative, sort of like writing essays in a way that tricks the grading software.
There’s been a lot of discussion on things like WPA-L and elsewhere about “robo-readers,” as this New York Times piece sums up well, “Facing a Robo-Grader? Just Keep Obfuscating Mellifluously,” along with this further discussion on Slashdot. The very short version: it turns out that machines are just about as capable as humans of scoring writing completed as part of standardized tests– things like the GRE or SAT or other writing tests that ask students to respond to a very specific prompt. Writing teachers of various flavors– the WPA-L crowd in general and Les Perelman from MIT in particular– are beside themselves over the wrongness of this software because it’s not human, it can be fooled, and it cannot recognize “truth.”
Of course, ETS and Pearson (two of the companies that have developed this software) point out that they don’t intend this software to replace actual human feedback, that they admit this is not a way to check facts, and the software is not a good judge of “poetic” language. And I’ve also seen plenty of humans fooled by untruths in print. But never mind that; writing teachers are angry at the machine.
Now, I mostly (though obviously not entirely) agree with my WPA-L colleagues and Perelman, and, as I wrote about in my previous post, I’m not a fan of education that eliminates teaching and minimizes the opportunity for learning simply to jump through a credentialing hoop. So yes, I would agree that taking a batch of first year composition papers and dumping them into the robo-reading hopper to assign grades would a) not work and b) be bad. Though again, it also appears that the people who have developed this software hold the same position.
But let’s just say– hypothetically, mind you, and for the sake of argument– that this kind of software and its inevitable improvements might actually not be evil. How might robo-grading (or maybe more accurately automated rating) software actually be “good”?
For starters, if this software is used in the way ETS and Pearson say they intend– that is, as a teaching/learning aid and not a grading tool per se– then it seems to me that it might be useful along the lines of spellchecks, grammar-checks, and readability tests. Is this a replacement for reader/ student/ teacher/ other human interaction in writing classes of various sorts? Obviously not. But that doesn’t mean it can’t be useful to readers, particularly teachers during the grading process.
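To make the comparison to readability tests concrete: tools like these don’t “read” at all– they compute simple surface statistics over the text. Here’s a minimal sketch of the classic Flesch Reading Ease score in Python, using a deliberately rough syllable counter (real readability tools use more careful heuristics; the helper names here are my own, not from any particular product):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words).
    Higher scores mean easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```

The point is that a score like this is cheap, consistent, and occasionally informative– and also obviously blind to meaning, which is exactly the posture a teacher should take toward robo-rating output.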
Let’s face it: the most unpleasant part of teaching writing is grading– specifically “marking up” papers students turn in to point out errors (and in effect justify the grade to the student) and to suggest ideas for revision. It is very labor-intensive and the most boring part of the job, as I wrote about in some detail last year here. If there were a computer tool out there that really would help me get this work done more efficiently and that would help my students improve, then why wouldn’t I use it?
Second, I think Perelman’s critique about how easily the machine is fooled is a little problematic– or at least it can be turned on its head. It seems to me that if a student completing some standardized test writing is smart enough to outsmart the machine– as Perelman demonstrates here– then perhaps that student actually does deserve a high grade from the machine. It’s kind of like Kirk reprogramming the “no win” Kobayashi Maru test so he could win, right?
Third– and this is maybe something writing teachers in particular and writers in general don’t want to accept– writing texts that are well-received by machines is a pretty important skill to master. I know that’s not the intention of this robo-reading software, but my writing teacher colleagues seem to suggest that this is not only an unnecessary skill but a particularly dangerous one. Yet there is an entire web business called Search Engine Optimization that is (in part) about how to write web pages to include frequently searched keyword phrases so that the results appear higher in search engine– i.e., machine– results. The keywords and structure of your monster.com resume can be half the battle in getting found by a potential employer who is using searches– i.e., machines– to find a match.
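The machines doing that resume screening are often not much smarter than this. Here’s a toy sketch of the kind of keyword matching a resume search might do– the keyword list, function name, and scoring rule are illustrative assumptions on my part, not any real search engine’s or job board’s algorithm:

```python
def keyword_score(text: str, keywords: list[str]) -> int:
    """Count how many words in the text match a target keyword list,
    ignoring case and trailing punctuation. A crude relevance score."""
    targets = {k.lower() for k in keywords}
    return sum(1 for w in text.lower().split()
               if w.strip(".,;:") in targets)

# Two hypothetical resume snippets, scored against a recruiter's search terms.
resume_a = "Managed projects and mentored staff."
resume_b = "Led agile projects; agile coaching and stakeholder management."
print(keyword_score(resume_a, ["agile", "stakeholder"]))  # no keyword hits
print(keyword_score(resume_b, ["agile", "stakeholder"]))  # several hits
```

Both snippets might describe the same work, but only one of them gets surfaced– which is exactly why writing for machine readers is a real skill, whatever we think of it.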
Anyway, you get the idea. No, I don’t think we ought to turn over the teaching/grading function in writing classes to machines, and I don’t think a robo-grader is going to be able to look into the soul of the writer to seek Truth anytime soon. But I think the blanket dismissal of and/or resistance to these kinds of tools by writing teachers is at best naive. It’s probably more useful to imagine ways these tools can help our teaching practices in the long run.