See this article from The Washington Post, "Computers Weighing In on the Elements of Essay." I found this via the NCTE Inbox newsletter, and I'm pretty sure that you need a user login to read the Post online.
There isn't anything really that new here; I posted a couple of times a few months ago about this software being used to read the writing portions of some big standardized tests. One of my (many overly ambitious) goals for today is to put together the syllabus for the grad course I'm teaching in the fall, and one of the "units" or topics of discussion for one day's class is going to be assessment. This article, and some of the others that I found earlier in the year, ought to fit in there nicely.
And as before, I think I may be alone among my colleagues in English studies in not being completely against these electronic grading systems, especially for things like standardized tests. This story, like the other ones I've read, is talking about "grading" writing samples on a scale of 1 to 6; it isn't about providing comments in the form of entering into a dialogue with the essay, or in the form of offering ideas for revision. Just a grade, and just a grade on a very standardized sort of test. Oh, and in this particular story, which is about the GMAT I believe, a human reader goes back over the writing sample too; in other words, each essay gets read by the computer and by one human. If there is disagreement over the score, it is read by another human.
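The check-and-adjudicate process the article describes can be sketched in a few lines of code. This is purely illustrative: the function name, the averaging of agreeing scores, and the one-point disagreement threshold are my own assumptions, not details from the article, which says only that a disagreement sends the essay to another human reader.

```python
def final_score(computer: int, human: int, second_human=None) -> float:
    """Hypothetical sketch of the hybrid scoring described in the article:
    each essay is scored 1-6 by a computer and by one human reader; if the
    two disagree (here, by more than one point), a second human adjudicates.
    The threshold and averaging rule are assumptions for illustration."""
    if abs(computer - human) <= 1:
        # Scores agree closely enough: report the average of the two.
        return (computer + human) / 2
    if second_human is None:
        raise ValueError("disagreement: a second human reading is required")
    # The adjudicating reader's score stands.
    return float(second_human)

# Computer and human agree: no second reading needed.
print(final_score(4, 4))                    # 4.0
# Computer says 5, human says 2: a second reader settles it.
print(final_score(5, 2, second_human=3))    # 3.0
```

The point of the design, as I read the story, is that the machine never has the last word by itself: every essay gets at least one human reading, and a human breaks any tie.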
I don't know, but this seems like a pretty reasonable application of the current technology to me.