There’s been a lot of talk on the WPA-L mailing list lately about the plan to add a writing component to the SAT. Besides some complaints about how the test is being administered, there are plenty of complaints about how the test will be scored, at least as it’s described in an article from the Washington Post, “Scorers of New SAT Get Ready for Essays” (you’ve got to register for that site). I don’t want to rehash the argument/discussion on WPA-L, but even with all the problems (and there are plenty of them), it seems to me that the writing portion of the test is better than nothing, and $17-20 an hour to score these writing samples in the comfort of your own home isn’t bad, either.
But apparently there’s another test brewing out there. NCTE Inbox had a link to this New York Times article, “Measuring Literacy in a World Gone Digital” (registration required, but you probably know that). The article begins with a couple of paragraphs recalling the “good ol’ days,” when students did basic research with encyclopedias or the library. Now it’s all about the web. (Deep and reminiscent sigh). Then it says this:
Now the Educational Testing Service, the nonprofit group behind the SAT, Graduate Record Examination and other college tests, has developed a new test that it says can assess students’ ability to make good critical evaluations of the vast amount of material available to them.
The Information and Communications Technology literacy assessment, which will be introduced at about two dozen colleges and universities later this month, is intended to measure students’ ability to manage exercises like sorting e-mail messages or manipulating tables and charts, and to assess how well they organize and interpret information from many sources and in myriad forms.
The rest of the article goes through the various pros and cons of this, and raises questions about the very concept of “digital literacy.”
I’d of course be interested in seeing this test, but what concerns me about it (at least based on the way it is described here) is that the test seems to be about knowing how to use particular tools and not larger concepts. I suppose this is how a lot of these tests work (or, really, don’t work), but it seems particularly problematic to do this with things that are digital because of the rate of change.
For example, how one sorts email messages or manipulates tables and charts depends a lot on the software being employed, and the way one sorts email nowadays is quite a bit different from the way one sorted email 10 or 15 years ago. Remember pine? Lots of people used to use it, and I suppose a lot of people still do. And pine was “user friendly” software on unix, relative to “mail.” Will the person who uses pine be able to do well on this test’s questions about sorting email? If the test assumes a software package like MS Outlook, maybe not.