Rich Haswell
(Texas A&M University, Corpus Christi)

Text-checkers—computerized spell-checkers, grammar-checkers, and style-checkers—have been around for three decades. The programs compare the words in a text file against a vocabulary of conventional spellings, calculate the rate of passive constructions and raise a red flag if the rate is too high, question clichés and idiomatic expressions, capitalize the next word after a full stop, compute “readability” formulas, and perform a host of other operations. Currently integrated into all the popular word-processing and email packages, text-checkers are endemic to digital composing. Usually they function willy-nilly, unless the writer has the initiative and know-how to turn them off.
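To make the mechanics concrete, here is a minimal sketch of the two oldest of these operations, dictionary lookup and passive-rate flagging. The sketch is mine, written in Python with an invented miniature vocabulary and a deliberately crude passive-voice pattern; it stands in for no actual commercial product:

```python
import re

# Toy vocabulary standing in for a spell-checker's embedded word list
# (invented for illustration; real checkers ship tens of thousands of words).
DICTIONARY = {"the", "report", "was", "written", "by", "committee",
              "wrote", "it", "quickly"}

# A crude passive-voice pattern: a form of "to be" followed by a word
# ending in -ed or -en. Real grammar-checkers are far more elaborate.
PASSIVE = re.compile(r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b",
                     re.IGNORECASE)

def check(text: str, passive_threshold: float = 0.25) -> list[str]:
    """Return warnings in the spirit of an early text-checker."""
    warnings = []
    # Spelling: compare each word in the text against the vocabulary.
    for word in re.findall(r"[A-Za-z']+", text):
        if word.lower() not in DICTIONARY:
            warnings.append(f"possible misspelling: {word!r}")
    # Style: raise a red flag if the rate of passive constructions is too high.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    passives = sum(1 for s in sentences if PASSIVE.search(s))
    if sentences and passives / len(sentences) > passive_threshold:
        warnings.append(f"passive rate too high: {passives} of {len(sentences)} sentences")
    return warnings

print(check("The reprot was written by the committee. It wrote the report quickly."))
```

Run on the sample sentence, the sketch flags the typo “reprot” and the passive first sentence, and, like its commercial descendants, it would just as happily flag a perfectly correct word that its vocabulary happens to lack.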

What has been the composition community’s reaction to this now pervasive—some would say invasive—machinery? Individually, the response varies. Bob Broad records one teacher apparently evaluating a student’s spelling errors more harshly because the student’s class met in a computer classroom: “Do they use spell check?” Another of his teachers excuses a student who had misspelled a proper name because “the spell-checker’s not going to pick that up, so I gave him a little leeway there” (What We Really Value: Beyond Rubrics in Teaching and Assessing Writing, Utah State University Press, 2003, p. 115). Collectively, it is hard to say how the writing-teacher community has dealt with the encroachment of text-checkers over the years into its evaluation procedures and other teaching practices. There is no substantial review of the literature.

For some baseline information to help answer the question, I have put together a chronology of the technology of text-checkers along with a bibliography of substantive commentary on them. I have sorted the history of the technology and the history of the commentary year by year, the better to see patterns and interrelationships. My time-line bibliography is intended especially for the use of writing teachers and writing scholars—across the academic disciplines and in the workplace—and is offered in the hope that informed critique of this particular piece of auto-instructional technology will continue.

In gathering and organizing the material, however, I observed three curiosities that I can’t resist passing on. The first has to do with the accuracy of the text-checking programs. Fairly early in their history the imperfect performance of text-checkers was noted (e.g., Frase, 1981; Sommers, 1982). Spell-checkers are more accurate than grammar-checkers, of course, but in either case the rate of inaccuracy is far from negligible. Using student writing, Collins (1989) and Brock (1993) compared non-spelling mistakes detected by the most popular programs (Sensible Grammar, RightWriter, Grammatik, etc.) with those detected by writing teachers, and found machines and teachers identifying the same mistakes less than 10% of the time. It can be argued that the detection of any amount of error in a student’s writing is a bonus for the student, but that disregards the times the programs identify correct forms as incorrect. Typically, false positives or “false flags” make up 30% to 40% of the instances the software identifies as error. What I find remarkable is not the weak performance of the software but the fact that this inaccuracy has been reported for twenty-five years now and little seems to have come of it. Software designers don’t improve their products, and teachers don’t seem to mind students using them. Bruce Wampler, who spent seven years improving his grammar-checker program, Grammatik, before selling it to WordPerfect in 1992, remarked in 2002 that he believed WordPerfect had made no changes in the code since then (Kies 2005). Kathleen Kiefer, who helped develop Writer’s Workbench in the early 1980s, argues that it is still more accurate than the most recent versions of Microsoft Word (cited in Mike Palmquist, “Tracing the Development of Digital Tools for Writers and Writing Teachers,” in Ollie Oviedo, Joyce R. Walker, and Byron Hawk (Eds.), Digital Tools in Composition Studies: Critical Dimensions and Implications, Hampton Press, forthcoming).
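The two accuracy figures can be read as simple set arithmetic: treat each flagged mistake as an item in a set, and the agreement and false-flag rates fall out directly. The sketch below is hypothetical; the flag labels and numbers are invented for illustration, not drawn from Collins, Brock, or any actual program:

```python
# Hypothetical flag sets for one student paper (labels and counts invented).
teacher_flags = {"frag:3", "agr:7", "comma:9", "ref:12", "ww:15"}
machine_flags = {"agr:7", "passive:2", "passive:8", "spell:4", "long:11"}

# Agreement: how often machine and teacher identify the same mistake
# (one possible reading of the "less than 10%" finding).
overlap = teacher_flags & machine_flags
agreement = len(overlap) / len(teacher_flags | machine_flags)

# False flags: machine warnings the teacher would not endorse. For
# simplicity this treats every such warning as a false flag, though in
# practice some would be real errors the teacher happened to miss.
false_flag_rate = len(machine_flags - teacher_flags) / len(machine_flags)

print(f"agreement: {agreement:.0%}")          # 11% in this toy example
print(f"false flags: {false_flag_rate:.0%}")  # 80% here; 30-40% is typical
```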

The comments by Wampler and Kiefer connect with a second curiosity, which involves what might be called the commodification of the technology. Roughly, text-checking capability moved from a mainframe “general inquirer” method with an embedded vocabulary and text processed via punchcards (1950’s-1960’s); to line-editors, still connected to mainframe computers, processing fixed-line text displayed on a typewriter, a TV screen, or a CRT (1970’s); to stand-alone programs that could analyze text via external disks connected to a personal computer (1980’s); to bonus features of word-processing software packages that could be installed and activated if one wished to vet a text (1990’s); to default features of word-processing programs that run constantly (“auto-correct”) unless the user chooses to de-activate them (mid-1990’s). In short, the commodity has moved from self-controlled to automatic, from manifest to hidden. The curiosity is that scholars researching text-checkers seem to have bought into this process of commodity naturalization. The bulk of their critique has focused on the earlier stand-alone programs, with little of it investigating the later integrated word-processing packages. Wampler himself notes the decline in critique, arguing that the decision to use a plug-in product was an “active choice,” and that “since grammar checking has become a standard feature of word-processing, this self-filtering is gone” (Wampler 1995). The opposite could be argued, however: maybe users did not lose control of text-checking but gained it. With integrated word-processing software, writers could apply text-checking on the fly, whenever in the act of composing they wanted it. The crucial shift was expressed early on in Bryan Pfaffenberger’s piece for Research in Word Processing Newsletter, “Integrated Word Processing: Has It Arrived?” (1987), in which he fantasized his “exemplary writing tool: the green box on your screen is not merely a space in which to write; it’s also a gateway to a world of writing accessories, all of which are available at a keystroke,” including “a context-sensitive style guide.” Five years later, in 1992, he had his exemplary tool when Microsoft Word 5.0 included a grammar-checker. Many users quickly learned not to install it, since it occupied about half of the program’s memory partition, but industry soon solved that problem with improved memory chips. More and more the capability was built into the users’ own machines. Critique of the programs may have faded the more they were “owned” by their purchasers.

Whatever the causes, they are related to the third curiosity, which is the overall decline of scholarship on text-checkers in the last ten years. I don’t pretend that the following bibliography is complete, but I searched rather evenly over the years. Beginning with 1980 (the year after the release of WordStar, the first word-processing software to include a spell-checker) and proceeding by two-year increments, here are the numbers of items (a short tallying sketch follows the table):

1980-1981: 17
1982-1983: 33
1984-1985: 59
1986-1987: 42
1988-1989: 37
1990-1991: 32
1992-1993: 36
1994-1995: 13
1996-1997: 15
1998-1999: 5
2000-2001: 7
2002-2003: 7
2004-2005: 6

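For the record, the tallying itself is trivial; here is a minimal sketch, assuming each bibliography entry carries a publication year (the years below are invented stand-ins, not the actual 336 entries):

```python
# Floor each publication year to its two-year bin and count per bin.
from collections import Counter

years = [1980, 1981, 1984, 1985, 1985, 1992, 2004]  # hypothetical entry years

bins = Counter()
for y in years:
    start = 1980 + ((y - 1980) // 2) * 2  # e.g., 1985 -> the 1984-1985 bin
    bins[f"{start}-{start + 1}"] += 1

for period in sorted(bins):
    print(period, bins[period])
```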
The same phenomenon has been documented in studies of word-processing in general by Bernard Susser (Computers and Composition 15.3, 1998, pp. 347-372). Perhaps we are looking at a particular combustion that occurs when technology and writing research meet, one that might be called the “novelty effect.” The plug-in text-checker programs that dominated the market in the 1980’s were more of a breakthrough technology than were the later integrated programs, most of which were just the old stand-alone programs with minor code changes (e.g., Grammatik built into WordPerfect, Correct Grammar into WordStar).

Or maybe we are looking at a commodification of scholarship that parallels the commodification of technology. A new technology often peaks early in the number of launched products and then gradually decreases in volume as a few successful products take over the market; so in scholarship, an early flurry of pieces is followed by a decline in production as scholars find less that is new to say and only a few old pieces are perpetuated through reprints and citations. Let’s hope not. Maybe all we are seeing is teachers losing interest in an aspect of teaching composition, attention to surface features, which more and more they have come to feel is secondary and which they are happy to turn over to mechanical household aids. Then the question is whether teachers are aware of how poorly the machines are doing the chores, or of how the students are getting along with the hired help.

In terms of scholarly understanding, the bottom line is that there is much still to uncover, as a few recent analyses have brilliantly shown (e.g., McGee & Ericsson, 2002; Haist, 2004; Kies, 2005). May the following bibliography do its small part in encouraging more of the same.

As for the parameters of the bibliography, I have focused rather tightly on hardware and software that supports spell-, grammar-, and style-checking. I do not include the computerization of readability formulas, which forms part of many text-checking packages but which technologically and instructionally follows a somewhat different history. Nor do I include much commentary that deals with the development of editing and formatting software for publishing, which often contained grammar- and spell-checking components; or with programmed autotutorial instruction (“teaching machines”), which typically dwelt heavily on grammar; or with the CAI interactive tutorial composing programs (TICCIT, WANDAH, HOMER, WORDSWORTH, SEEN, and a host of others), most of which included text-checking capability or links to it. Reluctantly, I have also omitted the scholarship on text-checking with special populations, for instance the fascinating work done on hardware and software for the visually handicapped, or for students learning English as a second language (e.g., Cornelia Tschichold, “Grammar Checking for CALL: Strategies for Improving Foreign Language Grammar Checkers,” in Cameron, Ed., CALL: Media, Design and Applications, Swets & Zeitlinger, 1999, pp. 203-222). Nor have I included the excellent work on the accuracy of text-checkers in languages other than English (e.g., Jack Burston, “A Comparative Evaluation of French Grammar Checkers,” Calico Journal 13.2/3, 1995, pp. 104-111). I have also largely excluded the growing literature—because it is a growing technology—on automated grading or scoring of student writing; that material will be found in a bibliography of its own, appearing in Machine Scoring of Student Essays: Truth and Consequences, edited by Patricia Ericsson and myself, in press at Utah State University Press. Finally, I should note that I have mostly omitted mere notices or descriptions of new technology.

There are 336 items. The earliest, up to about 1970, are included just to indicate a few precursors to the composing and instructional text-checking technology that came later. I have appended a few search terms to each entry, but please do not trust them too much. Here are some of the less intuitive search terms:

  • accuracy: testing of the degree to which text-checking programs succeed in detecting solecisms and ignoring non-solecisms
  • basic: study involving remedial writing courses
  • computer-analysis: computerized analysis of text for diagnostic purposes, including checking of spelling, grammar, or style (terms which overlap, of course)
  • data: study extracting factual information that would allow for replication of the study
  • instruction: scholarship addressing the teaching of writing anywhere
  • machine-scoring: computerized analysis of text to give it an evaluative score or grade
  • record-keeping: computer software that assists information recording, such as grades, attendance, or summed points
  • school: study involving grade-school, middle-school, or high-school instruction (the default is post-secondary instruction)

I want to acknowledge the generous feedback I received on this manuscript from Gail Hawisher, Glenn Blalock, and especially Mike Palmquist, who sent me a pre-publication copy of his encyclopedic “Tracing the Development of Digital Tools for Writers and Writing Teachers” (forthcoming), from which I borrowed a few bibliographic items. I’m fully responsible for the opinions above and the facts below, along with any hitches and glitches that MS Word did not catch.

Text-checkers: A timeline and a bibliography of commentary [PDF]