A Theoretical Introduction to Multimodal Compositions and Assessment
There is no denying that we have moved into an age in which computer technology is a major part of our daily lives, both in and out of the classroom. With the proliferation of blogs, wikis, YouTube, and other Web 2.0 software, avoiding technology in our composition classrooms is no longer a viable option. As Gunther Kress (2003) argues in Literacy in the New Media Age, the screen and the image have now replaced the book and the written word as the dominant means of communication. Throughout the book, he explores how these changes will affect the future of literacy. Kress (2003) explains:
It is no longer possible to think about literacy in isolation from a vast array of social, technological and economic factors. Two distinct yet related factors deserve to be particularly highlighted. These are, on the one hand, the broad move from the now centuries-long dominance of writing to the new dominance of the image, on the other hand, the move from the dominance of the medium of the book to the dominance of the medium of the screen. These two together are producing a revolution in the uses and effects of literacy and of associated means for representing and communicating at every level and at every domain. (p. 1)
If we agree with Kress (2003) that we have entered the age of the screen, then we must acknowledge the opportunity in the composition classroom for student projects that blend word with image. Likewise, it becomes our responsibility to determine how we will assess these new compositions.
The issue of assessment is complicated by the fact that we must evaluate several modes in conjunction and in interaction with one another. Ron Fortune (2005) illustrates this complication in "You're Not in Kansas Anymore: Interactions among Semiotic Modes in Multimodal Texts" when he explains that, in reading and assessing multimodal compositions, instructors cannot simply look at each mode separately (i.e., writing as one entity and image as another); rather, the modes need to be examined together as one entity. As he explains, "focusing on the interaction or some reciprocity between writing and images is difficult because, though each is sufficiently complicated on its own, we double the problem's complexity when we try to see how they intermingle" (p. 51). Although meaning can be made from the individual modes of a composition, the composition cannot be truly understood or assessed unless it is examined as a whole. We must therefore determine how to assess the intermingling of word and image while still remaining true to the goals of a composition course.
Thus, it needs to be understood that word and image are not two separate entities and that meaning is constructed through the combination of the two. For instance, Lester Faigley (2004) explains that "To read any text, a reader pulls meaning from a system of signs: letters, words, sentences, paragraphs, shapes, colors, pictures" (p. 111). Furthermore, J. L. Lemke (2004) argues in "Metamedia Literacy: Transforming Meanings and Media" that all literacy consists of more than one mode, since, to make meaning, we use both linguistic and non-linguistic signs. Like Fortune (2005), Lemke (2004) suggests that multimodal composition is about more than figuring out what each element means separately; it is about the relationships between the modes and how they affect each other. Lemke (2004) writes, "There was a time perhaps when we could believe that making meaning with language was somehow fundamentally different, or could be treated in isolation from making meaning with visual resources or patterns of bodily action and social interaction" (p. 71). He goes on to say that "text and picture together are not two ways of saying the same thing; the text means more when juxtaposed with the picture, and so does the picture when set beside the text" (Lemke, 2004, p. 77).