Convergence of Listening and Reading Processing



Gale M. Sinatra

University of Massachusetts


THE RELATION between listening and reading is important for theory as well as practice. Once a word has been recognized, is the comprehension process for reading the same as for listening? This study tested the point of convergence of linguistic information from auditory and visual channels. Forty college students were asked to indicate whether two visual stimuli presented on a computer screen were the same or different; before each pair was presented, the student heard an auditory stimulus, which either matched or did not match the first visual stimulus. Four types of stimuli were chosen to reflect different levels of processing: sentences, syntactic but meaningless word strings, random word strings, and pronounceable nonwords. As measured by reaction times, the visual comparison was significantly faster when subjects first heard a matching auditory stimulus for sentences, syntactic nonsense strings, and random words, but not for nonwords. The results suggest that listening and reading processing converge at the word level, and that words processed aurally and visually share the same lexicon. This finding suggests that, at least in the young child, once decoding has been mastered, the processes of reading comprehension and listening comprehension are the same.






Much of the research in reading comprehension and listening comprehension makes the assumption that, after word identification, the cognitive processes and the mental representations elicited by the two modes of input are the same (e.g., Fries, 1963; Goodman, 1970; Kavanagh & Mattingly, 1972; Sticht, Beck, Hauke, Kleiman, & James, 1974; Perfetti, 1985). In other words, a unitary (or single) comprehension process is activated regardless of the mode of input (Danks, 1980). According to this unitary process view, reading consists of listening comprehension plus decoding. Thus, once decoding is mastered, reading should not require any separate skills distinct from general language skills. Sticht et al. (1974) have claimed, for example, that reading uses the same language ability and cognitive resources as listening, plus the ability to search a visual display and decode print into speech.

The assumption that, following word identification, the processes of comprehending speech and print do not differ leads to a number of hypotheses concerning the relation between reading performance and listening performance. For example, Sticht et al. (1974) suggest that performance in listening will exceed performance in reading until reading skill is mastered. However, once decoding skills are mastered, measures of listening comprehension performance should be predictive of performance on measures of reading comprehension, and gains from instruction in a listening skill (e.g., listening for the main idea) should transfer to performance on the same skill in reading. Sticht et al. (1974) present a voluminous review of studies that provide evidence concerning these hypotheses, but the studies are based largely on correlational research and are thus not conclusive with respect to the relation between the cognitive processes of listening and reading.

Although most researchers seem to have adopted the unitary process view, several have suggested that the differences between the linguistic stimuli of the two modalities are sufficient to postulate separate processes for listening and reading. The dual process theory maintains that, although reading and listening share some common elements, the differences are sufficient to assume that reading comprehension and listening comprehension are different processes.

The most obvious distinction between the two modalities is that reading requires the decoding of printed symbols in order to recognize words, whereas listening does not (Townsend, Carrithers, & Bever, 1987). This distinction suggests the tasks are intrinsically different and thus require independent sets of processes. For example, Rubin (1980) noted that, because the speaker and the listener share the same context, the listener can take advantage of such nonlinguistic cues as gestures and facial expressions, which contribute to communication. Carroll and Slowiaczek (1985) suggest a number of other differences in the nature of the stimuli processed in spoken and in written language. In spoken language, the signal decays rapidly; in written language, information is relatively permanent. The rate of information is controlled by the producer in spoken language, but by the perceiver in written language. In spoken language, sentences are often fragments, whereas in written language, sentences are usually complete and grammatical. In spoken language, there is a great deal of prosodic information; in written language, there is no prosodic information except for the minimal cues provided by punctuation.

In addition to differences in the linguistic stimuli, researchers have pointed to developmental differences suggesting that there are separate processes for listening and reading comprehension. Mattingly (1972) has noted that every normal child develops the ability to understand his or her native spoken language. In contrast, children must be deliberately taught to read, and many fail to learn to do so, in spite of having adequate listening skills. Miller (1972) points out that written language is historically a more recent development than spoken language. Furthermore, he notes that writing did not originate as a record of speech; rather, it evolved from pictographs as an alternate form of communication. Danks (1980) argues that although differences in the historical development of spoken language and written language do not necessarily indicate differential processing, they do suggest that the processing of spoken and written language may not be identical.

Perfetti (1987) has emphasized that these two views on the nature of listening and reading processing have significant implications for reading instruction. The unitary process view suggests that reading should be taught only until the process of decoding is mastered, and that instruction in other reading skills is not necessary. The dual process view suggests that it is necessary to teach not only decoding but also skills necessary for the specific task demands of reading comprehension. Danks (1980), for example, suggests that, if there are separate processes for listening and reading comprehension, then reading instruction should continue even after children have become skilled decoders. He suggests that more advanced reading instruction could emphasize skills he sees as specific to reading, such as outlining, analyzing the structure of paragraphs, and learning how to follow styles of argument development. Danks points out that if curriculum designers knew exactly how listening comprehension and reading comprehension differ, they could design specialized reading curricula to address the demands of both types of processing.

Convergence of listening and reading processing

Most unitary- and dual-process theorists agree that listening and reading share common processing at some point; however, little research has been aimed at discovering at what point listening and reading processes converge. As noted above, much of the research regarding the relation between listening and reading has been limited to studies of the correlation between listening performance and reading performance.

Some empirical evidence about the relation of listening processes to reading processes has come from cross-modality priming studies. These studies have examined how information processing in one modality (e.g., listening) influences processing in the second modality (e.g., reading). For example, a number of researchers (e.g., Seidenberg, Tanenhaus, Leiman, & Bienkowski, 1982; Swinney, 1979) have used cross-modality priming to study the processing of ambiguous words (words with multiple meanings). They first presented subjects aurally with an ambiguous word within a semantic context that would bias the interpretation of that word (i.e., lead the listener to choose one meaning out of the multiple meanings possible). The subjects were then presented visually with a letter string and asked to make a lexical decision (i.e., to decide whether the letter string was a word or not). The researchers found that less time was required to make a lexical decision for words related to one of the possible meanings of the ambiguous word when both words were presented simultaneously (or within 200 msec). When the visual string was presented after several more syllables of aural text, the lexical decision was facilitated only for words that were related to the biasing context, but not for words that were related to one of the other meanings of the ambiguous word. These studies show that when a listener hears an ambiguous word, initially both meanings of the word are accessed, but after 200 msec only the contextually related meaning remains activated. Thus, the processing of auditory information can affect the processing of visual information. These studies also suggest that there is a common lexicon for aurally and visually presented words.

Kirsner and Smith (1974) investigated whether there is a single lexicon or separate visual and auditory lexicons by examining both cross-modality and within-modality effects on word recognition. A lexical decision task was presented to subjects either visually or aurally. Each item was then repeated a second time, in either the same or the opposite modality. Kirsner and Smith's results showed that lexical decision time was less for the second presentation of a word or a nonword when both presentations were in the same modality. Also, in the cross-modality condition, the lexical decision on the second presentation was facilitated somewhat for words, but was not facilitated for nonwords. In addition, the accuracy data showed that there were fewer errors on the second presentation for both words and nonwords (averaged over conditions). Accuracy at the second presentation of words was greater when both presentations were in the same modality than when the second presentation was in the opposite modality. The accuracy data for nonwords did not show this advantage. These results support the notion of a common lexicon for words that are read and heard.

More recently, Hanson (1981) investigated whether written and spoken words share common processing systems. She presented words simultaneously in both modalities, but asked subjects to attend to only one modality. Subjects were asked to make decisions regarding the semantic, phonological, or physical properties of the attended word. In the semantic task, subjects were asked to decide whether the word in the attended modality was a member of a particular semantic category. The words presented in the unattended modality were (a) redundant with the attended word, (b) a member of the same semantic category as the attended word (e.g., chair/lamp), or (c) a member of a different semantic category. In the phonological task, subjects were asked to decide whether the attended word contained a target phoneme. The unattended word was (a) redundant with the attended word, (b) a different word that contained the target phoneme, or (c) a different word that did not contain the target phoneme. The physical task required subjects to make decisions regarding nonlinguistic properties of the stimulus. When attending to the visual modality, the subject was to decide whether the word was in upper or lower case. When attending to the auditory modality, the subject was to decide whether the stimulus was presented by a male or a female voice. In both conditions, the unattended word was either redundant with or different from the attended word. By manipulating the level of stimulus analysis required for response decisions, Hanson was able to test for the influence of a common code for written and spoken words at different levels of processing. Hanson argued that if there is a common representation of words presented in the two modalities at any of these levels of processing, then decisions involving that level of analysis should be influenced by the properties of the unattended word. She found response facilitation in the semantic and phonological tasks, but not in the physical task. Hanson concluded that written and spoken words share semantic and phonological processing, but that information is coded separately for the two modalities prior to the point of convergence of the two inputs.

In his model of reading based on a biological metaphor, Royer (1985) proposed a convergence of listening and reading processing at a point in the model called the syntactic/conceptual level. According to this model, when a person hears a sentence, activation passes up through an auditory processing hierarchy composed of auditory feature detectors, an auditory spelling pattern echelon, an auditory word echelon, a syntactic/conceptual echelon, an episodic echelon, and a scriptal echelon. When the person reads the same sentence, activation passes up a similar visual pathway that converges with the auditory pathway at the syntactic/conceptual level. A sentence that is read immediately after it is heard would be processed more quickly because nodes at the syntactic/conceptual, episodic, and scriptal echelons would have a lowered threshold, due to the activation caused by the auditory sentence. Royer's proposal that the point of convergence of the two modalities is at the syntactic/conceptual level is based on a developmental view: In beginning readers, processing at higher levels is already well developed from their experience with spoken language. Processing at lower levels, on the other hand, is modality-specific: Learning to read requires the development of a visual analysis pathway for words.

Current study

Most previous studies of the convergence of listening and reading processes have been limited to examining the processing of single words, rather than complete sentences. In the current study, the convergence of listening and reading processes was examined using full sentences as well as other linguistic stimuli. These various types of linguistic stimuli were used in order to activate processing at the phonemic, lexical, syntactic, and semantic levels.

Subjects listened to an auditory stimulus and then were asked to decide whether two visual stimuli were the same or different. The auditory stimulus was either identical to or completely different from the first visual stimulus. The two visual stimuli were either identical to each other or differed by one word. The subject's task was to decide whether the two visual stimuli were identical to each other or different from each other. Forster (1979) has noted that response times in a comparison task of this kind consist of the following components: (a) the time needed to establish mental representations of the two stimuli, (b) the time needed to compare the representations, and (c) the time needed to evaluate the outcome of the comparison in terms of the task decision. Forster notes that the comparison task has been used successfully in the area of word recognition, and that, by varying the nature of the stimuli, it can also be used to investigate sentence processing.
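In the notation of the present task (the notation is ours, not Forster's), this decomposition of the measured response time can be written as

\[ RT \;=\; t_{\mathrm{represent}}(V_1, V_2) \;+\; t_{\mathrm{compare}} \;+\; t_{\mathrm{evaluate}}, \]

where $V_1$ and $V_2$ are the two visual stimuli. A prior auditory stimulus could, in principle, shorten the representation (encoding) term, the evaluation (decision) term, or both; the predictions below concern the encoding term, and the Discussion returns to the decision term.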

Four types of stimuli were included in the present study to activate various levels of processing. First, nonword strings were used to evoke processing at the phonemic level. These were strings of pronounceable nonsense words, which could be processed up to the phonemic level but no further. Second, for processing up to the lexical or word level, I used random word strings, which were groups of real words that together had no semantic interpretation and were not syntactically correct. Such a list of random words could have a lexical representation for each word (in addition to phonemic representations), but could not be represented in terms of the syntax of the group of words or the sentence-level meaning. Third, syntactic nonsense strings were used to evoke a representation at the syntactic level of processing. These syntactically correct strings could be represented at a level where the syntax and the meanings of individual words were preserved. However, because they had no possible semantic interpretation, they could not be represented at the level in the processing hierarchy where the meaning of sentences is preserved. Finally, full good sentences were used to evoke processing at all levels up to and including the semantic, or meaning, level. These sentences were semantically and syntactically correct.

The logic behind the study was that the processing of an auditory stimulus will have an effect on the processing of a similar visual stimulus only if the auditory stimulus can be represented at or beyond the point of convergence of visual and auditory processing. For example, if there is no common representation of words (i.e., if there are separate auditory and visual lexicons), then hearing a string of words such as tire book month would have no effect on the processing of the visual string tire book month; different representations would be activated. If, however, these words were represented in a lexicon common to both modes of presentation, then hearing the string tire book month would facilitate the processing of the visual string tire book month, because both strings would activate the same representation in the lexicon.

There are several possible sources of a cross-modality effect of the processing of an auditory stimulus on the processing of two visual stimuli. First, as described above, hearing an auditory stimulus could facilitate the process of encoding the same stimulus presented visually. Reading the first visual stimulus might also facilitate the encoding of the second visual stimulus when the two visual stimuli are the same. There may also be some carry-over of the facilitation from the auditory stimulus to the second visual stimulus.

In addition, there may be a facilitative or inhibitory effect of the auditory stimulus on the decision component (Forster, 1979) of the comparison task, due to expectations set up by the task structure. For example, a match between the auditory stimulus and the first visual stimulus may set up an expectation of a match between the two visual stimuli; this expectation would facilitate the decision time. Whether the effect of the processing of the auditory stimulus is on the encoding of the visual stimuli, the decision component of the visual comparison task, or both the encoding and the decision processes, there is likely to be an effect on the time necessary to make a response.

In the present experiment, an auditory stimulus is expected to have an effect on the process of encoding an identical visual stimulus when the auditory and visual stimuli share a common representation. This assumption leads to the following set of predictions:

1. If the auditory and visual pathways do not converge until the semantic level (the point in the processing system where the meaning of sentences is represented), then an encoding effect would be expected for the full good sentences only.

2. If the two pathways converge at the syntactic level (the point in the processing system where syntactic information is represented), then an encoding effect would be expected for the full good sentences and syntactic nonsense strings, but not for the random word strings or nonword strings.

3. If the auditory and visual pathways converge at the word or lexical level, then an encoding effect would be expected for the random word strings as well as the full good sentences and syntactic nonsense strings, but not for the nonword strings.

4. If the two pathways converge at the phonemic level, then an encoding effect would be expected for all four stimulus sets.

5. If the auditory and visual pathways do not converge, then there should be no encoding effect for any of the stimulus types.
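These five predictions amount to a mapping from the hypothesized convergence point to the set of stimulus types that can be represented at or beyond that point. The sketch below (hypothetical code of ours, not part of the study) makes that logic explicit:

# Hypothetical sketch (ours) of the logic behind predictions 1-5: a stimulus
# type should show a cross-modal encoding effect exactly when its highest
# level of representation lies at or beyond the hypothesized convergence point.
LEVELS = ["phonemic", "lexical", "syntactic", "semantic"]

HIGHEST_LEVEL = {                      # from the stimulus descriptions above
    "nonword strings": "phonemic",
    "random word strings": "lexical",
    "syntactic nonsense strings": "syntactic",
    "full good sentences": "semantic",
}

def predicted_effects(convergence_point):
    """Stimulus types expected to show an encoding effect if auditory and
    visual processing converge at convergence_point (None = no convergence)."""
    if convergence_point is None:      # prediction 5
        return set()
    cutoff = LEVELS.index(convergence_point)
    return {stim for stim, level in HIGHEST_LEVEL.items()
            if LEVELS.index(level) >= cutoff}

# Prediction 3 (convergence at the lexical level) expects an effect for every
# stimulus type except the nonword strings.
assert predicted_effects("lexical") == {"full good sentences",
                                        "syntactic nonsense strings",
                                        "random word strings"}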

Method

Subjects

Subjects were 40 undergraduate students recruited from psychology classes at the University of Massachusetts. The students received class credit for their participation in the experiment.

Apparatus

A Godbout CompuPro microcomputer controlled the presentation of both the auditory and the visual stimuli. Subjects sat facing a CRT terminal and a three-button console that were connected to the computer. All written stimuli were presented on the computer screen. The auditory stimuli were presented through headphones. Subjects used a button on the left, marked START, to begin a set of trials, and used the two buttons on the right, marked SAME and DIFFERENT, to register their decisions about the stimuli.

Materials

The 192 sentences from which stimuli were developed were simple 5- to 9-word sentences. They were taken from examples presented in three style manuals: The Practical Stylist (Baker, 1973), Modern English: A Practical Reference Guide (Frank, 1972), and The Structure of English Clauses (Young, 1980). These sentences were randomly divided into four sets of 48 sentences each. A different set of sentences was used to generate each type of stimulus in order to avoid excessive repetition of the same words across trials. Table 1 shows examples of each stimulus type. (See the Appendix for a complete list of stimuli.)

Full good sentences. The sentences in the first set were used in their original form. All sentences were semantically and syntactically correct.

Table 1
Examples of the four stimulus types

Type                         Examples
Full good sentences          1. Sue wants to go for a walk.
                             2. The church stands in the square.
Syntactic nonsense strings   1. They can spend the car.
                             2. It had looked the hours of the street.
Random word strings          1. owe riot month course
                             2. sending very happened heard
Nonword strings              1. trings sland sork
                             2. rame teru mest


Syntactic nonsense strings. The syntactic nonsense strings were generated from the second set of 48 sentences by replacing each noun, verb, adjective, and adverb with a word of the same part of speech that was randomly selected from another sentence in the set. The verb was changed to agree in number with the noun in the string, if necessary. If the resulting sentence could be considered meaningful, some words were re-selected until a syntactically correct but meaningless sentence resulted.

Random word strings. These stimuli were generated by scrambling the words in the third set of 48 sentences. Articles, prepositions, conjunctions, quantifiers, and auxiliary verbs were omitted to minimize differences in reading time between stimulus types, as people generally read sentences more quickly than they read lists of random words.

Nonword strings. The nouns, verbs, adjectives, and adverbs of the final 48-sentence set were manipulated to produce strings of pronounceable nonsense words. One or two letters of each content word were replaced to obtain the pronounceable nonwords. A pilot study was conducted to normalize the spelling of each nonword. Fifteen subjects each listened to a tape recording of the spoken nonwords and wrote down their best guess as to the spelling of each nonword. For each nonword, the spelling that was produced most frequently was used for its visual presentation in the main study.
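The normalization rule is simply a modal choice over the pilot transcriptions. A minimal sketch (the transcription data here are invented for illustration; only the selection rule comes from the text):

from collections import Counter

# Fifteen pilot subjects' attempted spellings of one spoken nonword
# (hypothetical transcriptions; 15 responses in all).
transcriptions = ["sland", "sland", "slanned", "sland", "slande"] * 3

# The most frequently produced spelling is used for visual presentation.
modal_spelling, count = Counter(transcriptions).most_common(1)[0]
print(modal_spelling, count)   # -> sland 9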

Length of all stimuli. A pilot study was conducted to determine the length for each type of stimulus. Ten subjects listened to 52 stimuli of varying length that were selected randomly from the four stimulus types and presented in random order. The stimuli included sentences and syntactic strings of 5-9 words in length, random word strings of 4-6 words, and nonword strings of 3-5 nonwords. After each stimulus had been presented, subjects were asked first to do a mental arithmetic problem and then to recall what they had heard. The length that showed the highest average recall across subjects was 3 items for nonword strings and 4 words for random word strings. There was no difference in recall due to length for the syntactic nonsense strings or full good sentences. Therefore, each nonword string was limited to exactly 3 nonwords and each random word string contained 3 or 4 words; the length of the syntactic nonsense strings and full good sentences was unaltered.

Foils for all stimuli. For the trials in which the auditory stimulus differed from the first visual stimulus, 16 of the 48 stimuli of each type were randomly selected to be used as auditory foils. Stimuli used as auditory foils were not presented visually at any time. The remaining 32 stimuli of each type were used as the first visual stimulus in each trial (and as the auditory stimulus in the auditory match condition).

For half of the trials, in which the two visual stimuli were to match, the same sentence or string appeared twice on the computer screen. For the other half of the trials, the second visual stimulus differed from the first visual stimulus by only one word or nonword. For nonword strings, the replacement nonword was of the same length as the nonword it replaced. For all other stimuli, the replacement word was similar in length and meaning to the word it replaced. For example, I can't find my car keys was changed to I won't find my car keys. Preserving length and meaning was important to ensure consistent task demands across trials; otherwise, subjects could have made judgments based on differences in meaning or length of the stimulus, rather than on differences in wording.

Order of presentation. Three stimuli were presented in each trial: one auditory stimulus and two visual stimuli. The auditory stimulus either matched or did not match the first visual stimulus, and the first visual stimulus either matched or did not match the second visual stimulus; these conditions were crossed. Thus, there were four experimental conditions for each type of stimulus, or 16 total experimental conditions. The three stimuli presented in each trial were always of the same type (three full good sentences, three syntactic nonsense strings, three random word strings, or three nonword strings). There were 32 trials for each type of stimulus, resulting in a total of 128 trials.

Each subject completed 128 trials. Experimental conditions and stimulus types were presented in one of four random orders, in order to minimize the effects of order of presentation. Each stimulus appeared in a different condition in each of the four orders.

The auditory stimuli were recorded on audiotape in each of these four orders. An auditory foil was substituted whenever the auditory stimulus was to appear in an auditory mismatch condition. Four orders of the visual stimuli were developed in correspondence with the four tape recordings, such that subjects who listened to a particular audiotape would see the appropriate visual stimuli presented on the computer screen. Subjects were randomly assigned one of the four presentation orders. Each subject saw every type of stimulus presented in every condition (but saw each individual stimulus in only one condition).
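The design can be summarized compactly: crossing stimulus type with the two match factors yields the 16 cells, and spreading the 32 trials per stimulus type evenly over the four cells of that type (as the counterbalancing implies) gives 8 trials per cell. A minimal sketch (ours, for illustration only):

from itertools import product

stimulus_types = ["full good sentences", "syntactic nonsense strings",
                  "random word strings", "nonword strings"]
auditory = ["match", "mismatch"]   # auditory stimulus vs. first visual stimulus
visual = ["match", "mismatch"]     # first vs. second visual stimulus

conditions = list(product(stimulus_types, auditory, visual))
assert len(conditions) == 16       # 4 x 2 x 2 experimental conditions

trials_per_cell = 32 // 4          # 32 trials per stimulus type, 4 cells each
assert trials_per_cell * len(conditions) == 128   # trials per subject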

Procedure

Subjects received instructions explaining the nature of the task and asking them to compare the two visual stimuli only. Each subject then participated in 8 practice trials. The experimental trials began once the subject appeared to understand the task demands. Each trial began with the words Press left button for next trial displayed on the screen. As soon as the subject pressed the button, the message disappeared from the screen and the subject heard the auditory stimulus through the headphones. When the auditory stimulus ended, the first visual stimulus immediately appeared on the computer screen. All the words or letters of the visual stimulus appeared on the screen simultaneously. After a half-second pause (the intent of which was to encourage the subject to read the entire first stimulus), the second visual stimulus appeared on the screen. All letters again appeared simultaneously, and were aligned directly beneath the letters of the first visual stimulus. The two visual stimuli remained on the screen until the subject responded by pressing the button for either SAME or DIFFERENT. The computer measured the time from the onset of the second visual stimulus until the subject pressed the button in response.

Immediately after the subject had responded, either the word CORRECT or the word ERROR appeared on the screen. This feedback was given in order to remind subjects to pay attention to the visual stimuli rather than the auditory stimulus in making their comparison, and to remind subjects to monitor their accuracy (rather than just responding quickly).
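The trial sequence lends itself to a compact summary. The sketch below is ours: the original software ran on the Godbout CompuPro and is not available, so play, show, and wait_button are hypothetical I/O helpers standing in for the apparatus:

import time

def run_trial(auditory_stim, visual_1, visual_2, play, show, wait_button):
    """One trial, following the procedure described above. The helpers
    play/show/wait_button are placeholders, not the original apparatus."""
    wait_button("START")                 # subject initiates the trial
    play(auditory_stim)                  # heard over headphones
    show(visual_1)                       # first visual stimulus, all at once
    time.sleep(0.5)                      # half-second pause before the second
    show(visual_2, beneath=visual_1)     # aligned under the first stimulus
    t0 = time.monotonic()                # clock starts at onset of second stimulus
    response = wait_button("SAME", "DIFFERENT")
    rt_msec = (time.monotonic() - t0) * 1000.0
    correct = (response == "SAME") == (visual_1 == visual_2)
    show("CORRECT" if correct else "ERROR")   # accuracy feedback
    return response, rt_msec, correct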

Results

The results of this experiment will be presented in two sections. General results will be presented first, although they are not critical to the purpose of the experiment. The results that test the point of convergence of listening and reading processing directly will be presented in the second section.

General findings

A preliminary analysis of variance (ANOVA) showed no significant difference due to order of presentation of the trials; therefore, the data for all four presentation orders were collapsed for subsequent analyses.¹ The average accuracy rate across all conditions was 95 percent correct. One subject was replaced because of equipment problems during the subject's trials; three other subjects were replaced because they each took more than 5 seconds to respond on two or more trials. Table 2 shows the means and standard deviations of all subjects on all conditions.

Subjects' reaction times were analyzed using a 4 x 2 x 2 within-subjects ANOVA, with stimulus type (full good sentence, syntactic nonsense string, random word string, or nonword string), auditory condition (match/mismatch), and visual condition (match/mismatch) as independent variables. Separate ANOVAs were conducted for responses by stimulus item and by subject; results for both analyses are reported together.
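The min F' values reported below combine the by-subjects statistic ($F_1$) and the by-items statistic ($F_2$) into a single conservative test (the statistic is Clark's, 1973; the presentation here is ours):

\[ \min F'(i, j) \;=\; \frac{F_1 F_2}{F_1 + F_2}, \qquad j \;=\; \frac{(F_1 + F_2)^2}{F_1^{2}/n_2 + F_2^{2}/n_1}, \]

where $i$ is the treatment degrees of freedom and $n_1$ and $n_2$ are the error degrees of freedom of the subject and item analyses, respectively.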


Table 2
Means (and standard deviations) for subjects' reaction times by stimulus type and experimental condition

                                    Auditory match               Auditory mismatch
Stimulus type                 Visual match  Visual mismatch  Visual match  Visual mismatch
Full good sentences           1,714 (378)   1,422 (304)      1,933 (489)   1,595 (335)
Syntactic nonsense strings    1,964 (469)   1,545 (327)      2,276 (601)   1,669 (450)
Random word strings           1,597 (351)   1,393 (285)      1,887 (417)   1,469 (344)
Nonword strings               1,700 (448)   1,340 (330)      1,891 (536)   1,285 (316)

Note. All figures are rounded to the nearest msec. Standard deviations appear in parentheses.

A significant main effect was found for stimulus type in analyses both by subject and by item, Min F'(3, 161) = 8.44, p < .01. The mean reaction time was 1,681 msec for full good sentences, 1,863 msec for syntactic nonsense strings, 1,586 msec for random word strings, and 1,554 msec for nonword strings. These means are misleading when compared because stimulus type is confounded with stimulus length; for example, the nonword strings, which show the fastest mean reaction time, were the shortest in length. The mean reaction time per unit (word or nonword) was 210 msec for full good sentences, 232 msec for syntactic nonsense strings, 453 msec for random word strings, and 518 msec for nonword strings.
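The per-unit figures follow from the overall means once an average length is assumed for each stimulus type; for example, taking the full good sentences to average about 8 words (they ranged from 5 to 9 words) and recalling that each nonword string contained exactly 3 nonwords:

\[ \frac{1{,}681~\text{msec}}{8~\text{words}} \approx 210~\text{msec per word}, \qquad \frac{1{,}554~\text{msec}}{3~\text{nonwords}} = 518~\text{msec per nonword}. \]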

The ANOVAs by both subjects and items also revealed significant main effects of the auditory condition, Min F'(1, 81) = 38.08, p < .01, and of the visual condition, Min F'(1, 80) = 106.38, p < .01. These analyses, and the means, indicate that subjects responded more quickly overall when the auditory stimulus matched the first visual stimulus (M = 1,584) than when it did not (M = 1,758), and that subjects responded more quickly when the two visual stimuli were different (M = 1,465) than when they were the same (M = 1,878), as the cell means in Table 2 show.

Significant interaction effects were found in the analyses by both subjects and items for Auditory Condition x Visual Condition, Min F'(1, 145) = 17.25, p < .01, and Visual Condition x Stimulus Type, Min F'(3, 22) = 3.53, p < .05. However, these interactions are not related to the question of interest in the current study. The three-way interaction was not significant (p > .05).

Findings for the convergence of listening and reading processing

The analysis of the main effect of auditory condition showed that there was an effect of processing an auditory stimulus on the processing of a visual stimulus, suggesting some overlap between the representations of auditory and visual linguistic stimuli. To locate the point of convergence of the two types of processing, we must look at the interaction between auditory condition and stimulus type. The ANOVAs for both subjects and items showed a significant effect of this interaction, Min F'(3, 239) = 2.90, p < .05. Figure 1 displays this interaction for the analysis by subjects.

Figure 1
Mean reaction time (in msec) for both auditory conditions by stimulus type
[Line graph, vertical axis 1,400-2,400 msec, horizontal axis FGS, SNS, RWS, NWS; separate lines for the auditory match and auditory mismatch conditions.]

Bonferroni t tests on the analysis by subjects were used to identify the source of the interaction. (The family-wise error rate for all Bonferroni tests was controlled at .05 by evaluating each contrast involving two means at the .0125 level and each contrast involving four means at the .01 level.) For three of the four stimulus types, reaction time was significantly shorter when the auditory stimulus was the same as the first visual stimulus: The difference was significant for full good sentences, t(39) = 5.60, p < .0125; syntactic nonsense strings, t(39) = 5.79, p < .0125; and random word strings, t(39) = 6.37, p < .0125. There was no significant difference between the two auditory conditions in reaction time for the nonword strings, t(39) = 2.21, p > .0125. Bonferroni comparisons of the magnitude of the three differences were not significant. That is, the differences between the auditory match and auditory mismatch conditions for the full good sentences, syntactic nonsense strings, and random word strings were comparable.
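The contrast-wise levels quoted in parentheses are instances of the Bonferroni inequality, under which testing each of $m$ contrasts at level $\alpha/m$ bounds the family-wise error rate at $\alpha$:

\[ \alpha_{\mathrm{FW}} \;\le\; m \cdot \alpha_{\mathrm{PC}}; \qquad m = 4:\; \alpha_{\mathrm{PC}} = \frac{.05}{4} = .0125, \qquad m = 5:\; \alpha_{\mathrm{PC}} = \frac{.05}{5} = .01, \]

which is consistent with families of four and five contrasts, respectively.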

Although not significant, there was a 68 msec difference between the auditory match and auditory mismatch conditions for the nonword strings. To examine this difference, I plotted separately the interaction of auditory condition and stimulus type for the visual match condition (Figure 2) and the visual mismatch condition (Figure 3). As shown in these figures, there is a significant difference between the auditory match and auditory mismatch conditions for the nonword strings when the visual stimuli match. (This difference was significant for all four stimulus types at the .0125 level.) However, there is no significant difference between the auditory match and auditory mismatch conditions for the nonword strings when the visual stimuli do not match, t(39) = 1.25, p > .05. The effect of auditory condition when the visual stimuli do not match was significant for the full good sentences, t(39) = 3.49, p < .0125, and syntactic nonsense strings, t(39) = 3.09, p < .0125, and was nearly significant for the random word strings, t(39) = 1.92, p = .062. Comparisons of the above differences between the auditory match and auditory mismatch conditions for each stimulus type revealed that the magnitudes of these differences were comparable for the match condition (i.e., no significant difference at the .05 level). For the mismatch condition, the magnitudes of the differences were comparable for the full good sentences, syntactic nonsense strings, and random word strings. The magnitude of the difference for the nonword strings, however, was significantly different from that for each of the other three stimulus types (at the .01 level).

Figure 2
Mean reaction time (in msec) for both auditory conditions by stimulus type when visual stimuli were the same
[Line graph, vertical axis 1,400-2,000 msec, horizontal axis FGS, SNS, RWS, NWS; separate lines for the auditory match and auditory mismatch conditions.]

Figure 3
Mean reaction time (in msec) for both auditory conditions by stimulus type when visual stimuli were different
[Line graph, vertical axis 1,200-1,700 msec, horizontal axis FGS, SNS, RWS, NWS; separate lines for the auditory match and auditory mismatch conditions.]

Discussion

The results show that listening to a prior auditory stimulus affects the time required to decide whether two visual stimuli are identical. This effect was found for stimuli that have a linguistic representation at the semantic, syntactic, and lexical levels: namely, the full good sentences, the syntactic nonsense strings, and the random word strings.

For nonword strings, a significant effect of a prior auditory stimulus was found only in the visual match condition. This effect may have been due to facilitation (or inhibition) of the decision process rather than of the encoding of the visual nonword string. That is, if the auditory stimulus matched the first visual stimulus, the subject might have expected that the two visual stimuli would also match. Such an expectation could accelerate the decision that the two visual stimuli were the same. However, when the auditory stimulus did not match the first visual stimulus, subjects may have expected a visual mismatch, and may have responded more slowly to a visual match. The difference observed for the nonword strings in the visual match condition thus could be explained by either facilitation or inhibition of the visual match decision, but it appears to be an effect on the decision process rather than the encoding process. Whereas the results for the nonword strings appear to indicate an effect on the decision process only, the results for the full good sentences, the syntactic nonsense strings, and the random word strings appear to show an effect of the auditory stimulus on the encoding of the visual stimulus.

The results of this study thus suggest that listening processes and reading processes converge at the word level. This finding is inconsistent with the assumption that reading and listening processes are completely separate. On the other hand, it also refutes the notion that listening and reading processes are the same except for initial perceptual differences. In addition, the finding is inconsistent with the point of convergence of listening and reading suggested by some other models. For example, Royer (1985) suggests that the two processes converge at the syntactic/conceptual level; to be consistent with this model, an effect of the auditory stimulus would have to be found for the full good sentences and syntactic nonsense strings only, and not for the random word strings. Hanson's (1981) suggestion that written and spoken words share phonological processing is also inconsistent with the current results. On the other hand, the present study does support Kirsner and Smith's (1974) conclusion that written and spoken words share a common lexicon.

Some important questions were not addressed by the present study. First, although the results suggest that the processes of listening and reading converge at the word level, the two processes may diverge at some later point in the processing continuum. For example, the processing of lengthy, connected text may require processing strategies that are qualitatively different from the strategies used in the processing of aural discourse. This possibility demands further study.

Second, the current study does not address interactions between various levels of processing. Rather, each condition was designed to examine how processing an auditory stimulus at a particular level would affect the processing of a visual stimulus at the same level (e.g., the effects of auditory word processing on visual word processing). Although interaction effects were not examined in the present study, a complete understanding of how processing auditory linguistic information affects the processing of visual linguistic information would have to include an examination of the influence of the processing of higher-level auditory information on the processing of lower-level visual information.

The differences in unit processing time between the stimulus types warrant further investigation. The stimulus type at the highest level of the processing hierarchy (the full good sentence) also required the shortest processing time per word. The processing time per unit increased as the level at which the stimulus could be represented in the processing hierarchy decreased; the nonword strings required the most time per unit to process. The ease of comparison for linguistic stimuli may depend upon the size of the unit that must be processed. In other words, it is possible that readers can compare full good sentences as single units of meaning, but that they must compare random word strings word-by-word, and nonwords phoneme-by-phoneme. (Syntactic nonsense strings would have to be compared in some sort of syntactic units.) Thus, there may actually be more units to be compared in the nonword stimulus, despite its shorter length, than in the full good sentence.

Another important question is whether the effect of an auditory stimulus on the processing of visual stimuli is a facilitation effect or an inhibition effect. For example, in the current study the effect of the auditory stimulus on the processing of the visual stimuli may have been a facilitation effect (in the auditory match condition) or an inhibition effect (in the mismatch condition). Also, because the current findings for nonwords were complex, the effect of auditory nonwords on the processing of visual nonwords deserves further investigation.

The results of the present study may have implications for instruction in both listening and reading. The findings suggest that there is some commonality of processing between the two modalities. If reading comprehension depends on some of the same processes as listening comprehension, then the ratio between a student's listening skills and reading skills may be a useful indicator of a student's "reading potential" (e.g., Durrell, 1969; Carroll, 1977; Sticht & James, 1984). A student whose skills in listening comprehension and reading comprehension are comparable may be reading as well as can be expected, and may be able to improve his or her reading ability only by building a larger vocabulary or a larger knowledge base. On the other hand, a student whose reading skills are far below his or her listening skills should benefit from further instruction in decoding.

Although the current study shows commonality between listening processes and reading processes, the relation between the two is complex. In a recent article, Perfetti (1987) proposes that there is an asymmetric relation between the two skills, which changes as the child develops. He argues that the two skills are quite different in the beginning reader, but that the process of reading becomes similar to the process of listening once a child masters decoding. In the adult fluent reader, this relation may change again, and aural processing may become more similar to the process of reading. Further research is needed to understand fully the relation between these two processes. If the relation between listening comprehension and reading comprehension evolves over time, then a more complete understanding of this developmental process may have different implications for instruction in the two modalities for the beginning reader, the experienced reader, and the fluent reader.

REFERENCES

BAKER, S. (1973). The practical stylist. New York: Thomas Crowell.

CARROLL, J.B. (1977). Developmental parameters of reading comprehension. In J.T. Guthrie (Ed.), Cognition, curriculum, and comprehension (pp. 1-15). Newark, DE: International Reading Association.

CARROLL, P.J., & SLOWIACZEK, M.L. (1985, June). Modes and modules: Multiple pathways to the language processor. Paper presented at the Conference on Modularity, Amherst, MA.

DANKS, J.H. (1980). Comprehension in listening and reading: Same or different? In J.H. Danks & K. Pezdek (Eds.), Reading and understanding (pp. 1-39). Newark, DE: International Reading Association.

DURRELL, D.D. (1969). Listening comprehension versus reading comprehension. Journal of Reading, 12, 455-460.

FORSTER, K.I. (1979). Levels of processing and the structure of the language processor. In W.E. Cooper & E. Walker (Eds.), Sentence processing (pp. 27-85). Hillsdale, NJ: Erlbaum.

FRANK, M. (1972). Modern English: A practical reference guide. Englewood Cliffs, NJ: Prentice-Hall.

FRIES, C.C. (1963). Linguistics and reading. New York: Holt, Rinehart & Winston.

GOODMAN, K.S. (1970). Reading: A psycholinguistic guessing game. In H. Singer & R.B. Ruddell (Eds.), Theoretical models and processes of reading (pp. 497-508). Newark, DE: International Reading Association.

HANSON, V.L. (1981). Processing of written and spoken words: Evidence for common coding. Memory & Cognition, 9(1), 93-100.

KAVANAGH, J.F., & MATTINGLY, I.G. (1972). Language by ear and by eye. Cambridge, MA: MIT Press.

KIRSNER, K., & SMITH, M.C. (1974). Modality effects in word identification. Memory & Cognition, 2(4), 637-640.

MATTINGLY, I.G. (1972). Reading, the linguistic process, and linguistic awareness. In J.F. Kavanagh & I.G. Mattingly (Eds.), Language by ear and by eye (pp. 133-148). Cambridge, MA: MIT Press.

MILLER, G.A. (1972). Reflections of the conference. In J.F. Kavanagh & I.G. Mattingly (Eds.), Language by ear and by eye (pp. 373-381). Cambridge, MA: MIT Press.

PERFETTI, C.A. (1985). Reading ability. New York: Oxford University Press.

PERFETTI, C.A. (1987). Language, speech, and print: Some asymmetries in the acquisition of literacy. In R. Horowitz & S.J. Samuels (Eds.), Comprehending oral and written language (pp. 355-369). New York: Academic Press.

ROYER, J.M. (1985). Reading from the perspective of a biological metaphor. Contemporary Educational Psychology, 10, 150-200.

RUBIN, A. (1980). A theoretical taxonomy of the differences between oral and written language. In R.J. Spiro, B.C. Bruce, & W.F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 239-252). Hillsdale, NJ: Erlbaum.

SEIDENBERG, M.S., TANENHAUS, M.K., LEIMAN, J.M., & BIENKOWSKI, M. (1982). Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing. Cognitive Psychology, 14, 489-537.

STICHT, T.G., BECK, L.J., HAUKE, R.N., KLEIMAN, G.M., & JAMES, J.H. (1974). Auding and reading: A developmental model. Alexandria, VA: Human Resources Research Organization.

STICHT, T.G., & JAMES, J.H. (1984). Listening and reading. In P.D. Pearson (Ed.), Handbook of reading research (pp. 293-318). New York: Longman.

SWINNEY, D.A. (1979). Lexical access during sentence comprehension: (Re)consideration of context effects. Journal of Verbal Learning and Verbal Behavior, 18, 645-659.

TOWNSEND, D.J., CARRITHERS, C., & BEVER, T.G. (1987). Listening and reading processes in college- and middle school-age readers. In R. Horowitz & S.J. Samuels (Eds.), Comprehending oral and written language (pp. 217-242). New York: Academic Press.

YOUNG, D.J. (1980). The structure of English clauses. New York: St. Martin's Press.

Footnotes The study reported here was submitted as part of the re-

quirements for a Master of Science degree at the University of Massachusetts, Amherst. I would like to gratefully ac- knowledge the contributions to the research of Charles Clif- ton, James M. Royer, and Arnold Well. Special thanks to James M. Royer for help with the revisions. I would also like to thank Kara Kritis for her assistance with data collec- tion.

¹An additional ANOVA on the data by subjects was conducted in which order was included along with all other variables. There was no significant main effect of order (p > .05), and there were no significant two-way interactions with order. There was one significant three-way interaction involving order; however, it was not interpretable.
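For readers who wish to reproduce this kind of check, what follows is a minimal sketch, not the study's original analysis code, of an ANOVA that adds presentation order to the design factors. It assumes a long-format table of per-cell mean reaction times with hypothetical column names (subject, rt, stim_type, match, order) and, for brevity, fits an ordinary factorial model rather than the by-subjects repeated-measures model used in the analyses reported above.

# Minimal sketch (not the original analysis code); the file name
# and all column names below are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Long-format data: one row per subject x condition cell mean,
# with columns subject, rt, stim_type, match, and order.
df = pd.read_csv("rt_cell_means.csv")

# Fully crossed factorial model; for brevity this ignores the
# by-subjects error term of the original repeated-measures ANOVA.
model = ols("rt ~ C(order) * C(stim_type) * C(match)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)

# Keep only the terms involving order, mirroring the footnote's
# check for an order main effect and its interactions.
print(table.loc[[term for term in table.index if "order" in term]])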

Received July 29, 1988
Revision received July 7, 1989

Accepted August 31, 1989


APPENDIX

Stimuli used in the study (Note: In each item, the word following the slash was used for the visual mismatch condition.)

Full good sentences

1. Sue wants to go for a walk/ride.
2. David kept his savings/dollar in an old sock.
3. I will come to see/get you on Thursday.
4. The project was wholly/mostly ineffectual.
5. He would not think of letting us/me help.
6. The bottle/carafe fell off the table.
7. She worked in the garden/fields yesterday.
8. The/His data are inconclusive.
9. By late afternoon, William was exhausted/deficient.
10. The church stands in the square/common.
11. The young man was elected class president/treasurer.
12. Ellen is the one who will/must succeed.
13. He worked hard/alot because he needed the grade.
14. The policeman arrested the burglar/robber.
15. He is living/acting like a millionaire.
16. Sailing a boat/yawl is fun.
17. Most members are in favor of the motion/action.
18. I move/hold that the nominations be closed.
19. They will consent/concede to any arrangements.
20. He taught/helped me to play the piano.
21. The room was full of sunlight/daylight.
22. The school offers three separate/distinct curricula.
23. The letter was signed by the author/writer.
24. Cathy wanted/needed a singing career.
25. He objected to the suggestion/statements.
26. The students are organizing social/sports activities.
27. You seem/look uninterested in the problem.
28. Nobody realized that the train was late/slow.
29. We suggest/request that you take warm clothes with you.
30. The old house was empty/quiet.
31. I can't/won't find my car keys.
32. He blamed the management for the dispute/quarrel.

Syntactic nonsense strings

1. It had looked the hours of the street/routes.
2. He was awakened reading/looking early for miles.
3. The book had reached steadily/normally.
4. Mary must be for the novel/books.
5. The subway/trains reduces away fair.
6. I ran/jog he must not have now.
7. He has been raining/pouring a library.
8. A crowd left under/below two lakes.
9. The play can more than this picture/etching abroad.
10. Most for the heat were bright/golden.
11. She won/got that he snapped pay.
12. He drove the indifferent taxes late/long.
13. She amused a hundred novels all her week/days.
14. Summer books went home and are mending/helping late.
15. They were at his/her Sarah last.
16. I walked in the last February/December.
17. His wide shop/mill heard his agreement.
18. The match/games arrived the end easily.
19. He and I perfectly began/arose the money.
20. I have proved for this story/tales since June.
21. The apartment plan/idea is in good suit.
22. Genius managers had a workable judge/chief.
23. They closed/sealed the dollars this imitation.
24. We signed each other by beginning/producing month.
25. Now and then she saw a flu hope/wish.
26. Since they were loud, it showed twigs/stick.
27. You are mending in a second/double book.
28. He was she who came the noise/sound.
29. We said as unfinished/incomplete as Bill.
30. It has been her this door/gate.
31. He was by his tire/tube and could tonight.
32. They can spend/waste the car.

Random word strings

1. owe riot month course/routes
2. window work/jobs today playing
3. bought reached/grabbed cotton listening
4. becoming book matter/affairs
5. proportions entering repaired/adjusted
6. student piano cutting/slicing
7. place girls open/ajar tricky
8. saw looks/seems line telegrams
9. grass/lawns demonstration setting o'clock
10. this lobby serious/sincere next
11. continuous/constantly near post excellent
12. sending very happened/occurred heard
13. seeks/looks three saw dress ago
14. him flowers/blossom ten step
15. dollars/payment never office they
16. room/area married voices must
17. private tomorrow crisis apples/fruits
18. shoes/boots children this
19. need caught/seized considered these
20. soon forward/leading economy them
21. normal done/over calm whole
22. wake remained/survived employment left
23. five near cheating getting/gaining
24. bankrupt one caught problem/dilemma
25. reached lake/pond fail sun
26. keeps/saves sale station like
27. transportation kinds/types clouds tomorrow
28. voting return/arrive gardener accident
29. those during place/point three
30. beautiful/wonderful population her
31. asked/urged public part
32. questions/proposals days going radio

Nonword strings

1. mar/fot sorth pelink
2. tinner thepe/varps esterway
3. trote/lurts nostfard tuss
4. cotridering/tropormian nass empering
5. nost/dilt touse ponet
6. storp mape/foon hassist
7. saff/pake fopt zact
8. turls othen povies/vearns
9. remected tropidles/siffement tobs
10. feld/relt menerap liffapent
11. tust dake/jeat romether
12. sitoved/deturns reasing poom
13. swoe blamnetz bep/cak
14. renake/emeged pilk romithy
15. goint fess/brop fote
16. itherant/levetals feasy nime
17. torst povempter/leparates elethion
18. romission smithed tiport/nublic
19. neturned/cetordly dalimornia togust
20. trings sland/masps sork
21. rame teru mest/rork
22. dack asarant/rastard comiterid
23. anways/borked vived heary
24. dinnow terpect/beathed droken
25. awfost/kenner linished nork
26. offite sint/poat nocuments
27. peam neag/fant neckord
28. sustec/rimart plame filse
29. noy sardly rotain/mesent
30. tald srope/lounts stoff
31. bonis med/tor dubosity
32. veth/lext themedrin stithoon

Auditory foils used in the study

Full good sentences

1. He left home an hour ago.
2. They stopped when they reached the lake.
3. I'm going to put the books away.
4. Peter's taking them on a tour.
5. They failed to report the crime.
6. She went to the grocery store for milk.
7. He looked as if he were confident.
8. We can save fuel by using less electricity.
9. We expect to go there next week.
10. The ship broke loose from its moorings.
11. Amy is the one in the raincoat.
12. Jim gave every game his all.
13. The jurors disagreed among themselves.
14. Mr. Jones has ignored the evidence.
15. Mark persuaded him to buy some shares.
16. There will be some tickets available.

Syntactic nonsense strings

1. You are liked walking of the students.
2. The bridge turned the afternoon.
3. I was old haunted to the party.
4. None in the grass borrowed the audience.
5. She wandered the remote students.
6. Tom was you to give everyone a prospect.
7. Chris had the attention out.
8. The right tape novel was tired.
9. He is invited of them.
10. They noticed indeed empty picnics.
11. The mystery recorder must want at the class.
12. I read a house on the game.
13. The present by the newspaper passed.
14. Everybody caught that you would find.
15. That examination was whole only.
16. If you seemed me, we were excited quickly.

Random word strings

1. trip fruit classroom glad
2. know eaten results serves
3. door job course new
4. harder found Susan would
5. going Jane members song
6. telephone key lawn last
7. education occurs fashion
8. plan night objected
9. strongly restaurant play
10. over students long were
11. skirts unlocks again studied
12. rang sing late here
13. passed club cancelled
14. agency wine tennis tending
15. secretary pie gardening rest
16. spoiled liked rains special

Nonword strings

1. broup sorked pell
2. renands nost donether
3. troblet durely nonitical
4. san smains lang
5. stayet rome peneral
6. bront naid mell
7. goith houte mearched
8. nent nolimays neabons
9. mot wath flace
10. teft har rall
11. domnor ippennet rafients
12. roon saners bonorrow
13. prere proth whike
14. prener rud wibe
15. soat daces dape
16. clane flet dountains