Friday, February 12, 2010

Speech and Reading

(R. Conrad)

For a very large number of children, the first word they learn to read is their own name — and one can think of no more meaningful word. Some time may elapse before a second word is confidently added to the reading repertoire. But this first word marks the attainment of a concept of immense intellectual importance: the child accepts the idea that an untidy nonrepresentational pattern of lines in some way symbolizes a name. We say to children, "What does this word say?" In so doing, we intuitively link reading with speaking, dramatically distinguishing between the identification of printed pictures of objects and printed names of objects. Never, in Western culture, would we point to the picture of a dog and ask, "What does this picture say?"

If we examine this intuition in more detail, there is an implication that the printed word says something to the child; the child listens to what is said and then repeats it out loud. But it is not hard to accept that the child, looking at the word, says something to himself, listens to himself, and then repeats what he has heard. In this analysis we are at least within the realm of theoretical possibility. But this does not make it correct, and in this chapter we shall examine evidence concerning the role of certain processes -- and their interrelationships -- involved in reading.

In no sense is this chapter a broad survey of reading processes. Quite the contrary. We shall be concerned only with the transduction problem of a visual input transformed into a speech-motor output when we read aloud. But the main emphasis will be to consider the speculation that has continuously intrigued students, that the identical transduction necessarily occurs when we read silently to ourselves. We shall consider codes and memories -- those systems required for viably maintaining the substance of perceptions during the addition of subsequent perceptions so that higher-order manipulations are possible and textual meaning is achieved. (...)


Speech and Reading in Adults

Does all reading involve speech, and does reading aloud merely add sound? (...) We can then pose as a primary question: do the processes involved in comprehension occur before the sound is made? In this case, speaking aloud would merely be fulfilling the behavioral requirement to read aloud. Or do we make the speech sounds, listen to them, and comprehend what it is we have heard? What we are asking here is whether comprehension of printed material—reading—is possible directly from the visual input. Or do we have to say words, whether covertly or overtly, in order to understand their meaning? Certainly the latter possibility has a conceptual elegance, since it would fit reading and listening to speech into a single behavioral framework, the only difference being the source of the speech—oneself or another person. But since behavioral mechanisms rarely fall into simple patterns, a discussion of the evidence is necessary; and the reader might just as well be warned at this point that it will not be conclusive.

(...) At one extreme, the most obvious, we can think of reading with almost full articulation; that is, all articulatory processes are involved except those required for making sounds. Although nothing is heard, lips visibly move to form speech sounds, and movements of the speech organs can easily be felt with the fingers. In one sense this is unquestionably silent reading. The continuum then moves toward less directly observable behavior. Lips may be closed and apparently unmoving; movements in the throat may be too attenuated to be felt. Nevertheless, articulation may still be detected by electromyographic (EMG) and related techniques. Here, the fact that reading is accompanied by electrical activity in muscles required in the production of speech sounds, though no movement is visible to the eye, is taken as evidence for the occurrence of silent articulation of speech during reading. This line of investigation began with Curtis [1900], and reviews are to be found, for example, in Edfeldt [1960] and McGuigan [1970].

(...) Locke and Fehr [1970] required subjects to read silently (for subsequent recall) two classes of word -- those that did, or did not, include labial phonemes. Using surface electrodes, they also recorded activity of the labial musculature. Though the results were not entirely unambiguous, they seemed to indicate that "labial" words, silently read, do show more movements of the labial muscles than do "nonlabial" words. At any rate the procedure is novel and highly promising.

But even the complete absence of detectable speech-motor activity does not preclude the occurrence of silent speech in the form of speech imagery. We are not saying here that this ever happens, only that it can happen. If auditory imagery is a genuine biological phenomenon, then the sounds of speech must be included in its definition, and silent reading can be accompanied by a succession of auditory speech images that might have the same psychological function in the reading process as does silent articulation. Granted auditory imagery in this context, then speech-motor imagery, even though unresponsive to EMG recording, must be included in this definition as well. If in imagination we can move a leg, then there is no reason why in imagination we should not be able to move a tongue and so image the feel of silent speech without articulation and without imaged sound. All these phenomena would be silent speech. The fact that no visible lip movement is observed would not preclude the presence of silent speech in reading. EMG silence equally does not preclude silent speech. As always, it is easier to prove the presence of a phenomenon than its absence.

(...)

[McGuigan 1970] uses EMG recording. As we have said, this category has a history going back to the beginning of this century; and almost without exception, results show more EMG response from speech muscles during silent reading than during rest. One of the most comprehensive and carefully controlled studies is that of Edfeldt [1960], who reported articulation during silent reading with almost all of 84 subjects. (Perversely, Edfeldt does not report whether any of these subjects gave negative introspections.) There are indeed so many good studies reporting the presence of articulation during silent reading that we might be justified in concluding that the case is proved. But of course, the case that appears to be proved is that silent reading is accompanied by articulation. What is far from proved is that articulation is necessarily involved in silent reading. This kind of imperative seems most unlikely. No one has convincingly shown comprehension to be seriously impaired directly as a result of preventing articulation in some way. This latter is not easily accomplished by mechanical means [Novikova 1966]. Indeed, the contrary may be true. Hardyck, Petrinovich et al. [1967], using a conditioning procedure, reported the complete inhibition of articulation during silent reading with unimpaired comprehension. A later report qualified this [Hardyck and Petrinovich 1970]. In any case, absence of articulation does not preclude the use of speech imagery, so that comprehension when there is no EMG response could still be based, as a requirement, on silent speech.

Reading without Speech

Since this paper is directly concerned with the role of speech in reading and in learning to read, it is certainly pertinent to consider the nature and the effectiveness of reading skill in the deaf, who have had no experience of aural speech. There are, of course, other pathologically handicapped populations who do not speak, such as certain aphasics. But unless deafness is present, opportunity to hear speech has been present. We have seen how tenaciously the normal hearing person clings to his phonological code when he reads material that he is to remember; and we have seen the way young children appear spontaneously to come to use a phonological name-code when they memorize a series of pictures. When, through experimental manipulation, the phonological code becomes difficult to use, STM is gravely impaired. We have therefore suggested that there is a close association between silent reading and silently speaking because comprehension requires memory. The STM involved seems best supported by phonological coding.

Only among the deaf can we find people with no speech experience -- or at any rate with relatively little -- who for our present purposes therefore provide an invaluable control. We can simply ask the question: can the deaf learn to read? Were the answer an emphatic negative, much of our enquiry would come to a comfortable end, since we would be very close to proving that reading is possible only when phonological coding is available. But of course the truth is not as simple as that, and most deaf children do, to some extent, learn to read. That immediately tells us that for hearing people, phonological coding is a preference, not a necessity. Knowing the versatility of man, this is what we would have expected. But since it has turned out to be exceedingly difficult to get hearing subjects to abstain from their predilection for phonological coding, we might hope that studies of the deaf would give us clearer insight into the rules governing the development and use of reading codes in general. By this we refer to the kinds of transductions that occur between seeing a printed item and storing it in a form in which it is available for future use, so that we can say: an item is remembered; a phrase is comprehended.

The paradox of a "normal deaf" population is too great for the term to be meaningful. In the first place, anyone whose hearing does not fall within defined "normal" limits is deaf. In the studies to be discussed, we are concerned with a category of profound deafness. Over the range of useful speech frequencies between 250 and 4000 Hz, our subjects, mostly children, had hearing losses of not less than about 75 dB in their better ear. These children, even with hearing-aid amplification, would have very little awareness of different speech sounds. Unless they can see lips moving, they would not know that someone in a room was speaking. They are profoundly deaf. Second, their medical records would show that they were either born deaf -- sometimes of deaf parents -- or became profoundly deaf within the first year or so of life; they have never used normal speech. These are features common to any person who can loosely be described as "congenitally and profoundly deaf." All of the deaf subjects of the studies to be discussed are in this category. But by the time a deaf child is likely to take part in reading experiments, variations in home background, intelligence, other pathology, and particularly in the educational theories that have guided his school training will become important sources of experimental "noise." In using these subjects in experiments we have to confine ourselves to relatively simple questions and be content only with large experimental effects.

Probably in most schools for the deaf some attempt is made to teach deaf children to speak. There are some schools where, come what may, no other mode of communication is used between teachers and pupils. Other schools regard communication, by no matter what means, as essential. Even less, then, than with normal children can we talk of deaf children "on average." But no matter how speech-oriented a school might be, children who are profoundly deaf from an early age exceedingly rarely attain a quality of speech that can be readily understood by strangers. It is exceedingly rare that a hearing person can address such a deaf person normally, even while making sure that his lips can be seen, and get normal comprehension of his speech at normal speeds. It is exceedingly rare to see two such deaf children speaking to each other using the language that hearing persons would use. All of these things do occur, but they represent levels of speech and language skill that certainly fewer than one in a hundred profoundly deaf children achieve.

If, then, speech is a skill acquired by the deaf with immense difficulty, when acquired at all, then from all that has gone before we would expect reading also to present grave difficulty to the deaf. Apart from anything else, this would be due to severe vocabulary deficiency and to the handicap of never having acquired the easy use of the innumerable rules of grammar that the rest of us pick up through hearing before we usually begin to learn to read. But there is something else also: the fact of the only partial availability, or even the complete absence, of phonological coding as an aid to comprehension. And indeed there is very good evidence that profoundly deaf children have great difficulty in learning to read. Myklebust [1960] reports on a number of studies of reading ability in the deaf.

For example, on the Columbia Vocabulary Test, at 9 years the mean score for normal children is about 20; for deaf children it is just over 3. At 11 years the respective scores are 33 and 6; at 13 years, 43 and 10; at 15 years, 63 and 11. Not only is the difference huge, but it increases with age. The 15-year-old deaf child has a much poorer vocabulary than a 9-year-old normal child. In every aspect of the grammatical handling of words the deaf child is many years behind the hearing child on attainment tests. It is not therefore surprising that similar discrepancies are reported when memory span for verbal material is examined. Blair [1957], Furth [1964], Olsson and Furth [1966], and others have shown that deaf children are grossly inferior to hearing children in digit span. But when the material to be remembered consists of shapes that do not readily have names, deaf and hearing children have the same span. Furthermore, the Olsson and Furth data show that whereas for hearing subjects the difference in span between digits and shapes is substantial, for the deaf it is quite small. It is not impossible, then, that for the deaf, digits are just another set of shapes. There is no internal evidence in the Olsson and Furth study to indicate whether the deaf group did in fact code digits by shape. Olsson and Furth suggest that they could have finger-spelled them; there is uncertainty. The point is that there is some suggestive evidence here that phonological coding has unique advantages for STM over certain other codes, and among these we must include visual codes. This is unquestionably a loose argument, since we do not know for certain what kind of code either group used with any of the test material in this experiment. And, as Furth [1966] points out elsewhere, any direct comparison of cognitive abilities between deaf and hearing must be taken with great caution because of innumerable differences that cannot be taken into account when matching groups.

Since many studies show deaf and hearing subjects to have similar spans for material not easily verbalized, but inferiority of the deaf when nameable items are used, the need is emphasized for a clearer understanding of the STM code used by the deaf when presented with words and other items having familiar names. We have continually pointed out the important role that phonological coding plays in reading; but we are equally clear that the deaf do read. They could be using a phonological code very ineffectively, or they could be using some other code or codes that are in themselves less efficient for the purpose than is a phonological code.

(...)

Now we have tended to discuss the codes used by the deaf when they read printed verbal material as if we believed that when an articulatory code is not used, a visual code is used. This is not our belief. It so happens that a good deal of the data we have used in this discussion can be taken to support this view. But this is largely because tasks have been designed that would favor visual coding were it available to the deaf. We are quite sure that other codes are also available to most of the deaf, based at the least on finger-spelling, signing, and lip-reading. On present knowledge it is much harder to construct reading test material that could greatly benefit, for example, a finger-spelling code. Indeed it seems plausible that in reading, since the deaf do not have available what seem to be the most efficient codes, they could very well make extensive use of multiple coding -- no one code being particularly efficient -- to a much greater extent than do those of us with normal sensory experience.

(...)

By analogy, words have been "designed" to provide high auditory discriminability. When we read we continue to take advantage of this fact. The visual appearance of words has limited relevance. The deaf try to read a language perfectly adapted for the use of hearing people. They are faced with many commonly used words of the same length and general configuration; the majority of words begin with a minority of different consonants, usually followed by a vowel, all of which sound different but which in print, except for i, look quite similar. Just as we try to teach the deaf a spoken language that permits them to use only minimally those cognitive abilities that are intact, so too we try to teach them a written language, making the tacit assumption that when a deaf child sees a printed word he will cognitively “do” with it just what a hearing child will.

In summary, what this section has tried to do is to show that speech or speech-based codes are not necessary for reading. We have agreed that deaf children who do not use speech-based codes are poor readers. But we have argued that the reasons for this are far from simple. In particular, it may be that nonspeech codes, if developed by training to the extent that speech codes are, would not in themselves be inefficient. But when used to transduce printed words that derive from spoken forms, they are bound to be at a disadvantage.
