## sábado, 20 de fevereiro de 2010

### Misreading: A Search for Causes

(Donald Shankweiler and Isabelle Y. Liberman)

Because speech is universal and reading is not, we may suppose that the latter is more difficult and less natural. Indeed, we know that a large part of the early education of the school child must be devoted to instruction in reading and that the instruction often fails, even in the most favorable circumstances. Judging from the long history of debate concerning the proper methods of teaching children to read [Mathews 1966], the problem has always been with us. Nor do we appear to have come closer to a solution: we are still a long way from understanding how children learn to read and what has gone wrong when they fail.

In the extensive literature about reading since the 1890s there have been sporadic surges of interest in the examination of oral reading errors as a means of studying the process of reading acquisition. The history of this topic has been well summarized by Weber [1968] (...).

(...)

We are, in addition, curious to know whether the difficulties in reading are to be found at a visual stage or at a subsequent linguistic stage of the process. This requires us to consider the special case of reversal errors, in which optical considerations are, on the face of it, primary. Our inquiry into linguistic aspects of reading errors then leads us to ask which constituents of words tend to be misread, and whether the same ones tend to be misheard. We examine errors with regard to the position of the constituent segments within the word and the linguistic status of the segments in an attempt to produce a coherent account of the possible causes of the error pattern in reading.

(...)

The Word as the Locus of Difficulty in Beginning Reading

One often encounters the claim that there are many children who can read individual words yet do not seem able to comprehend connected text [Anderson and Dearborn 1952; Goodman 1968]. The existence of such children is taken to support the view that methods of instruction that stress spelling-to-sound correspondences and other aspects of decoding are insufficient and may even produce mechanical readers who are expert at decoding but fail to comprehend sentences. (...)

(...)

The Contribution of Visual Factors to the Error Pattern in Beginning Reading

We have seen that a number of converging results support the belief that the primary locus of difficulty in beginning reading is the word. But within the word, what is the nature of the difficulty? To what extent are the problems visual and to what extent linguistic?

In considering this question, we asked first whether the problem is in the perception of individual letters. There is considerable agreement that after the first grade, even those children who have made little further progress in learning to read do not have significant difficulty in visual identification of individual letters [Vernon 1960; Shankweiler 1964; Doehring 1968].

REVERSALS AND OPTICAL SHAPE PERCEPTION

The occurrence in the alphabet of reversible letters may present special problems, however. The tendency for young children to confuse letters of similar shape that differ in orientation (such as "b, d, p, q") is well known. Gibson and her colleagues [1962, 1965] have isolated a number of component abilities in letter identification and studied their developmental course by the use of letter-like forms that incorporate basic features of the alphabet. They find that children do not readily distinguish pairs of shapes that are 180-degree transformations (i.e., reversals) of each other at age 5 or 6, but by age 7 or 8, orientation has become a distinctive property of the optical character. It is of interest, therefore, to investigate how much reversible letters contribute to the error pattern of 8-year-old children who are having reading difficulties.

Reversal of the direction of letter sequences (e.g., reading "from" for "form") is another phenomenon that is usually considered to be intrinsically related to orientation reversal. Both types of reversals are often thought to be indicative of a disturbance in the visual directional scan of print in children with reading disability (see Benton [1962] for a comprehensive review of the relevant research). One early investigator considered reversal phenomena to be so central to the problems in reading that he used the term "strephosymbolia" to designate specific reading disability [Orton 1925]. We should ask, then, whether reversals of letter orientation and sequence loom large as obstacles to learning to read. Do they covary in their occurrence, and what is the relative significance of the optical and linguistic components of the problem?

(...)

RELATIONSHIPS BETWEEN REVERSALS AND OTHER TYPES OF ERRORS

It was found that, even among these poor readers, reversals accounted for only a small proportion of the total errors, though the list was constructed to provide maximum opportunity for reversals to occur. Separating the two types, we found that sequence reversals accounted for 15 percent of the total errors made, and orientation errors only 10 percent, whereas other consonant errors accounted for 32 percent of the total and vowel errors 43 percent. Moreover, individual differences in reversal tendency were large (rates of sequence reversal ranged from 4 to 19 percent; rates for orientation reversal ranged from 3 to 31 percent). Viewed in terms of opportunities for error, orientation errors occurred less frequently than other consonant errors. Test-retest comparisons showed that whereas other reading errors were rather stable, reversals -- and particularly orientation reversals -- were unstable.

(...)

ORIENTATION REVERSALS AND REVERSALS OF SEQUENCES: NO COMMON CAUSE?

Having considered the two types of reversals separately, we find no support for assuming that they have a common cause in children with reading problems. Among the poor third-grade readers, sequence reversal and orientation reversal were found to be wholly uncorrelated with each other, whereas vowel and consonant errors correlated 0.73. (...)

ORIENTATION ERRORS: VISUAL OR PHONETIC?

In further pursuing the orientation errors, we examined the nature of the substitutions among the reversible letters "b, d, p, q." Tabulation of these showed that the possibility of generating another letter by a simple 180-degree transformation is indeed a relevant factor in producing the confusions among these letters. This is, of course, in agreement with the conclusion reached by Gibson and her colleagues [1962].

At the same time, other observations [I. Y. Liberman, Shankweiler et al. 1971] indicate that letter reversals may be a symptom and not a cause of reading difficulty. (...)

(...)

Linguistic Aspects of the Error Pattern in Reading and Speech

"In reading research, the deep interest in words as visual displays stands in contrast to the relative neglect of written words as linguistic units represented graphically." [Weber 1968, p.113]

The findings we have discussed in the preceding section suggest that the chief problems the young child encounters in reading words are beyond the stage of visual identification of letters. It therefore seemed profitable to study the error pattern from a linguistic point of view.

(...)

(...) the substantially greater error rate for final consonants than for initial ones is certainly contrary to what would be expected by an analysis of the reading process in terms of sequential probabilities. If the child at the early stages of learning to read were able to utilize the constraints that are built into the language, he would make fewer errors at the end than at the beginning, not more. In fact, what we often see is that the child breaks down after he has gotten the first letter correct and can go no further. We will suggest later why this may happen.

In order to understand the error pattern in reading, it should be instructive to compare it with the pattern of errors generated when isolated monosyllables are presented by ear for oral repetition. (...)

The error pattern for oral repetition shows some striking differences from that in reading. With auditory presentation, errors in oral repetition averaged 7 percent when tabulated by phoneme, as compared with 24 percent in reading, and were about equally distributed between initial and final position, rather than being markedly different. Moreover, contrary to what occurred when the list was read, fewer errors occurred on vowels than on consonants.

(...)

It is clear from the figure that the perception of speech by reading has problems which are separate and distinct from the problems of perceiving speech by ear. We cannot predict the error rate for a given phoneme in reading from its error rate in listening. If a phoneme were exactly as difficult to read as to hear, the point would fall on the diagonal line that has been dotted in. Vertical distance from the diagonal to any point below it is a measure of the specific difficulty of reading the phoneme as distinguished from listening to it. Although the reliability of the individual points in the array has not been assessed, the trends are unmistakable. The points are very widely scattered for the consonants. As for the vowels, they are seldom misheard but often misread (suggesting, incidentally, that the high error rate on vowels in reading cannot be an artifact of transcription difficulties).
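The vertical-distance measure described here amounts to a simple per-phoneme difference between the two error rates. A minimal sketch of the comparison, using invented illustrative rates rather than the study's figures:

```python
def reading_specific_difficulty(reading_err, listening_err):
    """For each phoneme, the excess of reading errors over listening errors.

    A phoneme exactly as hard to read as to hear scores 0 (it falls on the
    diagonal of the scatter plot); positive values measure the difficulty
    specific to reading, as distinguished from listening.
    """
    return {p: reading_err[p] - listening_err[p] for p in reading_err}

# Hypothetical percent-error rates per phoneme (not the study's data).
reading = {"a": 40.0, "b": 12.0, "t": 8.0}
listening = {"a": 4.0, "b": 10.0, "t": 9.0}

d = reading_specific_difficulty(reading, listening)
print(d["a"])  # 36.0 -- a vowel seldom misheard but often misread
```

Under this convention, the widely scattered consonant points of the figure correspond to differences of varying sign and size, while the vowels cluster at large positive values.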

(...)

The following analysis illustrates how vowel errors may be analyzed to discover whether, in fact, the error pattern is nonrandom and, if it is, to discover what the major substitutions are. Figure 2 shows a confusion matrix for vowels based on the responses of 11 children at the end of the third grade (Group C2 in Table 6) who are somewhat retarded in reading. Each row in the matrix refers to a vowel phoneme represented in the words (of List 2) and each column contains entries of the transcriptions of the responses given in oral reading. Thus the rows give the frequency distribution for each vowel percentaged against the number of occurrences, which is approximately 25 per vowel per subject.
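The row-percentaged tabulation described above can be sketched as follows. This is a hedged illustration with made-up phoneme labels and response counts, not the actual data of Figure 2:

```python
from collections import Counter, defaultdict

def confusion_matrix(pairs):
    """Build a row-normalized confusion matrix from (intended, response) pairs.

    Rows are the vowel phonemes presented in the word list; columns are the
    vowels actually produced in oral reading; each row is expressed as a
    percentage of that row's total occurrences, mirroring the tabulation
    described in the text.
    """
    counts = defaultdict(Counter)
    for intended, response in pairs:
        counts[intended][response] += 1
    matrix = {}
    for intended, row in counts.items():
        total = sum(row.values())
        matrix[intended] = {resp: 100.0 * n / total for resp, n in row.items()}
    return matrix

# Hypothetical responses: "i" read correctly 3 times, misread as "I" once.
m = confusion_matrix([("i", "i"), ("i", "i"), ("i", "i"), ("i", "I"),
                      ("e", "e"), ("e", "i")])
print(m["i"]["i"])  # 75.0
```

With the matrix in this form, a nonrandom error pattern shows up as mass concentrated in particular off-diagonal cells rather than spread evenly across each row.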

It may be seen that the errors are not distributed randomly. (...) Hence we may conclude that the error rate on vowels in our list is related to the number of orthographic representations of each vowel.

The data thus support the idea that differences in error rate among vowels reflect differences in their orthographic complexity. Moreover, as we have said, the fact that vowels, in general, map onto sound more complexly than consonants is one reason they tend to be misread more frequently than consonants.

It may be, however, that these orthographic differences among segments are themselves partly rooted in speech. Many data from speech research indicate that vowels are often processed differently from consonants when perceived by ear. A number of experiments have shown that the tendency to categorical perception is greater in the encoded stop consonants than in the unencoded vowels [A. M. Liberman, Cooper et al. 1967; A. M. Liberman 1970]. It may be argued that as a consequence of the continuous nature of their perception, vowels tend to be somewhat indefinite as phonologic entities, as illustrated by the major part they play in variation among dialects and the persistence of allophones within the same geographical locality. By the same reasoning, it could be that the continuous nature of vowel perception is one cause of complex orthography, suggesting that one reason that multiple representations are tolerated may lie very close to speech.

We should also consider the possibility that the error pattern of the vowels reflects not just the complex relation between letter and sound but also confusions that arise as the reader recodes phonetically. There is now a great deal of evidence [Conrad 1964, this volume] that normal readers do, in fact, recode the letters into phonetic units for storage and use in short-term memory. If so, we should expect that vowel errors would represent displacements from the correct vowels to those that are phonetically adjacent and similar, the more so because, as we have just noted, vowel perception is more nearly continuous than categorical. That such displacements did in general occur is indicated in Figure 2 by the fact that the errors tend to lie near the diagonal. More data and, in particular, a more complete selection of items will be required to determine the contribution to vowel errors of orthographic complexity and the confusions of phonetic recoding.

Summary and Conclusions

In an attempt to understand the problems encountered by the beginning reader and children who fail to learn, we have investigated the child's misreadings and how they relate to speech. The first question we asked was whether the major barrier to achieving fluency in reading is at the level of connected text or in dealing with individual words. Having concluded from our own findings and the research of others that the word and its components are of primary importance, we then looked more closely at the error patterns in reading words.

Since reading is the perception of language by eye, it seemed important to ask whether the principal difficulties within the word are to be found at a visual stage of the process or at a subsequent linguistic stage. We considered the special case of reversals of letter sequence and orientation in which the properties of visual confusability are, on the face of it, primary. We found that although optical reversibility contributes to the error rate, for the children we have studied it is of secondary importance to linguistic factors. Our investigation of the reversal tendency then led us to consider whether individual differences in reading ability might reflect differences in the degree and kind of functional asymmetries of the cerebral hemispheres. Although the evidence is at this time not clearly supportive of a relation between cerebral ambilaterality and reading disability, it was suggested that new techniques offer an opportunity to explore this relationship more fully in the future.

When we turned to the linguistic aspects of the error pattern in words, we found, as others have, that medial and final segments in the word are more often misread than initial ones and vowels more often than consonants. We then considered why the error pattern in mishearing differed from misreading in both these respects. In regard to segment position, we concluded that children in the early stages of learning to read tend to get the initial segment correct and fail on subsequent ones because they do not have the conscious awareness of phonemic segmentation needed especially in reading but not in speaking and listening.

As for vowels in speech, we suggested, first of all, that they may tend to be heard correctly because they are carried by the strongest portion of the acoustic signal. In reading, the situation is different: the alphabetic representations of the vowels possess no such special distinctiveness. Moreover, their embedded placement within the syllable and their orthographic complexity combine to create difficulties in reading.
Evidence for the importance of orthographic complexity was seen in our data by the fact that the differences among vowels in error rate in reading were predictable from the number of orthographic representations of each vowel. However, we also considered the possibility that phonetic confusions may account for a significant portion of vowel errors, and we suggested how this hypothesis might be tested.

We believe that the comparative study of reading and speech is of great importance for understanding how the problems of perceiving language by eye differ from the problems of perceiving it by ear, and for discovering why learning to read, unlike speaking and listening, is a difficult accomplishment.

## sexta-feira, 12 de fevereiro de 2010

For a very large number of children, the first word they learn to read is their own name — and one can think of no more meaningful word. Some time may elapse before a second word is confidently added to the reading repertoire. But this first word marks the attainment of a concept of immense intellectual importance: the child accepts the idea that an untidy nonrepresentational pattern of lines in some way symbolizes a name. We say to children, "What does this word say?" In so doing, we intuitively link reading with speaking, dramatically distinguishing between the identification of printed pictures of objects and printed names of objects. Never, in Western culture, would we point to the picture of a dog and ask, "What does this picture say?"

If we examine this intuition in more detail, there is an implication that the printed word says something to the child; the child listens to what is said and then repeats it out loud. But it is not hard to accept that the child, looking at the word, says something to himself, listens to himself, and then repeats what he has heard. In this analysis we are at least within the realm of theoretical possibility. But this does not make it correct, and in this chapter we shall examine evidence concerning the role of certain processes -- and their interrelationships -- involved in reading.

In no sense is this chapter a broad survey of reading processes. Quite the contrary. We shall be concerned only with the transduction problem of a visual input transformed into a speech-motor output when we read aloud. But the main emphasis will be to consider the speculation that has continuously intrigued students, that the identical transduction necessarily occurs when we read silently to ourselves. We shall consider codes and memories -- those systems required for viably maintaining the substance of perceptions during the addition of subsequent perceptions so that higher-order manipulations are possible and textual meaning is achieved. (...)

We may then pose, as a primary question: do the processes involved in comprehension occur before the sound is made? In this case, speaking aloud would merely be fulfilling the behavioral requirement to read aloud. Or do we make the speech sounds, listen to them, and comprehend what it is we have heard? What we are asking here is whether comprehension of printed material -- reading -- is possible directly from the visual input. Or do we have to say words, whether covertly or overtly, in order to understand their meaning? Certainly the latter possibility has a conceptual elegance, since it would fit reading and listening to speech into a single behavioral framework, the only difference being the source of the speech -- oneself or another person. But since behavioral mechanisms rarely fall into simple patterns, a discussion of the evidence is necessary; and the reader might just as well be warned at this point that it will not be conclusive.

(...) At one extreme, the most obvious, we can think of reading with almost full articulation; that is, all articulatory processes are involved except those required for making sounds. Although nothing is heard, lips visibly move to form speech sounds and movements of the speech organs can easily be felt with the fingers. In one sense this is unquestionably silent reading. The continuum then moves toward less directly observable behavior. Lips may be closed and apparently unmoving, movements in
the throat may be too attenuated to be felt. Nevertheless, articulation may still be detected by electromyographic (EMG) and related techniques. Here, the fact that reading is accompanied by electrical activity in muscles required in the production of speech sounds, though no movement is visible to the eye, is taken as evidence for the occurrence of silent articulation of speech during reading. This line of investigation
began with Curtis [1900] and reviews are to be found, for example, in Edfeldt [1960] and McGuigan [1970].

(...) Locke and Fehr [1970] required subjects to read silently (for subsequent recall) two classes of word -- those that did, or did not, include labial phonemes. Using surface electrodes, they also recorded activity of the labial musculature. Though the results were not entirely unambiguous, they seemed to indicate that "labial" words, silently read, do show more movements of the labial muscles than do "nonlabial" words. At any rate the procedure is novel and highly promising.

But even the complete absence of detectable speech-motor activity does not preclude the occurrence of silent speech in the form of speech imagery. We are not saying here that this ever happens, only that it can happen. If auditory imagery is a genuine biological phenomenon, then the sounds of speech must be included in its definition, and silent reading can be accompanied by a succession of auditory speech images that might have the same psychological function in the reading process as does silent articulation. Granted auditory imagery in this context, then speech-motor imagery, even though unresponsive to EMG recording, must be included in this definition as well. If in imagination we can move a leg, then there is no reason why in imagination we should not be able to move a tongue and so image the feel of silent speech without articulation and without imaged sound. All these phenomena would be silent speech. The fact that no visible lip movement is observed would not preclude the presence of silent speech in reading. EMG silence equally does not preclude silent speech. As always, it is easier to prove the presence of a phenomenon than its absence.

(...)

[McGuigan 1970] uses EMG recording. As we have said, this category has a history going back to the beginning of this century; and almost without exception, results show more EMG response from speech muscles during silent reading than during rest. One of the most comprehensive and carefully controlled studies is that of Edfeldt [1960], who reported articulation during silent reading with almost all of 84 subjects. (Perversely, Edfeldt does not report whether any of these subjects gave negative introspections). There are indeed so many good studies reporting the presence of articulation during silent reading, that we might be justified in concluding that the case is proved. But of course, the case that appears to be proved is that silent reading is accompanied by articulation. What is far from proved is that articulation is necessarily involved in silent reading. This kind of imperative seems most unlikely. No one has convincingly shown comprehension to be seriously impaired directly as a result of preventing articulation in some way. This latter is not easily accomplished by mechanical means [Novikova 1966]. Indeed, the contrary may be true.
Hardyck, Petrinovich et al. [1967], using a conditioning procedure, reported the complete inhibition of articulation during silent reading with unimpaired comprehension. A later report qualified this [Hardyck and Petrinovich 1970]. In any case, absence of articulation does not preclude the use of speech imagery, so that comprehension when there is no EMG response could still be based, as a requirement, on silent speech.

Since this paper is directly concerned with the role of speech in reading and in learning to read, it is certainly pertinent to consider the nature and the effectiveness of reading skill in the deaf, who have had no experience of aural speech. There are, of course, other pathologically handicapped populations who do not speak, such as certain aphasics. But unless deafness is present, opportunity to hear speech has been present. We have seen how tenaciously the normal hearing person clings to his phonological code when he reads material that he is to remember; and we have seen the way young children appear spontaneously to come to use a phonological name-code when they memorize a series of pictures. When, through experimental manipulation, the phonological code becomes difficult to use, STM is gravely impaired. We have therefore suggested that there is a close association between silent reading and silently speaking because comprehension requires memory. The STM involved seems best supported by phonological coding.

Only among the deaf can we find people with no speech experience -- or at any rate with relatively little -- who for our present purposes therefore provide an invaluable control. We can simply ask the question: can the deaf learn to read? Were the answer an emphatic negative, much of our enquiry would come to a comfortable end, since we would be very close to proving that reading is possible only when phonological coding is available. But of course the truth is not as simple as that, and most deaf children do, to some extent, learn to read. That immediately tells us that for hearing people, phonological coding is a preference, not a necessity. Knowing the versatility of man, this is what we would have expected. But since it has turned out to be exceedingly difficult to get hearing subjects to abstain from their predilection for phonological coding, we might hope that studies of the deaf would give us clearer insight into the rules governing the development and use of reading codes in general. By this we refer to the kinds of transductions that occur between seeing a printed item and storing it in a form in which it is available for future use, so that we can say: an item is remembered; a phrase is comprehended.

The paradox of a "normal deaf" population is too great for the term to be meaningful. In the first place, anyone whose hearing does not fall within defined "normal" limits is deaf. In the studies to be discussed, we are concerned with a category of profound deafness. Over the range of useful speech frequencies between 250 and 4000 Hz, our subjects, mostly children, had hearing losses of not less than about 75 dB in their better ear. These children, even with hearing-aid amplification, would have very little awareness of different speech sounds. Unless they can see lips moving, they would not know that someone in a room was speaking. They are profoundly deaf. Second, their medical records would show that they were either born deaf -- sometimes of deaf parents -- or became profoundly deaf within the first year or so of life; they have never used normal speech. These are features common to any person who can loosely be described as "congenitally and profoundly deaf". All of the deaf subjects of the studies to be discussed are in this category. But by the time a deaf child is likely to take part in reading experiments, variations in home background, intelligence, other pathology, and particularly in the educational theories that have guided his school training, will become important sources of experimental "noise." In using these subjects in experiments we have to confine ourselves to relatively simple questions and be content only with large experimental effects.

Probably in most schools for the deaf some attempt is made to teach deaf children to speak. There are some schools where no other mode of communication is used, come what may, between teachers and pupils. Other schools regard communication, by no matter what means, as essential. Even less, then, than with normal children, can we talk of deaf children "on average." But no matter how speech oriented a school might be, children who are profoundly deaf from an early age exceedingly rarely attain a quality of speech that can be readily understood by strangers. It is exceedingly rare that a hearing person can address such a deaf person normally, even though making sure that his lips can be seen, and get normal comprehension of his speech at normal speeds. It is exceedingly rare to see two such deaf children speaking to each other using the language that hearing persons would use. All of these things do occur, but they are levels of speech and language skills that certainly fewer than one in a hundred profoundly deaf children achieve.

If, then, speech is a skill acquired by the deaf with immense difficulty when acquired at all, from all that has gone before we would expect reading also to present grave difficulty to the deaf. Apart from anything else, this would be due to severe vocabulary deficiency and to the handicap of never having acquired the easy use of the innumerable rules of grammar that the rest of us pick up through hearing before we usually begin to learn to read. But there is something else also. There is the fact of only partial, or even of complete absence of, availability of phonological coding as an aid to comprehension. And indeed there is very good evidence that profoundly deaf children have great difficulty in learning to read. Myklebust [1960] reports on a number of studies of reading ability in the deaf.

For example, on the Columbia Vocabulary Test, at 9 years the mean score for normal children is about 20; for deaf children it is just over 3. At 11 years the respective values are 33 and 6; at 13 years, 43 and 10; at 15 years, 63 and 11. Not only is the difference huge, but it increases with age. The 15-year-old deaf child has a much poorer vocabulary than a 9-year-old normal child. In every aspect of the grammatical handling of words the deaf child is many years behind the hearing child on attainment tests. It is not therefore surprising that similar discrepancies are reported when memory span for verbal material is examined. Blair [1957], Furth [1964], Olsson and Furth [1966], and others have shown that deaf children are grossly inferior in digit span to hearing children. But when the material to be remembered consists of shapes that do not readily have names, deaf and hearing children have the same span. Furthermore the Olsson and Furth data show that whereas for hearing subjects the difference in span between digits and shapes is substantial, for the deaf it is quite small. It is not impossible, then, that for the deaf, digits are just another set of shapes. There is no internal evidence in the Olsson and Furth study to indicate whether the deaf group did in fact code digits by shape. Olsson and Furth suggest that they could have finger-spelled them; there is uncertainty. The point is that there is some suggestive evidence here that phonological coding has unique advantages for STM over certain other codes, and among these we must include visual codes. This is unquestionably a loose argument since we do not know for certain what kind of codes in this experiment either group used with any of the test material. Then, as Furth [1966] points out elsewhere, any direct comparison of cognitive abilities between deaf and hearing must be taken with great caution because of innumerable differences that cannot be taken into account when matching groups.

Since many studies show deaf and hearing to have similar span on material not easily verbalized, but inferiority of the deaf when nameable items are used, the need is emphasized for a clearer understanding of the STM code used by the deaf when presented with words and other items having familiar names. We have continuously pointed out the important role that phonological coding has in reading: but we are equally clear that the deaf do read. They could be using a phonological code very ineffectively, or they could be using some other code or codes that are in themselves less efficient for the purpose than is a phonological code.

(...)

Now we have tended to discuss codes used by the deaf when they read printed verbal material as if we believe that when an articulatory code is not used, then a visual code is used. This is not our belief. It so happens that a good deal of the data we have used in this discussion can be taken to support this view. But this is largely because tasks have been designed that would favor visual coding were it available to the deaf. We are quite sure that other codes are also available to most of the deaf that are based on finger-spelling, signing, and lip-reading at the least. On present knowledge it is much harder to construct reading test material that could greatly benefit, for example, a finger-spelling code. Indeed it seems plausible that with reading skills, since the deaf do not have available what seem to be the most efficient codes, they could very well make extensive use of multiple coding, no one code being particularly efficient, to a much greater extent than do those of us with normal sensory experience.

(...)

By analogy, words have been "designed" to provide high auditory discriminability. When we read we continue to take advantage of this fact. The visual appearance of words has limited relevance. The deaf try to read a language perfectly adapted for the use of hearing people. They are faced with many commonly used words of the same length and general configuration; the majority of words begin with a minority of different consonants, usually followed by a vowel, all of which sound different but which in print, except for i, look quite similar. Just as we try to teach the deaf a spoken language that permits them to use only minimally those cognitive abilities that are intact, so too we try to teach them a written language, making the tacit assumption that when a deaf child sees a printed word he will cognitively “do” with it just what a hearing child will.

In summary, what this section has tried to do is to show that speech or speech-based codes are not necessary for reading. We have agreed that deaf children who do not use speech-based codes are poor readers. But we have argued that the reasons for this are far from simple. In particular it may be that nonspeech codes, when developed by training to the extent that speech codes are, would not in themselves be inefficient. But when used to transduce printed words that derive from spoken forms, then they are bound to be at a disadvantage.

## Thursday, February 11, 2010

### The legacy of Zellig Harris: Language and information into the 21st century

(John Goldsmith)

Zellig Harris (1909–1992) cast a long shadow across twentieth century linguistics. In mid-century, he was a leading figure in American linguistics, serving as president of the Linguistic Society of America in 1955, just a year before Roman Jakobson. It is fair to say that during that decade—the years just before generative grammar came on the scene—Zellig Harris and Charles Hockett were the two leading figures in the development of American linguistic theory. Today, I daresay Harris is remembered by most linguists as the mentor and advisor to Noam Chomsky at the University of Pennsylvania—and the originator of transformational analysis.

(...)

Harris’s work must be situated in terms of the conflict between two visions of linguistic science: the MEDIATIONALIST view, which sees the goal of linguistic research as the discovery of the way in which natural languages link form and meaning, and the DISTRIBUTIONALIST view, which sees the goal as the fully explicit rendering of how the individual pieces of language (phoneme, syllable, morpheme, word, construction, etc.) connect to one another in the ways that define each individual language. The mediationalist view lurks behind most conceptions of language study, formal and nonformal, but it was Harris’s view that each successive improvement in linguistic theory took us a step further AWAY from the mediationalist view, much as advances in biology led scientists to understand that the study of living cells required no new forms of energy, structure, or organization in addition to those which were required to understand nonliving matter. Harris had no use for mediationalist conceptions of linguistics. For linguists in 2005, steeped as we are in an atmosphere of linguistic mediationalism, this makes Harris quite difficult to understand at first.

Harris’s goal was to show that all that was worthwhile in linguistic analysis could best be understood in terms of distribution of components at different hierarchical levels, because he understood—or at least he believed—that there was no other basis on which to establish a coherent and general linguistic theory. His genius lay in the construction of a conception of how such a vision could be put into place concretely.

Harris’s view, from his earliest work through his final statements in the early 1990s, was that the best foundational chances for linguistics were to be found in establishing a science of EXTERNAL LINGUISTIC FACTS (such as corpora, though they would typically be augmented by other external facts, like speaker judgments), rather than a science of internalized speaker knowledge. (...)

(...)

Harris did not appear to make a great effort to make his conclusions easily accessible to the reader. And yet once his ideas are understood, it is hard to deny that his way of stating them is direct, elegant, and striking. Let us approach the central idea of all of Harris’s work, as summarized by Harris himself in his introductory paper to this volume:

"The structure of language can be found only from the non-equiprobability of combination of parts. This means that the description of a language is the description of contributory departures from equiprobability, and the least statement of such contributions (constraints) that is adequate to describe the sentences and discourses of the language is the most revealing."

Picking this apart into pieces:

1. Linguistic analysis consists of building a representation out of a finite number of
formal objects.

2. The essence of any given language is the restrictions, or constraints, that it places on how the pieces may be put together—these may be phonemes, morphemes, constituents, what have you. If there were no structure, then pieces could be put together any which way; structure MEANS — it is nothing more or less than — restrictions on how pieces can be put together.

3. These restrictions may be absolute ('no pk clusters are permitted in this language') or, much more likely, they are statements of distribution, best expressed in the mathematics of probability. A crude reformulation of this would be in the language of markedness, which is arguably an informal way of talking about distributional frequencies. A better way is to use the mathematical vocabulary of distributions, which is to say, probability theory.

4. A formal system can be described formally in a multitude of ways. These are not equivalent: there is a priority among them based on their formal length. In general, one will be significantly shorter than the others, and knowing its length is important.
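Harris's "departures from equiprobability" can be made concrete in a few lines. The sketch below is my own illustration, not Harris's procedure; the toy corpus and the function name are invented. It scores each letter bigram by how far its observed probability departs from what independence of the two letters would predict (in modern terms, pointwise mutual information):

```python
from collections import Counter
from math import log2

def bigram_departures(corpus):
    """Score each adjacent letter pair by its departure from
    equiprobability: log2 of observed probability vs. the probability
    independence would predict (pointwise mutual information)."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())
    return {
        (a, b): log2((c / n_bi) / ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))
        for (a, b), c in bigrams.items()
    }

scores = bigram_departures("the cat sat on the mat the cat ran")
# 'h' follows 't' far more often than chance predicts, so this
# departure is positive; for Harris, it is exactly such constraints
# that constitute the structure of the language.
print(scores[('t', 'h')])
```

A real analysis would run over a large corpus, and over morphemes and words as well as letters; the principle is the same at every level.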

It is probably impossible to understand the intellectual pull of this research program if one does not appreciate the revolutionary character (and the perceived success) of the phoneme. If the phoneme today seems passé, the discarded error of an earlier generation, then today’s linguist should think of its descendant—for most of us, the idea of an underlying segment. (...)

(...)

It was his view that the important relationship between sounds lay not in their phonetics, but in their DISTRIBUTION (...). What tells us that the flap and the other t’s of English are realizations of a single phoneme /t/ is not the similarity of sound, but the complementarity and predictability of the distribution.

"It is pointless to mix phonetic and distributional contrasts. If phonemes which are phonetically similar are also similar in their distribution, that is a result which must be independently proved. For the crux of the matter is that phonetic and distributional contrasts are methodologically different, and that only distributional contrasts are relevant while phonetic contrasts are irrelevant.

This becomes clear as soon as we consider what is the scientific operation of working out the phonemic pattern. For phonemes are in the first instance determined on the basis of distribution. Two positional variants may be considered one phoneme if they are in complementary distribution; never otherwise. In identical environment (distribution) two sounds are assigned to two phonemes if their difference distinguishes one morpheme from another; in complementary distribution this test cannot be applied. The distributional analysis is simply the unfolding of the criterion used for the original classification. If it yields a patterned arrangement of phonemes, that is an interesting result for linguistic structure."
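The procedure Harris describes can be stated as a small program. The sketch below is a hypothetical mini-analysis (the toy transcriptions and function names are mine, not Harris's): each sound's distribution is the set of (preceding, following) environments in which it occurs, and two sounds are candidates for a single phoneme when those sets never overlap.

```python
def environments(transcriptions, sound):
    """Collect the (preceding, following) contexts of a sound,
    with '#' marking a word boundary."""
    envs = set()
    for word in transcriptions:
        for i, seg in enumerate(word):
            if seg == sound:
                before = word[i - 1] if i > 0 else "#"
                after = word[i + 1] if i < len(word) - 1 else "#"
                envs.add((before, after))
    return envs

def complementary(transcriptions, a, b):
    """True when the two sounds never occur in the same environment,
    i.e. they are in complementary distribution."""
    return not (environments(transcriptions, a) & environments(transcriptions, b))

# Toy data: 'D' (a flap, say) occurs only between vowels, 'T' elsewhere.
flap_data = ["Ta", "aDa", "aT"]
print(complementary(flap_data, "T", "D"))
```

On this toy data the two sounds are complementary; adding a word like "aTa" would put "T" into an intervocalic environment and destroy the complementarity.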

(...)

Rudolf Carnap had (...), in The logical syntax of language (published in 1934 under the title Logische Syntax der Sprache), argued for a coming together of formal syntax and formal logic: by formal, he meant analysis ignoring meaning and considering only categories and combinations of symbols; by syntax, the rules by which items are combined to form expressions (sentences); and by logic, the rules by which valid inferences from one sentence to another can be made. The contrast between syntax and logic was dubbed by Carnap (in English) as the difference between FORMATION rules and TRANSFORMATION rules, an interesting terminological suggestion and one that may have later influenced Harris.

## Wednesday, February 10, 2010

### Component Processes in Reading: A Performance Analysis

(Michael I. Posner, Joe L. Lewis, and Carol Conrad)

Introduction

A detailed analysis of the internal structures and mental operations involved in the process of reading might help us understand problems in acquiring the skill. This is a point that our keynote speaker [Gibson 1965] made several years ago. In the last few years there has been a considerable advance in the development of techniques used to isolate stages of processing and their interrelationships [Neisser 1967; Posner 1969; Sternberg 1969]. This paper is an effort to review both the techniques and the results that might aid in elucidating the component processes in reading.

(...)

Isolable Subsystems

Let us be quite concrete. A young child who has never before seen the symbol "A" must be aware primarily of the visual form of the letter. But this is not so with an adult. Consciousness of the letter is suffused with past experience: its association to other visual forms (e.g., "a”), the phoneme /a/, its status as a vowel, and as the first letter of a list called the alphabet. Yet, even in the skilled reader, by appropriate
experimental technique, we can isolate the visual system processes from these other influences. We can, in fact, argue that the visual processes represent in the adult an isolable subsystem the properties of which can be studied.

Experiments showing that the visual process is an isolable subsystem of letter processing in the adult [Posner 1969] suggest that there are important psychological problems involved in passing from one subsystem to another (e.g., visual to name). Perhaps it is at the boundaries between any two isolable subsystems that special difficulties in cognitive processing lie. Indeed, the problem of coordinating modality-specific subsystems may represent one explanation of the difficulty in the seemingly simple translation from a visual word to the word name.

The idea of an isolable subsystem is a complex one [Miller 1970]. In the recent experimental literature there have been many efforts to discover serial "stages" of processing [Clark and Chase 1971; E. E. Smith 1968; Sternberg 1969; Trabasso 1970]. These tasks tend to be ones in which one stage must depend directly upon the outcome of the previous stage. There is still dispute about the details of these models (e.g., whether the comparison stage is serial), but they have had sufficient success to show that internal mental operations can be isolated for study.

(...)

Visual Codes

The words that you are presently reading are unique configurations of print. The names that these words represent are abstractions in the sense that they stand for a variety of perceptually different visual forms (e.g., "PLANT, plant") and auditory patterns (e.g., the word plant spoken by a male or a female). The name of a word gains its meaning from the semantic structure to which it is related. The word plant may be related to a structure dealing with living things, or to one dealing with labor unions and assembly lines [Quillian 1969]. At one level the word is a visual code, at another a name, and at still another, an aspect of the overall semantic structure of which it is a part.

AN EXPERIMENTAL METHOD

The mental operations that transform one code into another can be observed in the time required for making classifications. Suppose that the subject is shown a pair of items and is asked to press, as quickly as possible, one key if they are "same" and another if they are "different." Figure 1 (left diagram) illustrates the results from an experiment in which items were letters and the definition of "same" was "both vowels" or "both consonants" [Posner and Mitchell 1967]. If the letters were identical in physical form ("A A"), the reaction time was faster than if they had only a name in common (e.g., "A a"), which in turn was faster than when items shared only the same class (e.g., "A e," both vowels). A similar result [Schaeffer and Beller 1970] is shown (right diagram) for an experiment in which word pairs were used and the subject was required to press the key when both words were "living things" or both were "nonliving things". These figures illustrate the method of measuring mental operations by the amount of time they require.

ISOLABILITY

Can we isolate operations that are performed on visual information? The problem is how to determine if the operations performed on letters or words use visual representations rather than letter names or semantic information. Suppose that a pair of items is presented simultaneously. The subject is then required to press one key if the two items have the same name and another key if the names are different. If the two are physically identical (e.g., "A A"), it is logically possible to base the match upon a visual form. On the other hand, for letters like "A a", which are not similar in physical form, the match is more likely to be based upon a learned correspondence between the visual forms, such as the letter name.

Experimental data show that these logical distinctions apply to actual performances of subjects. The time for matching identical letters (e.g., "A A") is faster than that for upper and lower case forms of the same letters (e.g., "A a"). (...)

(...)

FAMILIARITY

The isolability of visual and name processes, even for letters, allows us to study properties of the visual code independently of names. For example, it is now clear that any number of letters may be matched simultaneously, as long as they are physically identical. This means that the contact between external letters and their internal pattern recognizers can go in parallel and with no interference [Beller 1970; Donderi and Case 1970]. (...)

In her 1965 paper, Gibson argues that the primary units of analysis for reading are spelling patterns rather than single letters. It is possible now to show that these familiarity effects occur within the visual code and do not depend upon feedback from the letter names. Recent experiments have allowed us to study the influence of past experience upon mental operations within the visual system. If a subject is required to match two strings of letters to determine if they are physically identical, he can do so much faster if they form a familiar word than if they are nonsense strings [Eichelman 1970; Krueger 1970]. (...)

Moreover, the word appears to form a unit. If the subject has to identify a single letter from a brief exposure, he can do so as efficiently when a word is presented as when a single letter is presented [Reicher 1969; Wheeler 1970]. It thus appears that having had past experience with certain sequences of letters allows us to perform matching and other visual operations upon them with great efficiency.

(...)

(...) In one study [Smith, Lott et al. 1969] it was found that subjects scanned a visual array just as rapidly whether the visual patterns were familiar ("PLANT, plant") or quite new ("pLaNt"), provided that the letters were equated for visibility. (...)

Nothing we have said suggests a solution to the more general problem of pattern recognition. We simply do not know how the input is brought into contact with the internal system. (...)

(...)

Name Codes

The concept of the name of a word or letter is an extremely important one for the psychology of reading. Many theories implicitly assume that the internal representation that stands for the name of a word is the same regardless of the modality through which the information was received [Morton 1969]. This assumption greatly simplifies an analysis of reading. The unique problem of reading would then involve mainly converting from a visual to a name code. From there on, comprehension would be based on mechanisms already present for listening. We have already reviewed one objection to this idea, namely, the view that meaning is connected directly to the visual forms. A second objection is that subjects can recall the channel of entry by which a stimulus was presented [Murdock 1967]. This objection, however, can be met by recognizing that the activation of a name code does not obliterate information about the past history of the input [Posner 1969].

Name Codes and Meaning

It is obvious that knowing the name of a word is not the same as knowing its meaning. James [1890] commented on this point (Vol. I, p. 263), "it is more difficult to ascend to the meaning of a word than to pass from one word to another; or to put it otherwise, it is harder to be a thinker than to be a rhetorician, and on the whole nothing is commoner than trains of words not understood." It is well known that the word associations of children often involve similarity of word sound, a type of association that is reduced in frequency later in life. If subjects are asked to signify that they have read a word, they respond much more quickly than when required to signify that they understand it [Wickens 1970]. (...)

(...)

## Tuesday, February 9, 2010

### Reading, the Linguistic Process, and Linguistic Awareness

(Ignatius G. Mattingly)

More recently, however, the perception of speech has come to be regarded by many as an "active" process basically similar to speech production. The listener understands what is said through a process of "analysis by synthesis" [Stevens and Halle 1967]. Parallel proposals have accordingly been made for reading. Thus Hochberg and Brooks [1970] suggest that once the reader can visually discriminate letters and letter groups and has mastered the phoneme-grapheme correspondences of his writing system, he uses the same hypothesis-testing procedure in reading as he does in listening (Goodman’s [1970] view of reading as a "psycholinguistic guessing game" is a similar proposal). Though the model of linguistic processing is different from that of Bloomfield and Fries, the assumption of a simple parallel between reading and listening remains, and the only differences mentioned are those assignable to modality, for example, the use which the reader makes of peripheral vision, which has no analog in listening.

(...)

We know that all living languages are spoken languages, and that every normal child gains the ability to understand his native speech as part of a maturational process of language acquisition. In fact we must suppose that, as a prerequisite for language acquisition, the child has some kind of innate capability to perceive speech. In order to extract from the utterances of others the "primary linguistic data" that he needs for acquisition, he must have a "technique for representing input signals" [Chomsky 1965, p. 30].

In contrast, relatively few languages are written languages. In general, children must be deliberately taught to read and write, and despite this teaching, many of them fail to learn. Someone who has been unable to acquire language by listening -- a congenitally deaf child, for instance -- will hardly be able to acquire it through reading; on the contrary, as Liberman and Furth [Kavanagh 1968] point out, a child with a language deficit owing to deafness will have great difficulty learning to read.

The apparent naturalness of listening does not mean that it is in all respects a more efficient process. Though many people find reading difficult, there are a few readers who are very proficient: in fact, they read at rates well over 2000 words per minute with complete comprehension. Listening is always a slower process: even when speech is artificially speeded up in a way which preserves frequency relationships, 400 words per minute is about the maximum possible rate [Orr, Friedman et al. 1965]. It has often been suggested [e.g., Bever and Bower 1966; Bower 1970] that high-speed readers are somehow able to go directly to a deep level of language, omitting the intermediate stages of processing to which other readers and all listeners must presumably have recourse.

(...) The listener is processing a complex acoustic signal in which the speech cues that constitute significant linguistic data are buried. Before he can use these cues, the listener has to "demodulate" the signal: that is, he has to separate the cues from
the irrelevant detail. The complexity of this task is indicated by the fact that no scheme for speech recognition by machine has yet been devised that can perform it properly. The demodulation is largely unconscious; as a rule, a listener is unable to perceive the actual acoustic form of the event which serves as a cue unless it is artificially excised from its speech context [Mattingly, Liberman et al. 1971]. The cues are not discrete events well separated in time or frequency; they blend into one another; we cannot, for instance, realistically identify a certain instant as the ending of a formant transition for an initial consonant and the beginning of the steady state of the following vowel.

The reader, on the other hand, is processing a series of symbols that are quite simply related to the physical medium that conveys them. The task of demodulation is straightforward: the marks in black ink are information; the white paper is background. The reader has no particular difficulty in seeing the letters as visual shapes if he wants to. In printed text, the symbols are discrete units. In cursive writing, of course, one can slur together the symbols to a surprising degree without loss of legibility. But though they are deformed, the cursive symbols remain essentially discrete. It makes sense to view cursive writing as a string of separate symbols connected together for practical convenience; it makes no sense at all to view the speech signal in this way.

(...)

Our view is that reading is a language-based skill like Pig Latin or versification and not a form of primary linguistic activity analogous to listening. From this viewpoint, let us try to give an account, necessarily much oversimplified, of the process of reading a sentence.

The reader first forms a preliminary, quasiphonological representation of the sentence based on his visual perception of the written text. The form in which this text presents itself is determined not by the actual linguistic information conveyed by the sentence but by the writer's linguistic awareness of the process of synthesizing the sentence, an awareness which the writer wishes to impart to the reader.

(...)

How can we explain the very high speeds at which some people read? To say that such readers go directly to a semantic representation, omitting most of the process of linguistic synthesis, is to hypothesize a special type of reader who differs from other readers in the nature of his primary linguistic activity, and differs in a way which we have no other grounds for supposing possible. As far as I know, no one has suggested that high-speed readers can listen, rapidly or slowly, in the way they are presumed to read. A more plausible explanation is that linguistic synthesis takes place much faster than has been supposed, and that the rapid reader has learned how to take advantage of this. The relevant experiments (summarized by Neisser [1967]) have measured the rate at which rapidly articulated or artificially speeded speech can be comprehended, and the rate at which a subject can count silently, that is, the rate
of "inner speech". But since temporal relationships in speech can only withstand so much distortion, speeded speech experiments may merely reflect limitations on the rate of input. The counting experiment not only used unrealistic material but assumed that inner speech is an essential concomitant of linguistic synthesis.

## Friday, February 5, 2010

### Rate Distortion Theory

(Elements of Information Theory, Thomas M. Cover and Joy A. Thomas)

The description of an arbitrary real number requires an infinite number of bits, so a finite representation of a continuous random variable can never be perfect. How well can we do? To frame the question appropriately, it is necessary to define the “goodness” of a representation of a source. This is accomplished by defining a distortion measure which is a measure of distance between the random variable and its representation. The basic problem in rate distortion theory can then be stated as follows: Given a source distribution and a distortion measure, what is the minimum expected distortion achievable at a particular rate? Or, equivalently, what is the minimum rate description required to achieve a particular distortion?
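In Cover and Thomas's notation, this trade-off is captured by the rate distortion function: the minimum mutual information over all test channels that meet the distortion budget,

$$R(D) = \min_{p(\hat{x} \mid x)\,:\; E\,d(X,\hat{X}) \le D} I(X; \hat{X}).$$

For the standard example of a Gaussian source with variance $\sigma^2$ under squared-error distortion, this evaluates to $R(D) = \frac{1}{2}\log\frac{\sigma^2}{D}$ for $0 \le D \le \sigma^2$, and $R(D) = 0$ for larger $D$.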

One of the most intriguing aspects of this theory is that joint descriptions are more efficient than individual descriptions. It is simpler to describe an elephant and a chicken with one description than to describe each alone. This is true even for independent random variables. It is simpler to describe X1 and X2 together (at a given distortion for each) than to describe each by itself. Why don’t independent problems have independent solutions? The answer is found in the geometry. Apparently, rectangular grid points (arising from independent descriptions) do not fill up the space efficiently.
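The geometric claim can be checked numerically. The sketch below is my illustration, not from the book: it quantizes pairs of independent Gaussians two ways at the same rate, a product of two 4-level scalar quantizers versus a 16-codeword vector quantizer started from that same product grid and refined by Lloyd iterations in two dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(20000, 2))  # pairs of *independent* Gaussians

def lloyd(points, codebook, iters=40):
    """Generic Lloyd iterations: assign each point to its nearest
    codeword, then move each codeword to the mean of its cell.
    Training distortion never increases from step to step."""
    for _ in range(iters):
        dist = ((points[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dist.argmin(1)
        for k in range(len(codebook)):
            if (idx == k).any():
                codebook[k] = points[idx == k].mean(0)
    return codebook

def mse(points, codebook):
    dist = ((points[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dist.min(1).mean()

# Individual descriptions: one 4-level scalar quantizer per coordinate
# (the coordinates are iid, so one codebook serves both), giving a
# 16-point rectangular product grid for each pair.
levels = lloyd(data[:, :1], np.linspace(-2.0, 2.0, 4)[:, None].copy())
product_grid = np.array([[a, b] for a in levels[:, 0] for b in levels[:, 0]])

# Joint description: 16 codewords free to move anywhere in the plane,
# started from the product grid and refined in 2-D.
joint = lloyd(data, product_grid.copy())

print(mse(data, product_grid), mse(data, joint))
```

Because the joint codebook starts at the product grid and each Lloyd step can only lower the training distortion, the joint description is never worse at the same rate (4 bits per pair); the improvement it finds as the codewords drift off the rectangular grid is exactly the space-filling advantage the text describes.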

### Mathematical Formulas on Blogger

Adding formulas to your Blogger is quite easy. Follow the link:

See a first test here
$A = \sum_{i=0}^{N} x_i$

## Tuesday, February 2, 2010

### Written Signs

(Writing Systems: An introduction to their linguistic analysis, Florian Coulmas)

These technical developments have repercussions on the structure of the signs and the way they are processed. Recognition of the signs is no longer based on similarity but on discrimination, as a pictorial likeness is gradually replaced by the necessity to distinguish one sign from another. Differentiation thus becomes the principal design feature of the signs. For example, that the sign of a bull resembles a bull is now less important than that it differs from the sign of a cow.

(...)

The relationship between signs and objects is superseded by multiple relationships between signs and other signs as the scribes' chief concern. The signs thus become part of a graphic system characterized by negative differentiation. The underlying principle is that the many signs are to be kept from becoming confused with one another, much like the units of a language. The creation of new signs follows the same principle when lines are added to existing signs or one sign is adjoined to another. Contrast with all other signs becomes a defining feature of every sign.

### Writing Auto-indexicality

Every written document not only embodies the message 'I am meant to be read' but also instructions, however indirect, as to how this can be done. In other words, the systematic make-up of writing contains a key to its own decipherment.

(Writing Systems: An Introduction to Their Linguistic Analysis, Florian Coulmas :21)

### Vowel Incorporation

Most writing systems are interpreted as referring in some way to the phonetic composition of speech forms. In the process the natural continuous flow of sound is artificially broken up into discrete units of various sizes. The syllable is an intuitively salient unit exploited to this end by several ancient and modern writing systems such as Assyrian cuneiform, Cypriot and Japanese kana (...). Syllables are typically composed of consonants and vowels, which, in the Western tradition, as a reflection of the Greek alphabet, are both uniformly considered sound segments, while in Semitic writing consonants and vowels are conceptualized and symbolized differently. The use of 'matres lectionis' in archaic Semitic documents is clear evidence that the Semitic scribes had a notion of a vowel as a unit of language. For reasons having to do with the conservative nature of writing systems in general and with the semantic significance of consonants in Semitic languages, they chose not to, or were not able to, treat both classes of sounds in the same manner.

Linguistic analysis

All writing systems are based on, and hence more or less explicitly incorporate, a linguistic analysis. In the case of Indian writing systems this is especially obvious.

### Symbols as a Generalized Language

Writing is an example of a language isomorph in that it has a close part-to-part correspondence with natural language, and scientific formulae are extensions of language in that they begin with language and then go on to constructions which are language-like, but not actually used in natural language. Symbols are still wider generalizations of language than either isomorphs or extensions. In the widest sense a symbol is anything, linguistic or non-linguistic, which stands for, or "symbolizes", something else. The symbol "a" stands for the sound [a]; the visual symbol "|", whatever you call it, stands for the number 'one' in more than one system of writing. A repeated low-pitch horn may stand for a warning that there is a heavy fog in the harbour. A symbol, however, has to be something which can be conveniently produced, presented, and perceived without necessarily perceiving the object it stands for.
(...)
we shall follow the terminology of Charles W. Morris (...) What Morris deals with is not symbols but, as the title of his work implies, signs, of which symbols form a special case. For example, lowering clouds are a sign of rain, a shiny wet road is a sign that the road is slippery, but the road sign which says "Slippery When Wet" is not only a sign, but also a symbol. In general, as we have noted above, a symbol is something which can be conveniently produced and has a conventionalized, usually arbitrary, relation to what is symbolized.

What is one symbol?

In analysing a complex thing, such as a symbolic system, there is always the twofold problem of (1) identification or differentiation on the one hand, and (2) individualization or segmentation on the other. We have already met with similar problems in the case of phonemes and words. More generally, we can ask: What similar or different things can be classed together as instances of the same symbol? This is a question of kind. Or we can ask: How large a chunk of a thing extending in space or time, or both in space and time (such as gestures), shall be considered one piece of a symbol? This is a problem of size. To revert to our linguistic interest in things grammatical, the former is a paradigmatic problem, while the latter is a syntagmatic problem. This is in fact not too far from Morris's terminology, since he calls the study of the structure of signs (including symbols) themselves syntactics, which of course has a much wider application than syntax in the grammatical sense.

(1) To take the problem of identity of symbols first, it will be more convenient to regard a symbol not as one event or one thing, but as a collection of events or things considered as members of a class; in other words, a symbol is usually taken as a type rather than a token. On the other hand, one instance of a symbol, or token, such as an utterance made on one occasion, is often termed a 'signal'. In common usage, one speaks of signals usually in connection with special forms of visual and other forms of communication other than linguistic forms, but there is no reason why a signal in the sense of one instance of the use of a symbol should not include language.
(...)

(2) As to the problem of segmentation, there are two sides to consider: (a) What is one symbol and what is a complex of symbols? (b) Where does a particular symbol begin and end? These are obviously generalizations of corresponding linguistic problems of subunits of language, with which we are already familiar. As for the complexity of symbols, no upper limit can be set. As Rudolf Carnap has noted, to any sentence which is reputed to be the longest sentence possible, one can always add the co-ordinate clause 'and the moon is round', which makes it a longer sentence. (...) The lower limit to the size of a symbol is not the smallest physical element which is perceivable, but a symbol which, even if perceivable when subdivided, would no longer be a symbol (or a set of symbols) in the system of which it is a part.

(...)

Symbol and object

(1) Symbols and icons. The normal relation between a symbol and its object, or denotatum in Morris's terminology, is conventional, arbitrary, and fortuitous. There is usually no similarity or causal relation between the two. There is for example nothing intrinsically long about the English word 'long' or intrinsically short about the word 'short'. In fact the word 'short' is longer not only graphically but also phonetically, and foreigners often tend to pronounce it 'shot' in order to make it sound more symbolic -- symbolic in the popular sense we noted above: 'fitting, expressive, consonant, appropriate', which is precisely the opposite of 'conventional, arbitrary', etc. In this popular sense red is symbolic of danger, stop, etc., because it is physiologically more impressive.

(...)

(4) Ambiguity, vagueness, and generality. Symbol and object may correspond in the relation of one to one, one to many, many to one, or many to many, it being understood, of course, that one symbol may consist of a class of various members whose differences do not matter.

### Syllables

(Writing Systems: An introduction to their linguistic analysis, Florian Coulmas)

Intuitive notions of the syllable are vague. Attempts at precision move the discussion to a different level of analytical notions defined in a theoretically justified way. In phonology, the syllable is seen either as the minimum unit of sequential speech sounds or as a unit of the metrical system of a language. Certain theories consider the syllable as a basic phonological unit 'sui generis', while others derive its properties from those of the composite phonemes. Clearly, a syllable is a unit of articulation, and although a universally accepted articulatory definition is not available, phoneticians of different schools are agreed that syllables possess psychological reality for speakers. A syllable is a unit of speech that can be articulated in isolation and bear a single degree of stress, as in English, or a single tone, as in Chinese. Different languages allow for different syllables. The specific structure of possible syllables is thus part of the phonological system of a language. In very general terms, syllables are units of speech consisting of an obligatory nucleus, usually a vowel (V), and optional initial and final margins, usually consonants (C). An alternative way of describing the structure of the syllable is to divide it into onset and rhyme, where the onset is the initial margin and the rhyme is further subdivided into peak and coda. A syllable ending in a vowel, with no coda, is called 'open', and a syllable with a consonant in coda position 'closed'.
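The onset/nucleus/coda decomposition described above can be illustrated with a small script. This is only a toy sketch under crude assumptions (orthographic letters standing in for phonemes, with a, e, i, o, u as the only vowels), not a real phonological parser:

```python
def parse_syllable(syllable):
    """Split a syllable into onset, nucleus and coda.

    Toy model: the letters a, e, i, o, u form the nucleus; all other
    letters count as consonants. Real phonological analysis is
    language-specific and far more subtle.
    """
    vowels = set("aeiou")
    # Onset: consonants before the first vowel.
    i = 0
    while i < len(syllable) and syllable[i] not in vowels:
        i += 1
    # Nucleus (peak): the contiguous run of vowels.
    j = i
    while j < len(syllable) and syllable[j] in vowels:
        j += 1
    onset, nucleus, coda = syllable[:i], syllable[i:j], syllable[j:]
    kind = "open" if coda == "" else "closed"
    return onset, nucleus, coda, kind

print(parse_syllable("strength"))  # ('str', 'e', 'ngth', 'closed')
print(parse_syllable("no"))        # ('n', 'o', '', 'open')
```

Even this toy version shows the asymmetry Coulmas describes: English tolerates heavy margins like 'str-' and '-ngth', while a CV language would reject both.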

(...)

The syllable is also the domain of stress, another feature of cross-linguistic variation. In French, stress is not very important; it rarely affects meaning. In English it can be distinctive, as in 'increase, a noun, with stress on the first syllable, and in'crease, a verb, stressed on the second syllable. (...)

In some languages vowel length is distinctive, which means that there are minimal pairs of syllables that differ in phonological time only. (...)

The syllable further functions as the unit to which a pitch level is assigned. Languages that use pitch levels to distinguish words are known as 'tone languages', and distinctive pitch levels are called 'tones'. In tone languages it is the relations between the pitches of different syllables, rather than absolute pitch, that are important. (...)

To summarize, segmental composition, stress, duration and tone are properties of the syllable. The importance of these features varies across languages and, although the syllable is crucial as a unit within which the distribution of phonological features can be stated, it is best defined as a unit for each language separately. This has important consequences for the analysis of syllabic writing systems.

### Subliminal Perception

(Cognitive Psychology: A Student's Handbook, Michael W. Eysenck and Mark T. Keane)

In 1957, it was reported in the press that James Vicary flashed the words EAT POPCORN and DRINK COCA-COLA for 1/300th of a second numerous times during the cinema showing of a film. This subliminal advertising allegedly led to an 18% increase in the cinema sales of Coca-Cola and a 58% increase in popcorn sales. However, the film (Picnic) contained scenes of eating and drinking, and the increased sales were probably due to the film itself rather than the subliminal advertising. This conclusion is based on the fact that there is very little evidence from over 200 studies that subliminal advertising is effective in changing behaviour (Pratkanis & Aronson, 1992).

Pratkanis, A. R., & Aronson, E. (1992). Age of propaganda: The everyday use and abuse of persuasion. New York: W.H. Freeman.

### Stroop Effect

The Stroop effect, in which the naming of the colours in which words are printed is slowed down by using colour words (e.g., the word YELLOW printed in red), seems to involve unavoidable and automatic processing of the colour words. However, Kahneman and Henik (1979) found that the Stroop effect was much larger when the distracting information (i.e., the colour name) was in the same location as the to-be-named colour rather than in an adjacent location. Thus, the processes producing the Stroop effect are not entirely unavoidable and so not completely automatic.

### Sound Patterns

(The Sound Shape of Language, Roman Jakobson and Linda R. Waugh)

"Sound Patterns in Language" (1925) was Edward Sapir's momentous contribution to the first issue of the first volume of the review 'Language', published by the newborn Linguistic Society of America. This first American pathfinder (1884-1939) in the theoretical insights into the sound shape of language said that "a speech sound is not merely an articulation or an acoustic image, but material for symbolic expression in an appropriate linguistic context"; and it was on "the relational gaps between sounds of a language" that Sapir put the chief emphasis. Similarly, the topological idea that in any analysis of structure "it is not things that matter but the relations between them", an idea which found a manifold expression in contemporaneous sciences and arts, was a main guide for the exponents of the Prague Linguistic Circle, founded in 1926. They endeavored to derive the characteristics of phonemes from the interrelations of these units, and in the "Project of Standardized Phonological Terminology" of 1930 they defined a 'phonological unit' as a term of an opposition. The concept of 'opposition' took on fundamental importance for the differentiation of cognitive meanings. The question of the relationship between the sense-discriminative units became the necessary requirement for any delineation of functional sound systems.

### Sociolinguistics of Writing

'For a long time, writing was a secret tool. The possession of writing meant distinction, domination, and controlled communication, in short, the means of an initiation. Historically writing was linked with the division of social classes and their struggles, and (in our country) with the attainment of democracy.' (Roland Barthes)

'Literate societies are characterized by a literate environment which promotes extensive and regular use of literacy in all communicative domains. In such societies, illiteracy is considered to be a stigma by both the literate and the nonliterate sections of the society.' (Chander Daswani)

Language is a social fact, which implies that it is a mental phenomenon. Its written form speaks to the mind in its own way, shaping the language users' awareness of their language and hence its identity.

### Signs of Words

All words of necessary or common use were spoken before they were written; and while they are unfixed by any visible signs, must have been spoken with great diversity. -- Samuel Johnson

It would be necessary to search for the reason for dividing language into words - for in spite of the difficulty of defining it, the word is a unit that strikes the mind, something central in the mechanism of language. -- Ferdinand de Saussure

Theoretical words

Words are typical units of lexicology and lexicography. This seems obvious enough, but there has been a great deal of scholarly discussion about the status of the word in language structure. Some linguists avoid the term altogether, giving preference to the morpheme as the smallest and basic grammatical unit. For, while in everyday speech we can live with expressions that have vague and multiple meanings, scientific terms should be unambiguous and, ideally, universally applicable. The word fails on both counts. 'Word' is a highly ambiguous term and hard to define in a way valid for all languages. Words are units at the boundary between morphology and syntax, serving important functions as carriers of both semantic (Sampson 1979) and syntactic (Di Sciullo and Williams 1987) information, and as such are subject to typological variation. In some languages words seem to be more clearly delimited and more stable than in others. The structural make-up of words depends on typological characteristics of languages.

(note: each verb and its forms are the same word, or different words?)

The remarks by Johnson and Saussure quoted at the beginning of this chapter point to the important fact that words are intuitively given units but hard to pinpoint. Once fixed by visible signs, they acquire a corporeal existence. It should be borne in mind that first and foremost words are lexical units or lemmata, that is, analytic units of the written language.

### Signs of Segments

'Each natural language has a finite number of phonemes (or letters in its alphabet) and each sentence is represented as a finite sequence of these phonemes (or letters).' Noam Chomsky, Syntactic Structures

Segments, more specifically phonemic segments, are, it is widely believed, what alphabetic letters encode. However, alphabetic writing has been cited as evidence both for the psychological reality of segments (Cohn 2001: 198) and for the view that segments are a mere projection (Morais et al. 1979). The argument cuts either way. How would it be possible to encode speech as a sequence of discrete graphical elements (letters) unless there were corresponding units in the mental representation of language? (...)

Theoretical segments

Phonologists define segments as ensembles of distinctive features referring to manner and place of articulation. These features are the cornerstone of phonological theory. Their combinations yield segments called 'phones' when they are not viewed as elements of a particular language. (...)

The production of phonetic features in connected speech extends over a period of time, starting before a segment begins and coming to an end only after it has been terminated (Günther 1988: 15). Where, then, is the segment? According to Prince (1992:384), 'one common intuition about talking is that we proceed by emitting a sequence of discrete articulations, rather like the letters of an alphabet'. It is quite common to equate segments with the letters of the alphabet in this manner, as witnessed, for instance, in the quote by Noam Chomsky at the beginning of this chapter. However, over the past several decades, phonologists have moved away from the segment, since they were not able to discern it in the speech signal. Inspection of phonetic reality (connected speech) has not revealed segments corresponding to discrete phonemes (corresponding in turn to discrete graphemes), because the articulators - that is, the physical organs of speech production - work continuously, exhibiting, at any point, the influence of the preceding and following sounds. There is hence broad agreement that 'it is impossible, in general, to disarticulate phonological representation into a string of non-overlapping units' (Prince 1992:386). This is a real problem, for how then shall we interpret letters, which are non-overlapping units? The problem disappears if, for descriptive purposes, we accept a model of language where there is a phonemic level, at which discrete segments are lined up one after another, as in writing.

All attempts to prove that speech actually works on the basis of principles determining the sequential organization of discrete segments have failed. At the same time, Chomsky's above-quoted statement that, on an abstract level, speech is represented in terms of finite sequences of segments is indisputable. As a matter of fact, descriptions of this sort have been highly successful. But a good description of an object need not be isomorphic with it. (...) In like manner we must not confuse a segmental description of speech with the speech itself. In a sense, alphabetic orthographies can be understood as descriptions of their respective languages, but in any event the relationship between sequences of alphabetic letters and speech is never a one-to-one mapping. It is complex in both directions and, like any description, hinges on a certain point of view highlighting some aspects at the expense of others.

Alphabetically written words can be read and can be pronounced, even words like chlororophenpyridamine. The pronounceability of alphabetic words rests on a process known as 'phonological recoding', that is, the transformation of mental representations of sequences of letters into mental representations of sequences of sounds. A great deal of reading research deals with the question of whether and to what extent phonological recoding is necessary for reading alphabetic texts, a problem to which we will return in chapter 9. For present purposes suffice it to note the obvious fact that alphabetic texts can be given a phonetic interpretation: they can be read aloud. While this is true of all writing, more or less, it is widely assumed that in alphabetic writing this rests on the fact that each letter represents a sound. The question then is, what sound?

note: There are some words in our own language we don't know for sure how to pronounce, and some have more than one acceptable pronunciation.

Phonemes

As pointed out above, the prime candidate, the phonetic segment or phone, has proven to be elusive. Phonologists have recourse to a more abstract unit, the phoneme, defined as a phone which fulfils a meaning-differentiating function in a given language. Although there are problems with the phoneme, too, many phonologists continue to use this concept, telling us, for example, that on average languages have 22.8 consonant phonemes and 8.7 vowel phonemes. Maddieson (1984) reports these figures on the basis of studying 317 languages. While he found that they differed on a large scale in their sound inventories, distinguishing between a poor 6 and a luxurious 95 consonants and between an equally disparate 3 and 46 vowels, this is clearly an order of magnitude altogether different from that of words, morphemes and syllables, however counted. In this regard, Cicero's (106-43 BCE) Latin was a plain-vanilla language. With 28 phonemes it is pretty close to the average. What this means is that in the sound pattern of first-century Latin we find 28 important contrasts that are systematically used to differentiate meaning. A contrast is not the same as a unit, although this distinction is often ignored. Consider, for example, the following definition.

Segmental phoneme: a consonant or vowel sound of a language that functions as a recognizable, discrete unit. To have phonemic value, a difference in sound must function as a distinguishing element marking a difference in meaning or identity. (Ives, Bursuk and Ives 1979:253)

The difference between a unit and a contrast is often glossed over like this because our inability to pin down the segment can thus be concealed. It is, however, possible to give every contrast a name, say a letter, which is then used to mark it. This kind of relationship between phonological distinctions and letters has often been interpreted as meaning that 'the purpose of alphabetic orthographies is to represent and convey phonologic structures in a graphic form' (Frost 1992: 255). Who, if anybody, stipulated this purpose is unknown. If orthographies have a purpose, it is to encode and retrieve linguistic meaning in a graphic form. To represent and convey phonological structure is at best a means to that end, which is of no interest to anyone except linguists. Instead of assuming a purpose at all, it seems more prudent to consider an alphabetic orthography as a possible interpretation or description of the phonological structures of a language, and not usually an ideal one for that matter, if by ideal we mean being parsimonious and as simple as possible.

(...)

Written segments

(...) In Johnson's day, a letter was a thing with three attributes: a name (nomen), a graphical form (figura) and a power (potestas), that is, its pronunciation. Form and name were relatively unproblematic, but the power was 'vague and unsettled'.

Uncertainty and polyvalence

This uncertainty has three aspects. One is that, even assuming that each letter of the Latin alphabet was interpreted as a phoneme, these interpretations were clear only as contrasts, that is, within the system of Latin phonology as reflected in spelling. Secondly, some uncertainty is bound to arise whenever letters whose phonetic correspondences are defined with respect to the relevant contrasts of one language are applied to another where at least some of the contrasts are different. There is no complete congruence. Finally, there is the uncertainty of which contrasts are relevant in the hitherto unwritten language and how they should be marked.

(...)

Historical change

Over time, the gap between spelling and pronunciation is bound to widen in alphabetic orthographies, as spoken forms change and written forms are retained. Many of the so-called 'silent' letters in French can be explained in this way. Catach (1978:65) states that 12.83 per cent of letters are mute letters in French, that is, letters that have no phonetic interpretation whatever. Many of them once had phonetic counterparts that, by regular processes of sound change, have been effaced. (...)

Another historical factor that undermined simplicity and cross-linguistic uniformity in sound-letter correspondences of the Latin alphabet has to do with the gradual reversal of the relationship between speech and writing. 'How shall I write this word?' used to be the initial question where the application of the Latin alphabet to an unwritten language was at issue. As time went by, it was superseded by the question 'How shall I pronounce this word?' (...) writing had become an agent of linguistic change, transcending its role as a means of expression. The image became the model.

Some linguists consider that this is an inevitable consequence of writing, as, for example, the title of Kenneth Pike's 1947 book suggests: Phonemics: A Technique for Reducing Languages to Writing. Phonemes are here seen in direct correlation with alphabetic writing, which, from Pike's point of view, is a reduction, an abstraction, rather than a neutral and faithful representation. A letter is a stabilizer, something like a catalyst, which introduces shape where phonic reality is flux. It is worth noting that this is a problem not just of description, but of standardization and the power of a fixed norm. Writing by means of letters that supposedly represent sounds fosters an awareness of the necessity to settle on a variety embodying the canonical form of the language in question.

Suprasegmentals

Sound features such as stress and pitch are essential parts of utterances, but the Latin alphabet provides no means of encoding them. These features are called 'suprasegmentals' because they do not occur before or after, but together with, vowels and sonorants. They relate not to segments but to syllables and sometimes larger units.

Conclusion

The Latin alphabet is the most widely used script of all time. Its simplicity and elegance as the writing system of the Latin language suggest universal applicability on the basis of the common principle of segmentation. More than any other script it is associated with the idea of the sound segment.

### Signaries and Statistics

It is obvious that the complexity of the possible syllables of a language interacts with their number. A language such as Fijian that permits only open syllables is bound to have fewer syllables than one that permits syllables with complex initial and final margins of the type of English 'strength'. Also, it would appear that a language whose basic lexical stratum is monosyllabic needs more syllable types than one that has a basic stratum of polysyllabic lexemes. A writing system that targets the syllable as the key functional unit thus means different things for different languages. (...)
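The interaction of syllable complexity and inventory size is simple combinatorics. A toy illustration with made-up figures (10 consonants and 5 vowels, not drawn from any actual language):

```python
consonants, vowels = 10, 5  # illustrative figures only

# Open syllables only: V or CV (the Fijian-like case)
open_only = vowels + consonants * vowels

# Also allowing one final consonant: V, CV, VC, CVC
with_codas = (open_only
              + vowels * consonants              # VC
              + consonants * vowels * consonants)  # CVC
print(open_only, with_codas)  # 55 605
```

Permitting just a single coda consonant multiplies the inventory roughly elevenfold here; complex margins like those of English push the count far higher still.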

------------------------

The Chinese Script Reform Committee says that there are more than 1,200 syllables in Mandarin; this total number of possible syllables is much smaller than that in English. (...)

------------------------

Syllable Structure: The Limits of Variation
By San Duanmu

The total number of possible syllables, therefore, should be the number of possible onsets times the number of VX rhymes times the number of choices for the extra final C.

Possible monosyllables in English. Total = Onsets x VX x C = 59 x 289 x 10 = 170,510
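Duanmu's estimate is a plain product, and can be checked directly. A quick Python check (using 289 VX rhymes, the figure consistent with the published total of 170,510):

```python
# Upper bound on possible English monosyllables (after Duanmu):
# onsets x VX rhymes x choices for the extra final C.
onsets = 59
vx_rhymes = 289  # the count consistent with Duanmu's total of 170,510
final_c = 10     # choices for the extra final consonant
total = onsets * vx_rhymes * final_c
print(total)  # 170510
```

An order of magnitude in the hundreds of thousands is what makes a complete English syllabary, one sign per syllable, unthinkable.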

------------------------

Clearly, this is an order of magnitude that makes syllabaries unmanageable. In practice there are no, and never have been any, complete syllabaries in the above sense, which confirms the more general truth that no writing system encodes every distinction relevant in its language. Various strategies were developed for syllabic writing to get by with signaries much smaller than the number of speech syllables. An inevitable consequence of this is a certain degree of syntagmatic complexity in combining graphic symbols unambiguously to denote speech syllables.

Where syllabic writing evolved, the number of symbols was gradually reduced. (...)

Much has been made of the economic advantage of syllabic writing over word writing, which stems from the fact that the number of speech syllables of a language is closed while that of words is open. Gelb (1963) in particular considered that economizing on the inventory of signs was the driving force in the development of writing. This is the cornerstone of his theory. To be sure, the structural unit of writing has an effect on the size of a writing system's signary.

### Hearer's Apprehension

(The Sound Shape of Language, Roman Jakobson, Linda R. Waugh)

Indisputably, the grammatical pattern of the sentence, the verbal context of the words at issue, and the situation which surrounds the given utterance prompt the hearer's apprehension of the actual sense of the words, so that he doesn't need to pick up all the constituents of the sound sequence.

(Roy Harris)

'Reading Saussure' might perhaps be regarded as a controversial title for a study of a book which Saussure never wrote. In one sense, we can no more read the Saussure who was the founder of Saussurean linguistics than we can read the Socrates who was the founder of Socratic philosophy. (...) (It may come as something of a shock today to realize that none of Saussure's three courses at Geneva was attended by more than a handful of students.)

Are the key concepts of the 'Cours' to be viewed as deriving specifically from the work of Humboldt, or Paul, or Gabelentz, or Durkheim, or Whitney ...? Or were they, as Bloomfield brusquely claimed in his review of the book (Bloomfield 1923), just ideas which had been 'in the air' for a long time?

### Reaction Time

(...) assessing visual search performance only by reaction time (as is generally done) is limited, because speed of performance depends partially on the participants' willingness to accept errors.

(...)
As McElree and Carrasco (1999, p.1532) pointed out, "RT [reaction time] data are of limited value ... because RT can vary with either differences in discriminability, differences in processing speed, or unknown mixtures of the two effects."

(...)
As Wolfe (1998, p.56) pointed out:

"In the real world, distractors are very heterogeneous [diverse]. Stimuli exist in many size scales in a single view. Items are probably defined by conjunctions of many features. You don't get several hundred trials with the same targets and distractors... A truly satisfying model of visual search will need... to account for the range of real-world visual behaviour."

### Information Rate

(Lawrence Rabiner)

(...) Here we see the steps in the process laid out along a line corresponding to the basic information rate of the signal (or control) at various stages of the process. The discrete symbol information rate in the raw message text is rather low (about 50 bps [bits per second], corresponding to about 8 sounds per second, where each sound is one of about 50 distinct symbols). After the language code conversion, with the inclusion of prosody information, the information rate rises to about 200 bps. Somewhere in the next stage the representation of the information in the signal (or the control) becomes continuous, with an equivalent rate of about 2,000 bps at the neuromuscular control level, and about 30,000-50,000 bps at the acoustic signal level.
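The 50 bps figure for the raw message text follows from simple arithmetic: about 8 sounds per second, each drawn from an inventory of roughly 50 distinct symbols, carries log2(50) ≈ 5.6 bits per sound. A quick check:

```python
import math

symbols_per_second = 8
inventory_size = 50  # roughly 50 distinct sounds/symbols

bits_per_symbol = math.log2(inventory_size)      # about 5.64 bits
rate = symbols_per_second * bits_per_symbol      # bits per second
print(round(rate, 1))  # 45.2, i.e. on the order of 50 bps
```

The same arithmetic explains why the rate climbs at later stages: richer representations (prosody, neuromuscular control, the acoustic waveform) carry more distinctions per second.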
(...)
The steps in the speech-perception mechanism can also be interpreted in terms of the information rate in the signal or its control, and they follow the inverse pattern of the production process. Thus the continuous information rate at the basilar membrane is in the range of 30,000-50,000 bps, while at the neural transduction stage it is about 2,000 bps. The higher-level processing within the brain converts the neural signals to a discrete representation, which ultimately is decoded into a low-bit-rate message.

### Hilary Putnam

In his John Locke lectures, Hilary Putnam argues "that certain human abilities – language speaking is the paradigm example – may not be theoretically explicable in isolation", apart from a full model of "human functional organization", which "may well be unintelligible to humans when stated in any detail". The problem is that "we are not, realistically, going to get a detailed explanatory model of the natural kind 'human being'", not because of "mere complexity" but because "we are partially opaque to ourselves, in the sense of not having the ability to understand one another as we understand hydrogen atoms". This is a "constitutive fact" about "human beings in the present period, though perhaps not in a few hundred years". (Chomsky 2000: 19)

### Psycholinguistics of Writing

(Writing Systems: An introduction to their linguistic analysis, Florian Coulmas)

'When he was reading, his eye glided over the page, and his heart searched out the sense, but his voice and tongue were at rest.' (Augustine)

'Writing requires deliberate analytic action on the part of the child. In speaking, he is hardly conscious of the sound he produces and quite unconscious of the mental operation he performs. In writing, he must take cognizance of the sound structure of each word, dissect it, and reproduce it in alphabetic symbols, which he must have studied and memorized before.' (Lev S. Vygotsky)

(...) the introduction to writing implies a cognitive reorientation and restructuring of symbolic behaviour. Names of objects are conceptually dissociated from their denotata, as signs of physical objects are reinterpreted as signs of linguistic objects, names. In a second step, signs of names are recognized as potentially meaningless signs of bits of sounds, which are then broken down into smaller components.

The bulk of all reading research is concerned with writing systems that make use of the alphabetic notation. (...) It should be kept in mind, however, that this focus on the alphabet has implications for the questions that are asked, how they are pursued, and eventually for theory formation.
(...) linguists and philologists have described and classified writing systems variously as logographic, ideographic, morphosyllabographic, syllabic, phonemic and so on. These classifications are one thing; but how writing systems work in terms of actual perception, processing and production is another. Psycholinguistic research into reading can shed new light on classifications derived from structural descriptions, and lead to a reassessment of how meaningful they are.

Word superiority

In antiquity, texts were commonly redacted in 'scriptura continua', without word boundaries (Saenger 1991). (...) words, unlike speech sounds, are meaningful, and this is what reading is all about. We read not to intone, but to understand.

The minimal coding unit of alphabetic writing systems is smaller than the word, but modern alphabetic texts consist of words divided by spaces, reflecting the intuitive insight that word separation facilitates reading. The reader's general task is to 'search out the sense' that is linguistically encoded. (...) Letters are recognized more quickly and more accurately when presented within words (e.g. input) than in isolation or within pseudowords (e.g. inpat). This finding leads to the concept of a lexicon or mental dictionary against which the visual input is matched. In fluent reading, a visual input is linked to a lexical entry that contains morphological and semantic information such as the part of speech of the word and its meaning.

Stroop (1935) discovered that naming the colour of the ink in which a word is written is delayed when that word is the name of a different colour.

Stroop, J. Ridley. 1935. Studies of interference in serial verbal reactions. Journal of Experimental Psychology 18, 643-62.