Coarticulation: Concepts

Definition, segments and domain

Typical definitions of coarticulation state either that articulators move simultaneously but for different phonemes, or that phonemes overlap in time; both explicitly imply a belief in some sort of underlying “segment” that finds its physical expression in articulatory behaviour. Indeed, Liberman & Mattingly 1985 insist that some sort of discrete representation is always implied, even by those who would deny it. The classical arguments in favour of discrete underlying segments in this context have been summarized by e.g. Pisoni & Luce 1987 and Löfqvist 1990. Although many authors nowadays seem to prefer not to commit themselves on what such “segments” might be, an abstract definition of the phoneme is adequate for the moment.

Coarticulation as such has been an object of study ever since Menzerath & de Lacerda’s 1933 pioneering investigation of lip movement and nasal airflow, but the phenomenon had been noticed much earlier by Sweet 1877:56, 60-63, who recognized that speech sounds were momentary points “in a stream of incessant change”, connected by inevitable simultaneous transitional on- and off-glides. This remained the accepted paradigm until Joos 1948:104-108 reported different spectra for a vowel phoneme in different consonant environments, which was interpreted as evidence of coarticulation extending beyond the transitions. The Kozhevnikov & Chistovich 1965 syllable model offered an explanation for the simultaneous expression of serially ordered phonemes, although subsequent reports, e.g. Öhman 1966, appeared to contradict this, since the domain of coarticulation was seen to extend into neighbouring syllables, and indeed some investigators reported domains of several syllables.

Incompatible legacies of different scientific traditions have led to controversies concerning the domain of coarticulation, the relation of coarticulation to assimilation, and the nature of coarticulation itself, all epitomized in the debate between Hammarberg 1976, 1982 and Fowler 1980, 1983. Overviews of current work on coarticulation and related theoretical topics have been given by Daniloff & Hammarberg 1973, Kent & Minifie 1977, Kent 1983 and Lindblom 1986. Models proposed for coarticulation have tended to fall into two main classes, depending on whether their driving principle is coproduction or feature-spreading. Opinions also differ as to whether coarticulation is intentional, preplanned input to the speech motor system, or instead the physiological consequence of subcortical control constraints and the mechanical properties of the articulators themselves. Opinions differ further as to how knowledge and memory access are handled: “look-ahead” versions of models must have access to at least a major portion of the current syntagm, while subcortical models are restricted to whatever information is initially passed down about the current segment. Coarticulation research has typically been concerned with topics like how far ahead a phoneme may be initiated, how long it may be kept going, what and where its boundaries are, and in what sense simultaneous phonemes are serially ordered. All this implies that articulatory and acoustic attributes can be singled out, delimited, identified and assigned to their respective phonemes.
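The “look-ahead” idea above can be illustrated with a toy sketch. The following is not any published author’s implementation, merely a minimal illustration of the feature-spreading principle: a feature left unspecified for a segment is filled in anticipatorily from the next segment that does specify it, so that e.g. lip rounding can begin several segments before its vowel. The segment inventory and feature values are invented for the example.

```python
# Toy sketch of "look-ahead" feature-spreading (illustrative only):
# an unspecified feature value (None) is copied leftwards from the
# next segment that specifies it, so the feature is anticipated as
# early as possible; any specified segment blocks further spreading.

def spread_feature(segments, feature):
    """Return a copy of `segments` with unspecified values of
    `feature` filled in anticipatorily from the right."""
    result = [dict(seg) for seg in segments]  # shallow copies, originals untouched
    pending = None  # value waiting to spread leftwards
    for seg in reversed(result):
        if seg.get(feature) is None:
            seg[feature] = pending   # anticipate the upcoming specified value
        else:
            pending = seg[feature]   # specified segment resets the spreading value
    return result

# Hypothetical four-segment sequence: only /u/ is specified [+round],
# /i/ is [-round], the intervening consonants are unspecified.
utterance = [
    {"seg": "i", "round": "-"},
    {"seg": "s", "round": None},
    {"seg": "t", "round": None},
    {"seg": "u", "round": "+"},
]
for seg in spread_feature(utterance, "round"):
    print(seg["seg"], seg["round"])
```

Run on this sequence, the rounding of /u/ spreads back across both consonants but is blocked by the contradictorily specified /i/, which is the kind of extended anticipatory domain the feature-spreading literature debates.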


Investigations of coarticulation have typically covered just one or two articulators (frequently the lips, mandible, tongue blade or velum), exploiting and depending on the technology available at the time, such as electromyography, movement transduction, optical tracking, dynamic palatography, fibrescopy, cinematography, x-ray motion filming (automatic pellet-tracking or manually traced pictures, as here), or the interpretation of acoustic features of the speech wave. Very rarely, if at all, has work been reported on the dynamic coordination of all gestures throughout the supralaryngeal vocal tract.

Articulator gestures

A growing area of interest is the study of the gestures themselves and of their place in phonological theory (e.g. Browman & Goldstein 1989 and Boyce et al. 1990) and in speech perception (e.g. Fowler 1986, Liberman & Mattingly 1985, Stevens & Blumstein 1981).

Cortical, subcortical, preplanned or accidental?

The coproduction approach usually sees coarticulation as a low-level phenomenon, the inevitable physiological consequence of, for example, the intrinsic timing requirements of the gestures involved, due to constraints of the vocal tract (Fowler 1980). In contrast, Liberman et al. 1967 implied high-level control when they emphasized the necessity of restructuring phonemes to overcome the inability of the ear to resolve discrete elements arriving at the rates of phoneme flow customary in speech, or of the articulators to produce distinct gestures at such rates. They suggested that “dividing the load among the articulators allows each to operate at a reasonable pace, and tightening the code keeps the information rate high. It is this kind of parallel processing that makes it possible to get high speed performance with low speed machinery…”. If such restructuring of articulation is indeed part of the encoding process, as they believe, then it should be under close high-level control, i.e. a preplanned and integral part of the programming.

Coproduction models generally emphasize the simultaneous articulation of especially vowels and consonants, but individual instances of these models can be mutually incompatible. For example, Kozhevnikov & Chistovich 1965, chapt. 4, posited that all the several manoeuvres of the (open) syllable can be initiated at once provided there is no antagonism between them (in which case some gestures must be delayed, requiring some measure of preplanning), whereas Öhman 1966 maintained that coarticulation is precisely the result of the summation of (sometimes) antagonistic consonant features superimposed on a continuous diphthongal vowel-to-vowel movement (i.e. unplanned and accidental).
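Öhman’s superposition idea lends itself to a simple numerical sketch. The following is only an illustration of the principle, not Öhman’s actual model or parameters: a single articulator value moves continuously from one vowel target to the next, and a consonant gesture is blended on top of that diphthongal movement with a time-varying weight that peaks mid-utterance. The target values and the raised-sine weight function are invented for the example.

```python
# Toy sketch of superposition in a VCV utterance (illustrative only):
# tongue position = continuous V1-to-V2 movement, with a consonant
# gesture superimposed via a weight w(t) that peaks mid-utterance.

import math

def vcv_trajectory(v1, v2, c, n=11):
    """Articulator position at n time steps for a V1-C-V2 utterance."""
    track = []
    for i in range(n):
        t = i / (n - 1)                        # normalized time, 0..1
        vowel = v1 + (v2 - v1) * t             # continuous diphthongal V-to-V base
        w = math.sin(math.pi * t) ** 2         # consonant weight, 0 at edges, 1 mid-way
        track.append(vowel + w * (c - vowel))  # consonant pulls trajectory toward c
    return track

# Hypothetical positions: front vowel 0.0, back vowel 1.0,
# consonant constriction at 0.3.
for pos in vcv_trajectory(0.0, 1.0, 0.3):
    print(round(pos, 2))
```

The trajectory starts at the first vowel target, reaches the consonant target mid-utterance, and ends at the second vowel target, so the consonant never interrupts the underlying vowel-to-vowel movement; this is the sense in which coarticulation falls out of the summation rather than being planned.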

Boyce, S.E., R.A. Krakow, F. Bell-Berti & C.E. Gelfer. 1990. Converging sources of evidence for dissecting articulatory movements into core gestures. Journal of Phonetics 18, 173-188.
Browman, C.P. & L. Goldstein. 1989. Articulatory gestures as phonological units. Phonology 6, 201-251.
Daniloff, R. & R. Hammarberg. 1973. On defining coarticulation. Journal of Phonetics 1, 239-248.
Fowler, C. 1980. Coarticulation and theories of extrinsic timing. Journal of Phonetics 8, 113-133.
Fowler, C. 1983. Realism and unrealism: a reply. Journal of Phonetics 11, 303-322.
Fowler, C. 1986. An event approach to the study of speech perception from a direct-realist perspective. Journal of Phonetics 14, 3-28.
Hammarberg, R. 1976. The metaphysics of coarticulation. Journal of Phonetics 4, 353-363.
Hammarberg, R. 1982. On redefining coarticulation. Journal of Phonetics 10, 123-137.
Joos, M. 1948. Acoustic phonetics. Language Monograph 23; supplement to Language vol. 24.
Kent, R.D. 1983. The segmental organization of speech. In P.F. MacNeilage (ed), The Production of Speech, chapt. 4. New York: Springer.
Kent, R.D. & F.D. Minifie. 1977. Coarticulation in recent speech production models. Journal of Phonetics 5, 115-133.
Kozhevnikov, V.A. & L.A. Chistovich. 1965. Speech, articulation and perception. Washington: Joint Publications Research Service.
Liberman, A.M., F. Cooper, D. Shankweiler & M. Studdert-Kennedy. 1967. Perception of the speech code. Psychological Review 74, 431-461.
Liberman, A.M. & I.G. Mattingly. 1985. The motor theory of speech perception revised. Cognition 21, 1-36.
Lindblom, B.E.F. (ed). 1986. Speech processes in the light of event perception and action theory. Journal of Phonetics 14.
Löfqvist, A. 1990. Speech as audible gestures. In W.J. Hardcastle & A. Marchal (eds), Speech production and speech modelling, 289-322. Dordrecht: Kluwer.
Menzerath, P. & A. de Lacerda. 1933. Koartikulation, Steuerung und Lautabgrenzung. Phonetische Studien 1. Berlin: Dümmler.
Öhman, S. 1966. Coarticulation in VCV utterances: spectrographic measurements. Journal of the Acoustical Society of America 39, 151-168.
Pisoni, D.B. & P.A. Luce. 1987. Acoustic-phonetic representations in word recognition. In U.H. Frauenfelder & L. Komisarjevsky Tyler (eds), Spoken word recognition, 21-52; Cognition Special Issues. M.I.T. Press.
Stevens, K.N. & S. Blumstein. 1981. The search for invariant acoustic correlates of phonetic features. In P.D. Eimas & J.L. Miller (eds), Perspectives on the study of speech, 1-38. New Jersey: Erlbaum.
Sweet, H. 1877. Handbook of phonetics. Oxford: Clarendon.
©Sidney Wood and SWPhonetics, 1994-2012