ASA 126th Meeting Denver 1993 October 4-8

4aSP4. GEST: A computational model of speech production using dynamically defined articulatory gestures.

Catherine P. Browman, Louis Goldstein, Elliot Saltzman, and Philip E. Rubin

Haskins Labs., 270 Crown St., New Haven, CT 06511

A computational model of speech production that generates speech for English utterances using dynamically defined articulatory gestures will be described. The model comprises three submodels. The first, the linguistic gestural model, generates a gestural score specifying the identity of the gestures involved in the desired utterance and the relations among them. This gestural score is input to the second, the task dynamic model [e.g., Saltzman and Munhall (1989)], which generates the movements of the various model speech articulators. These movements serve, in turn, as input to the third, the vocal tract model [Rubin et al. (1981)], which determines the resulting area functions and acoustic signal. Attention will be focused on the integrated system and on the first submodel, since the second and third have been described elsewhere.
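The three-stage pipeline described above can be sketched in code. This is a toy illustration only: every class, function, and parameter below is hypothetical, and the stub dynamics stand in for the actual Haskins models.

```python
# Hypothetical sketch of the GEST three-stage pipeline; all names and
# dynamics are illustrative assumptions, not the actual implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Gesture:
    """One articulatory gesture in a gestural score."""
    tract_variable: str   # e.g., "lip aperture" (illustrative label)
    target: float         # dynamical target for the tract variable
    onset: float          # activation interval (arbitrary time units)
    offset: float

def linguistic_gestural_model(utterance: str) -> List[Gesture]:
    """Stage 1: map an utterance to a gestural score.
    (Stub: a real model derives gestures from phonological structure.)"""
    return [Gesture("lip aperture", -2.0, 0.0, 0.1),
            Gesture("tongue-body constriction", 8.0, 0.05, 0.25)]

def task_dynamic_model(score: List[Gesture],
                       n_steps: int = 50) -> List[List[float]]:
    """Stage 2: convert the gestural score into articulator movements,
    here approximated by first-order relaxation toward each target."""
    trajectories = []
    for g in score:
        x, traj = 0.0, []
        for _ in range(n_steps):
            x += 0.2 * (g.target - x)   # toy point-attractor dynamics
            traj.append(x)
        trajectories.append(traj)
    return trajectories

def vocal_tract_model(trajectories: List[List[float]]) -> List[float]:
    """Stage 3: derive area functions (and, in the real model, the
    acoustic signal); this stub just returns final articulator positions."""
    return [traj[-1] for traj in trajectories]

# Running the pipeline end to end:
score = linguistic_gestural_model("pea")
movements = task_dynamic_model(score)
areas = vocal_tract_model(movements)
```

The point of the sketch is the data flow: each submodel consumes the previous one's output, so the gestural score alone fully specifies the eventual articulator movements and acoustics.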