ASA 129th Meeting - Washington, DC - 1995 May 30 .. Jun 06

1aSC38. Adding articulatory features to acoustic features for automatic speech recognition.

Igor Zlokarnik

Los Alamos Natl. Lab., CIC-3, MS B256, Los Alamos, NM 87545

A hidden-Markov-model (HMM) based speech recognition system was evaluated that makes use of simultaneously recorded acoustic and articulatory data. The articulatory measurements were gathered by means of electromagnetic articulography and describe the movement of small coils fixed to the speakers' tongue and jaw during the production of German V1CV2 sequences [P. Hoole and S. Gfoerer, J. Acoust. Soc. Am. Suppl. 1 87, S123 (1990)]. Using the coordinates of the coil positions as an articulatory representation, acoustic and articulatory features were combined to make up an acoustic-articulatory feature vector. The discriminant power of this combined representation was evaluated for two subjects on a speaker-independent isolated word recognition task. When the articulatory measurements were used both for training and testing the HMMs, the articulatory representation reduced the error rate of comparable acoustic-based HMMs by more than 60% relative. In a separate experiment, the articulatory movements during the testing phase were estimated using a multilayer perceptron that performed an acoustic-to-articulatory mapping. Under these more realistic conditions, in which articulatory measurements are available only during training, the error rate was reduced by 18% to 25% relative.
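The two experimental conditions above can be sketched in a few lines: per-frame acoustic features are concatenated with articulatory coil coordinates to form the combined feature vector, and at test time the articulatory stream may instead be estimated from the acoustics by a multilayer perceptron. This is an illustrative sketch only; the dimensions, feature choices, and (untrained, random) network weights are assumptions, not values from the paper.

```python
import numpy as np

def combine_features(acoustic, articulatory):
    """Concatenate frame-synchronous acoustic and articulatory features.

    acoustic:     (T, Da) array, e.g. cepstral coefficients per frame
    articulatory: (T, Dr) array, e.g. x/y coordinates of tongue and jaw coils
    returns:      (T, Da + Dr) acoustic-articulatory feature vectors
    """
    if acoustic.shape[0] != articulatory.shape[0]:
        raise ValueError("feature streams must be frame-synchronous")
    return np.hstack([acoustic, articulatory])

def mlp_map(acoustic, W1, b1, W2, b2):
    """One-hidden-layer perceptron mapping acoustic frames to estimated
    articulatory coordinates. In the paper the network is trained on the
    simultaneously recorded data; weights here are random placeholders."""
    hidden = np.tanh(acoustic @ W1 + b1)
    return hidden @ W2 + b2

rng = np.random.default_rng(0)
T, Da, Dr, H = 100, 12, 8, 32   # frames, acoustic dim, 4 coils x 2 coords, hidden units
acoustic = rng.standard_normal((T, Da))

# Training condition: measured articulatory data is available.
measured = rng.standard_normal((T, Dr))
train_vectors = combine_features(acoustic, measured)

# Testing condition: articulatory trajectories estimated from acoustics.
W1, b1 = rng.standard_normal((Da, H)), np.zeros(H)
W2, b2 = rng.standard_normal((H, Dr)), np.zeros(Dr)
estimated = mlp_map(acoustic, W1, b1, W2, b2)
test_vectors = combine_features(acoustic, estimated)

print(train_vectors.shape, test_vectors.shape)
```

Either way the HMMs see feature vectors of the same combined dimensionality; only the provenance of the articulatory half differs between training and testing.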