2pSC22. The relationship between lip shapes and acoustical characteristics during speech.

Session: Tuesday Afternoon, December 3

Time:


Author: Keisuke Mori
Location: Faculty of Eng., Kumamoto Univ., Kurokami 2-39-1, Kumamoto, 860 Japan
Author: Yorinobu Sonoda
Location: Faculty of Eng., Kumamoto Univ., Kurokami 2-39-1, Kumamoto, 860 Japan

Abstract:

This paper describes recognition tests that jointly use the articulatory behavior of the lips and the acoustic pattern during speech. Lip movements were monitored with a high-speed video system at a frame interval of 5 ms, while speech sounds were simultaneously recorded on a digital recorder. Two parameters, the lip opening area and the width between the lip edges, were obtained as time patterns from lip contours approximated by a fourth-degree polynomial [ICSLP90, 11.6.1 (1990)]. Recognition tests were carried out using the acoustic parameters of the three formant patterns (F1, F2, and F3) as well as the articulatory parameters. The speech materials were nonsense words of the symmetrical form |eCVCe| (V: a, i, u, e, o; C: p, b, m) spoken by four male subjects. A neural network was used in recognition tests identifying (1) the mora (CV), (2) the vowel (V), and (3) the consonant (C) from the time patterns of the articulatory movements and the acoustic signals in --CVC--. The vowel recognition rate using both the articulatory and the acoustic patterns was 91%, higher than that using either pattern alone. In the consonant recognition test, the score using the lip-shape pattern alone was 96%, higher than that using the acoustic and shape patterns combined.
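As a rough illustration of the contour-approximation step described above (a sketch, not the authors' implementation), a fourth-degree polynomial fit and the two derived articulatory parameters might look like this in Python with NumPy; the contour data and the exact parameter definitions are synthetic assumptions:

```python
import numpy as np

# Illustrative sketch only: the abstract derives lip opening area and
# lip-edge width from contours approximated by a fourth-degree
# polynomial [ICSLP90, 11.6.1 (1990)]. The contour below and the exact
# parameter definitions are assumptions made for this example.

# Synthetic lip-opening contour: x runs across the mouth (mm),
# y is the opening height above the lip line (mm).
x = np.linspace(-20.0, 20.0, 41)
y = 8.0 - 0.015 * x**2 - 1.25e-5 * x**4  # zero at x = +/-20

# Fourth-degree polynomial fit: y(x) = c4*x^4 + ... + c0.
coeffs = np.polyfit(x, y, deg=4)
contour = np.poly1d(coeffs)

# Lip-edge width: distance between the contour's real zero crossings.
roots = contour.roots
real_roots = np.sort(roots[np.abs(roots.imag) < 1e-6].real)
width = real_roots[-1] - real_roots[0]               # ~40 mm here

# Lip opening area: integral of the (non-negative) contour height.
dx = x[1] - x[0]
area = np.sum(np.clip(contour(x), 0.0, None)) * dx   # ~224 mm^2 here
```

Evaluating such a fit frame by frame (every 5 ms in the study) would yield the kind of time patterns the recognition tests operate on.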


ASA 132nd meeting - Hawaii, December 1996