ASA 130th Meeting - St. Louis, MO - 1995 Nov 27 .. Dec 01

5aSC7. Investigating the role of specific facial information in audio-visual speech perception.

P. M. T. Smeele

Lisa D. Hahnlen

Erica B. Stevens

Patricia K. Kuhl

Dept. of Speech and Hearing Sci., Univ. of Washington, Seattle, WA 98195

Andrew N. Meltzoff

Univ. of Washington, Seattle, WA 98195

When hearing and seeing a person speak, people receive both auditory and visual speech information. The contribution of visual speech information has been demonstrated in a wide variety of conditions, most clearly when conflicting auditory and visual information is presented. This study investigated which aspects of the face most strongly influence audio-visual speech perception. The visual stimulus was manipulated using special-effects techniques to isolate three specific "articulatory parts": lips only, oral cavity only, or jaw only. These "parts" and their combinations were dubbed with auditory tokens to create "fusion" stimuli (A/aba/ + V/aga/) and "combination" stimuli (A/aga/ + V/aba/). Results indicated that visual information from jaw-only movements was not sufficient to induce illusory effects. In the combination condition, however, seeing moving lips or the inside of the speaker's mouth produced substantial audio-visual effects, and additional visual information from other articulators did not significantly increase the effect. In the fusion condition, both the lips and the oral cavity were necessary to obtain illusory responses; each alone produced very few. The results suggest that visual information from the lips and oral cavity together is sufficient to influence auditory speech processing. [Research supported by NICHD.]