ASA 127th Meeting M.I.T. 1994 June 6-10

5pSP3. The contribution of a reduced visual image to speech perception in noise.

Jennifer A. Johnson

Lawrence D. Rosenblum

Helena M. Saldaña

Dept. of Psychol., Univ. of California, Riverside, CA 92521

It has long been known that seeing a talker's face can improve the perception of speech in noise [A. MacLeod and Q. Summerfield, Br. J. Audiol. 21, 131--141 (1987)]. Yet little is known about which characteristics of the face are useful for supplementing the degraded signal. Recently, a point-light technique has been adopted to help isolate the salient aspects of a visible articulating face [Saldaña et al., J. Acoust. Soc. Am. 92, 2340(A) (1992)]. In this technique, a speaker's face is darkened and reflective dots are arranged on the lips, teeth, tongue, cheeks, and jaw. The speaker is videotaped talking in the dark so that, when the tape is shown to subjects, only the moving dots are visible. To determine whether these reduced images could contribute to the perception of degraded speech, noise-embedded sentences were dubbed onto point-light images at various signal-to-noise ratios. It was found that these images could improve comprehension, depending on the number and location of the points used. Implications of these results for theories of audiovisual integration, models of lip-reading, and telecommunications systems will be discussed. [Work supported by NSF.]