ASA 129th Meeting - Washington, DC - 1995 May 30 .. Jun 06

2aSC2. Measures of auditory-visual integration.

Ken W. Grant

John L. Clay

Brian E. Walden

Walter Reed Army Med. Ctr., Army Audiol. and Speech Ctr., Washington, DC 20307-5001

The easiest and most effective way to improve speech recognition for hearing-impaired individuals, or for normal-hearing individuals listening in noisy or reverberant environments, is to have them watch the talker's face. Auditory-visual (AV) speech recognition has been shown consistently to be better than either hearing alone or speechreading alone for all but the most profoundly hearing-impaired individuals. When AV recognition is less than perfect, several factors need to be considered. The most obvious of these are poor auditory (A) and poor visual (V) speech recognition skills. However, even when differences in unimodal skill levels are taken into account, differences among individual AV recognition scores persist. At least part of these individual differences may be attributable to differing abilities to integrate A and V cues. Unfortunately, there is no widely accepted measure of AV integration ability. Recent models of AV integration offer a quantitative means for estimating individual integration abilities for phoneme recognition. In this study, we compare several possible integration measures, along with model predictions, using both congruent and discrepant AV phoneme and sentence recognition tasks. This talk will address the need for independent measures of AV integration for individual subjects. [Work supported by NIH Grant DC00792.]
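The abstract does not name the specific integration models compared, but one widely cited quantitative model of the kind described is the Fuzzy Logical Model of Perception (FLMP), which predicts bimodal response probabilities as the normalized product of the unimodal (A-alone and V-alone) response probabilities. As a hedged illustration of how such a model yields AV predictions from unimodal confusion data, a minimal sketch (the function name and example probabilities are illustrative, not from the abstract):

```python
def flmp_predict(p_audio, p_visual):
    """Predict AV response probabilities from unimodal probabilities.

    p_audio, p_visual: per-category response probabilities from the
    auditory-alone and visual-alone conditions (same length, each
    summing to 1). FLMP-style prediction: elementwise products of the
    unimodal probabilities, renormalized to sum to 1.
    """
    products = [a * v for a, v in zip(p_audio, p_visual)]
    total = sum(products)
    if total == 0:
        # Degenerate case: no category supported by both modalities.
        n = len(products)
        return [1.0 / n] * n
    return [p / total for p in products]

# Illustrative example: audition weakly favors /ba/, vision strongly
# favors /da/; the model predicts an integrated percept dominated by /da/.
p_a = [0.6, 0.4]   # P(/ba/), P(/da/) from audition alone
p_v = [0.1, 0.9]   # P(/ba/), P(/da/) from vision alone
print(flmp_predict(p_a, p_v))
```

Comparing observed AV scores against such model predictions, given a subject's unimodal scores, is one way to separate integration ability from unimodal skill, which is the kind of independent integration measure the abstract calls for.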