ASA 126th Meeting, Denver, 1993 October 4-8

5pSP3. Evaluating the articulation index for auditory-visual consonant recognition.

Ken W. Grant and Brian E. Walden

Walter Reed Army Med. Ctr., Army Audiol. and Speech Ctr., Washington, DC 20307-5001

The ANSI standard for calculating the articulation index [ANSI S3.5-1969 (R1986)] includes a procedure for estimating the effects of visual cues on speech intelligibility. This procedure assumes that listening conditions with the same auditory articulation index (AI_A) will have the same auditory-visual AI (AI_AV), regardless of the spectral composition of the signal. In contrast, other studies have suggested that the redundancy between auditory (A) and visual (V) speech cues might be a better predictor of AV performance than either AI_A or the overall (e.g., percent-correct) auditory recognition score. In the present study, the ANSI procedure is evaluated by measuring A, V, and AV consonant recognition under a variety of signal-to-noise ratio and bandpass-filtered speech conditions. The results indicate that auditory conditions having the same AI_A do not necessarily result in the same AI_AV, and that low-frequency bands of speech tend to provide more benefit to speechreading than high-frequency bands. Analyses of the auditory error patterns produced by the different filter conditions showed a strong negative correlation between the degree of A and V redundancy and the AV benefit obtained. These data indicate that the ANSI procedure is inadequate for predicting AV consonant recognition performance under conditions of severe spectral shaping. [Work supported by NIH Grant No. DC00792.]
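For context, the auditory AI underlying this comparison is, in essence, an importance-weighted sum of per-band audibilities. The short Python sketch below is a simplified illustration, not the ANSI S3.5-1969 procedure itself: the five equal band weights, the band layout, and the linear -12 to +18 dB audibility mapping are illustrative assumptions. It shows how a low-pass and a high-pass condition can produce identical AI_A values; under the ANSI visual-cue correction, both would then be assigned the same AI_AV, which is exactly the assumption the study tests.

    def articulation_index(band_snr_db, band_importance):
        """Importance-weighted sum of per-band audibilities.

        Each band's SNR is mapped linearly onto [0, 1] audibility
        over a 30-dB range (-12 to +18 dB), a common AI convention;
        the actual ANSI tables add corrections (e.g., masking spread,
        level effects) omitted here.
        """
        ai = 0.0
        for snr_db, weight in zip(band_snr_db, band_importance):
            audibility = min(max((snr_db + 12.0) / 30.0, 0.0), 1.0)
            ai += weight * audibility
        return ai

    # Hypothetical five-band example with equal importance weights.
    weights = [0.20] * 5

    # Mirrored SNR profiles: a low-pass condition (only the two lowest
    # bands audible) and a high-pass condition (only the two highest).
    lowpass_snr = [18, 18, -12, -12, -12]    # dB SNR, low to high band
    highpass_snr = [-12, -12, -12, 18, 18]

    print(articulation_index(lowpass_snr, weights))   # 0.40
    print(articulation_index(highpass_snr, weights))  # 0.40 -> same AI_A

Because both conditions receive the same AI_A, the ANSI correction assigns them the same AI_AV. The abstract's data contradict this: the low-frequency bands provide more speechreading benefit, consistent with the redundancy finding, presumably because low-frequency audition conveys voicing and manner cues that are complementary to, rather than redundant with, the cues available through speechreading.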