ASA 124th Meeting, New Orleans, October 1992

1aSP12. The transmission of prosodic information via selected spectral regions of speech.

Ken W. Grant

Brian E. Walden

Walter Reed Army Med. Ctr., Army Audiol. and Speech Ctr., Washington, DC 20307-5001

In a recent article [K. W. Grant and L. D. Braida, J. Acoust. Soc. Am. 89, 2952--2960 (1991)] it was demonstrated that spectrally different bands of speech with equal articulation index (AI) scores provided approximately equal auditory-visual sentence recognition when combined with speechreading. Given that different parts of the frequency spectrum provide different segmental cues for consonant and vowel recognition, current models of auditory-visual integration [e.g., L. D. Braida, Q. J. Exp. Psychol. 43A, 647--677 (1991)] would predict that some spectral regions of speech are more complementary to speechreading than others. This raises the possibility that the findings of Grant and Braida are attributable, at least in part, to suprasegmental cues that were transmitted equally well by the different spectral bands tested. To test this possibility, the identification of syllable number, syllable stress, sentence intonation, and phrase boundary location was assessed under six approximately equal-AI filter conditions similar to those evaluated by Grant and Braida. The results indicate that syllable number and syllable stress are perceived best through high-frequency bands, intonation is perceived best through low-frequency bands, and phrase boundary location is perceived equally well throughout the speech spectrum. These results are discussed in terms of the importance of different spectral regions of speech for the recognition of suprasegmental cues, and how this may relate to overall speech intelligibility. [Work supported by NIH Grant No. DC00792.]
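The notion of "equal-AI" filter bands underlying this design can be illustrated with a minimal sketch. The articulation index is, in ANSI S3.5 style, a weighted sum of band audibilities, so two spectrally different filters are equal-AI when the importance weights of their passbands sum to the same value. The band edges and importance weights below are illustrative placeholders, not the values used by Grant and Braida.

```python
# Illustrative sketch of the articulation-index (AI) computation:
# AI = sum over frequency bands of (band importance weight x audibility),
# with audibility clamped to [0, 1]. Weights here are hypothetical.

band_edges_hz = [200, 400, 800, 1600, 3200, 6300]   # hypothetical band edges
importance = [0.15, 0.15, 0.40, 0.15, 0.15]         # hypothetical weights, sum = 1.0

def articulation_index(audibility):
    """Weighted sum of per-band audibility (each clamped to [0, 1])."""
    assert len(audibility) == len(importance)
    return sum(w * min(max(a, 0.0), 1.0)
               for w, a in zip(importance, audibility))

# A low-pass filter passing only the two lowest bands at full audibility
# and a high-pass filter passing only the two highest bands are
# spectrally different but carry the same AI under these weights:
low_ai = articulation_index([1, 1, 0, 0, 0])   # 0.15 + 0.15 = 0.30
high_ai = articulation_index([0, 0, 0, 1, 1])  # 0.15 + 0.15 = 0.30
```

Under this kind of metric, segmentally different filters can be matched on predicted intelligibility, which is what makes it possible to ask whether the matched bands also transmit suprasegmental cues equally well.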