ASA 128th Meeting - Austin, Texas - 1994 Nov 28 .. Dec 02

3pSP1. Recognition of speech separated from acoustic mixtures.

Martin Cooke

Phil Green

Dept. of Comput. Sci., Univ. of Sheffield, 211 Portobello St., Sheffield S1 4DU, U.K.

A perceptually plausible solution to the problem of automatic recognition of speech in arbitrary noise backgrounds involves computational auditory scene analysis (ASA) followed by recognition of the separated patterns. However, it is not generally possible to recover a complete representation of individual acoustic sources, so an approach is needed that can recognize partial descriptions. Suitable modifications to the powerful stochastic framework of hidden Markov models (HMMs) have recently been described [M. P. Cooke, P. D. Green, and M. D. Crawford, Proc. Int. Conf. Spoken Language Processing (1994)]. The studies reported here demonstrate HMM-based digit recognition in noise. An auditory-nerve firing-rate representation undergoes auditory scene analysis, producing a mask of time-frequency locations where the speech is dominant. Each mask frame defines a marginal distribution for the HMM probability calculation. Results show robust performance even when most of the mask's elements are removed. Further, these studies suggest a solution to the F0-sensitivity problem that arises when matching auditory representations of speech in which F1 is represented by a set of resolved harmonics. The new approach ensures that the matching process operates on a partial description consisting largely of harmonic peaks.
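The per-frame marginal probability described above can be illustrated with a short sketch. Assuming diagonal-covariance Gaussian state output distributions (an assumption on our part; the abstract does not specify the emission densities), the time-frequency channels the mask marks as unreliable are integrated out, leaving a product of univariate Gaussians over the speech-dominant channels only. Function and variable names below are hypothetical, not the authors' implementation.

    import numpy as np

    def frame_log_likelihood(obs, mask, means, variances):
        """Log-likelihood of one observation frame under a diagonal-Gaussian
        HMM state output density, marginalizing over masked-out channels.

        obs       : (D,) auditory firing-rate features for this frame
        mask      : (D,) boolean array, True where speech dominates
        means     : (D,) state mean vector
        variances : (D,) state variance vector
        """
        reliable = np.asarray(mask, dtype=bool)
        if not reliable.any():
            # No reliable evidence in this frame: its contribution is flat.
            return 0.0
        x = obs[reliable]
        mu = means[reliable]
        var = variances[reliable]
        # Product of univariate Gaussians over reliable channels only;
        # the unreliable channels are marginalized (integrated out).
        return float(np.sum(-0.5 * (np.log(2.0 * np.pi * var)
                                    + (x - mu) ** 2 / var)))

    # Example: a 32-channel frame with roughly half its channels masked out.
    rng = np.random.default_rng(0)
    obs = rng.random(32)
    mask = rng.random(32) > 0.5
    print(frame_log_likelihood(obs, mask, np.full(32, 0.5), np.full(32, 0.05)))

Because each frame's likelihood is computed only over the channels present in the mask, the same state models can be used whether the mask is nearly complete or heavily eroded, which is consistent with the robustness to mask element removal reported above.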