4pSC11. Affine transformation-based feature normalization for speaker-independent speech recognition.

Session: Thursday Afternoon, December 5

Author: Ping Luo
Location: Dept. of Comput. Sci. and Information Mathematics, Univ. of Electro-Commun., 1-5-1 Chofugaoka, Chofu, Tokyo, 182 Japan
Author: Kazuhiko Ozeki
Location: Dept. of Comput. Sci. and Information Mathematics, Univ. of Electro-Commun., 1-5-1 Chofugaoka, Chofu, Tokyo, 182 Japan

Abstract:

In statistical speech recognition, speaker-independent models are usually trained on speech samples from a large number of speakers. A drawback of such models is that their feature distributions are wider, and hence overlap more between different phones, than those of adequately trained speaker-dependent models. To cope with this interspeaker variability, a method of speech feature normalization based on affine transformation has been presented [P. Luo and K. Ozeki, Tech. Rep. of IEICE, SP96-10 (1996)]. Prior to HMM training, the feature vectors of each speaker are mapped to those of a reference speaker by an affine transformation estimated from a small amount of training data. The transformation, which is phone independent and speaker dependent, is also applied to the feature vectors of unknown speakers in the recognition stage. It has been shown experimentally that this method is effective in reducing interspeaker variation in the cepstral domain. In this paper, the performance and limitations of the method are discussed further within the framework of continuous HMMs. Practical issues related to the selection of an appropriate reference speaker are also discussed.
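
As a concrete illustration of the normalization step, the sketch below estimates a phone-independent, speaker-dependent affine map y ≈ Ax + b by least squares from a small set of paired cepstral frames of a new speaker (X) and of the reference speaker (Y), and applies it to the new speaker's features before HMM training or recognition. The frame pairing (e.g., by time-aligning parallel utterances) and the least-squares criterion are illustrative assumptions; the abstract does not specify how the transformation is estimated.

    import numpy as np

    def estimate_affine_transform(X, Y):
        # X: (N, d) cepstral vectors of the speaker to be normalized.
        # Y: (N, d) corresponding cepstral vectors of the reference speaker.
        # Returns A (d, d) and b (d,) such that Y ~= X @ A.T + b.
        N, d = X.shape
        # Augment X with a bias column so [A | b] is found in one least-squares solve.
        X_aug = np.hstack([X, np.ones((N, 1))])
        W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)  # W has shape (d + 1, d)
        A = W[:d].T
        b = W[d]
        return A, b

    def normalize(X, A, b):
        # Apply the speaker-dependent, phone-independent transform to all frames.
        return X @ A.T + b

In the training stage, the transform of each training speaker would be estimated against the chosen reference speaker and applied to that speaker's data before the HMMs are trained; in the recognition stage, the same estimation would be carried out for each unknown speaker from a small amount of adaptation data.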


ASA 132nd meeting - Hawaii, December 1996