Re: About importance of "phase" in sound recognition
Early experiments had suggested that the ear is "phase deaf", but those experiments (over 100 years ago!) were conducted with simple, mechanically produced tones.
However, if you randomize the phases of the partials in a speech recording, you will find that the perceptual characteristics change slightly. The speech becomes more "artificial", as if the voice were less immediate and farther away from the microphone, possibly because the glottal pulses are no longer aligned. The ear is very sensitive to the characteristics of human speech. I found that the effect is strong for low-pitched male voices (around 80 Hz), probably because the glottal pulses are very distinct, almost to the point that you can hear them individually. For female voices, the effect in my tests was much smaller. For a piano tone, there was no perceptual difference to me. Unfortunately, I cannot recommend any publications, and I'm not sure whether observations like this are regarded as common knowledge in the community.
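For anyone who wants to try this themselves, here is a minimal sketch of one way to do the phase randomization described above. Note the assumptions: it randomizes the phase of every FFT bin of the whole signal (rather than tracking individual partials), it uses numpy, and the function name is mine. The magnitude spectrum is kept intact, so only the phase information is destroyed.

```python
import numpy as np

def randomize_phases(signal, rng=None):
    """Keep the magnitude spectrum of a real signal but replace
    every bin's phase with a uniformly random one."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.rfft(signal)
    magnitudes = np.abs(spectrum)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
    # DC (and Nyquist, for even lengths) must stay real so that
    # the inverse transform yields a real-valued signal.
    phases[0] = 0.0
    if len(signal) % 2 == 0:
        phases[-1] = 0.0
    return np.fft.irfft(magnitudes * np.exp(1j * phases), n=len(signal))
```

On speech, applying this to the whole file (or to long blocks) smears the glottal pulses in time, which is presumably what produces the "distant" quality described above; on a steady piano tone the magnitude spectrum carries almost all of the percept, so little changes.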
Whether this really matters much for the distinction of phonemes, I have no idea. Probably not that much.
On 05.10.2010 at 17:23, emad burke wrote:
> Dear List,
> I've been confused about the role of "phase" information in the sound signal (e.g. speech) in speech recognition and, more generally, in human perception of audio signals. I've been reading conflicting arguments and publications regarding the extent of the importance of phase information. If there is a border between short- and long-term phase information that clarifies this extent of importance, can anybody please point me to a convincing reference in that respect? In summary, I just want to know what the consensus in the community is about the role of phase in speech recognition, if there is any consensus at all.