
Re: effect of phase on pitch



                                            February 5, 1998


Richard Parncutt wrote a very interesting discussion / essay
on "phase deafness", in which he seems to draw a distinction
between artificial and natural sounds:

> In an ecological approach, the existence of phase sensitivity in such
> stimuli (or such comparisons between stimuli) might be explained as follows.
> These stimuli (or stimulus comparisons) do not normally occur in the human
> environment. So the auditory system has not had a chance to 'learn' (e.g.,
> through natural selection) to ignore the phase effects. As hard as the ear
> might 'try' to be phase deaf in the above cases, some phase sensitivity will
> always remain, for unavoidable physiological reasons.

I have a complex-sound generating application, so far based on
the assumption that phases may be neglected: all phases are
random. Moreover, the sound components are normally not
harmonic, so any momentary phase relations will change over
time. However, these sounds, derived from spectrographic
synthesis of environmental images rather than spectrographic
(re)synthesis of spectrograms, definitely "do not normally
occur in the human environment," and they involve both
"tens of ms" bursts and sounds of much longer duration. So,
should I have tried to exploit phase sensitivity by enforcing
certain (e.g., short-term) phase relations, or not? Or should
I hope (in vain?) that people can "un-learn" to hear most of
the phase effects, if any? Any advice?

See

   http://ourworld.compuserve.com/homepages/Peter_Meijer/winvoice.htm

for the video sonification application I refer to.

In other words, my question is how to optimize auditory
perception / resolution of complex information-carrying sounds,
and whether I should "do something" with phases or not.
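
One concrete, well-known way of "doing something" with phases,
at least for harmonic complexes (which my sounds mostly are
not), is Schroeder's phase rule, in one common form
phi_n = -pi*n*(n-1)/N, which flattens the temporal envelope of
a flat-spectrum complex. A purely illustrative sketch:

   # Purely illustrative (Python / NumPy): Schroeder (1970)
   # phases vs. sine (zero) phases for a flat harmonic complex.
   # Not something my application does; parameters arbitrary.
   import numpy as np

   fs, f0, N = 44100, 100.0, 30            # arbitrary example values
   t = np.arange(int(0.1 * fs)) / fs       # 100 ms of signal
   n = np.arange(1, N + 1)                 # harmonic numbers 1..N

   schroeder = -np.pi * n * (n - 1) / N    # Schroeder's phase rule

   def complex_tone(phases):
       return sum(np.sin(2 * np.pi * k * f0 * t + p)
                  for k, p in zip(n, phases))

   for name, ph in [("sine     ", np.zeros(N)),
                    ("Schroeder", schroeder)]:
       x = complex_tone(ph)
       crest = np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))
       print(name, "phases, crest factor: %.2f" % crest)

Whether anything analogous could help for inharmonic,
time-varying spectra is exactly what I am unsure about.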

There is non-evolutionary survival value at stake here.

Best wishes,

Peter Meijer