Re: Uncertainty principle debate



Greetings Alwan,

Why not consider Figs. 1 and 10 of your 1993 paper an indication of a
transition from cochlear resolution, which extends down to 10 and 20 ms and
depends strongly on both bandwidth and duration, towards resolution by the
brain at larger durations? In the latter case, I do not expect bandwidth
and duration to matter much.

Incidentally, in your earlier paper I twice found the word "spectogram". Was
this just a typo, or did you mean something other than a spectrogram?
Admittedly, my English is very shaky.

Concerning the question by Ramdas Kumaresan about cochlear delay, I would
like to add that polarity makes a clearly audible difference, in particular
below 200 Hz. This difference can be seen in data by Nelson Kiang, Yidao
Cai, Mario Ruggero, and many others, as well as in the natural spectrogram.
Of course, there is considerable variability in the CF map among animals
with or without an acoustic fovea. Also, at least 0.1 ms of synaptic delay
plus, at very low SPL, a few "warm-up" cycles of the OHC motors have to be
taken into account. Nonetheless, cochlear delay seems to depend roughly on
the reciprocal of CF, as given by Steven Greenberg (see the little sketch
below). This is not just a plausible and strong additional argument in
favor of regarding the traveling wave as an epiphenomenon; as Armand Dancer
stated, it also simplifies modeling of the cochlea.
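
Just to illustrate the order of magnitude, here is a minimal Python sketch
of such a reciprocal rule. The one-cycle constant and the fixed 0.1 ms
synaptic term are my own rough assumptions for illustration, not values
taken from Greenberg's data:

# Assumed reciprocal rule: cochlear delay ~ k / CF plus a fixed synaptic
# delay; k = 1.0 cycle is an illustrative guess, not a fitted constant.
def approx_cochlear_delay_ms(cf_hz, k_cycles=1.0, synaptic_delay_ms=0.1):
    return 1000.0 * k_cycles / cf_hz + synaptic_delay_ms

for cf in (100, 200, 500, 1000, 4000):
    print(f"CF = {cf:5d} Hz  ->  approx. delay = "
          f"{approx_cochlear_delay_ms(cf):.2f} ms")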

Therefore I would not entirely exclude the possibility that phonetics would
immediately benefit a little from replacing the traditional spectrogram
with the "natural" one, in particular when dealing not just with bilabial
plosives but also with dental, alveolar, retroflex, palatal, glottal, and
velar ones, with implosives, and with ejectives. However, my main concern
is a sound basis for understanding the dominant mechanism of hearing as a
cepstrum-like joint analysis, involving both the fundamental cochlear
frequency analysis and, based on it, a second analysis within the brain (a
small sketch follows below the links). Your preference for a place/rate
code is not very convincing to me because it cannot account for the wealth
of audible nuances, robustness against noise, etc.
A lot of evidence for the second analysis was given by Gerald Langner:
http://www.swets.nl/JNMR/vol26_2.html #Langner26.2
http://eos.bio.tu-darmstadt.de/aglangner/langner.html
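
To make clearer what I mean by a cepstrum-like joint analysis, here is a
rough Python sketch: a first spectral analysis, followed by a second
analysis of its logarithm, from which the fundamental period can be read
off. The classical real cepstrum is of course only a crude stand-in for
cochlea plus brain, and the sampling rate and test tone are arbitrary
choices of mine:

import numpy as np

fs = 16000                                   # assumed sampling rate in Hz
t = np.arange(0, 0.1, 1.0 / fs)
f0 = 125.0                                   # arbitrary test fundamental
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))  # harmonic complex

# First analysis: magnitude spectrum (crude stand-in for cochlear filtering).
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))

# Second analysis: spectrum of the log spectrum, i.e. the real cepstrum.
cepstrum = np.abs(np.fft.irfft(np.log(spectrum + 1e-12)))

# The dominant cepstral peak beyond 1 ms should mark the period 1/f0 = 8 ms.
quefrency_s = np.arange(len(cepstrum)) / fs
lo = int(0.001 * fs)
peak = lo + np.argmax(cepstrum[lo:len(cepstrum) // 2])
print(f"estimated period: {1000 * quefrency_s[peak]:.2f} ms "
      f"(expected {1000 / f0:.2f} ms)")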

As for the uncertainty, you and others are quite right: hearing performs
multiple recognitions in parallel. I guess the perception of different
autocorrelation lags, seemingly 'at a time', largely relates to the brain's
prudent inability to resolve its internal frequencies much in excess of
40 Hz.
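
If it helps, the following little Python sketch shows what I mean by lags:
the autocorrelation of a periodic sound peaks at its period, and an
internal resolution limit near 40 Hz would correspond to keeping apart only
lags that differ by roughly 1/40 s = 25 ms. The 100 Hz test sound and the
sampling rate are arbitrary choices of mine:

import numpy as np

fs = 16000
t = np.arange(0, 0.2, 1.0 / fs)
x = np.sign(np.sin(2 * np.pi * 100.0 * t))   # arbitrary 100 Hz test sound

# Normalized autocorrelation for lags up to 30 ms.
max_lag = int(0.03 * fs)
acf = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag)])
acf /= acf[0]

# The dominant lag beyond 2 ms falls at the 10 ms period of the sound.
lags_ms = 1000 * np.arange(max_lag) / fs
start = int(0.002 * fs)
print("dominant lag:", lags_ms[start + np.argmax(acf[start:])], "ms")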

Regards,
Eckard