Re: place pitch and temporal pitch
As an engineer working in signal processing, I would like to second
Martin Braun's endorsement of Oxenham (p. 1425): "place-time transformations".
> ... one should look for adaptation at a much higher
> level of statistical structure (e.g. Nelken et al., Nature 1999).
Engineers have to learn from the structure of the brain rather than vice versa.
However, all neuronal function is bound to physical principles. Please
correct me if your 'statistical structure' is not clearly restricted to
such principles.
> Regarding a previous comment by Christian Kaernbach:
>> That is an important achievement of our brain: To present to us as
>> unitary perception what is deduced from different cues.
Christian misuses this truism. I would rather trust David Poeppel, who
recently compared perception to a kind of active sampling of temporal
excitation patterns.
>> Another example would be spatial hearing: There are intensity differences,
>> interaural time (!) differences, spectral (!) filtering by the outer ear,
>> and even cues due to involuntary small head movements that interact
>> perfectly so as to give a single percept of stereolocation.
ILDs, ITDs, and what Benedikt Grothe calls interaural phase differences are
definitely temporal cues. The common language of tactile, visual, and other
stimuli is perhaps also a temporal pattern. Why should we take the
traditional notion of spectral pitch as gospel if there are indications of a
retranslation of place into a temporal pattern? Was there any reason for
evolution not to use this favorable option? I see Grothe's term 'phase
difference' as a symptom of a deeply rooted fallacy that imagines hearing as
a complex Fourier transform. The natural spectrogram is based on the Fourier
cosine transform; it differs from the traditional FT-based spectrogram in
that it does not split the signal into magnitude and phase and then discard
the latter, but instead provides a temporal frequency pattern.
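To make that distinction concrete, here is a minimal sketch of my own (the
toy signal, sample length, and frequencies are assumptions, not from any of
the cited work): a magnitude-only FT frame discards phase, so a time-shifted
signal looks identical, whereas a Fourier cosine transform (DCT-II) frame
keeps real-valued, signed coefficients and is exactly invertible.

```python
import numpy as np

# Assumed toy frame: two sinusoids, N = 256 samples.
N = 256
n = np.arange(N)
x = np.sin(2 * np.pi * 11 * (n + 0.5) / N) + 0.5 * np.cos(2 * np.pi * 23 * (n + 0.5) / N)

# Traditional route: complex FT, keep |X| only. Phase is thrown away,
# so a circularly time-shifted copy yields the identical magnitude pattern.
mag_x = np.abs(np.fft.rfft(x))
x_shifted = np.roll(x, 17)
print(np.allclose(mag_x, np.abs(np.fft.rfft(x_shifted))))  # True

# Cosine-transform route: real coefficients with sign retained.
k = n[:, None]
basis = np.cos(np.pi * (n + 0.5) * k / N)    # DCT-II basis, shape (N, N)
C = basis @ x                                # forward cosine transform
w = np.r_[0.5, np.ones(N - 1)]               # halve the DC term
x_back = (2.0 / N) * (basis.T @ (w * C))     # inverse (DCT-III)
print(np.allclose(x, x_back))                # True: nothing was discarded
```

The point of the sketch is only that the cosine-transform coefficients carry
the full temporal information in a single real-valued pattern, without a
separate phase track.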
>> So pitch being a unitary percept does not rule out its relying on separate
I see this as correct and false at the same time. A unitary percept does not
rule out translation from a temporal to a spectral/place code and back. It
is, however, incorrect to exclude the possibility that the place code is
merely the most important preliminary stage of unitary temporal cortical
signal processing.
> I think that one should very carefully
> discriminate between the 'features' that are used to build an auditory
> percept, and the resulting perception.
On this, I absolutely agree.
>In the same sense that ITDs and
>ILDs are not 'space' but rather parts of an integrated percept, spectral
>and temporal cues for pitch are not 'pitch' but probably the building
>blocks that are unified higher up.
I would appreciate you being more precise about HOW the spectral code is
built in. Even the brain cannot unify two languages by just mixing them
together. I see the spectral/spatial code as a clever detour from the
temporal code. Do not exclude the possibility that the common neural
language is ultimately temporal, and that retranslation might already start
within the CN.
>The same low-level - high-level perceptual difficulties are also
>encountered in vision. For example, faces are perceived as whole things
>- there's quite a good evidence for that today, but nobody would claim
>that faces are extracted in the LGN or in V1. A psychological model that
>tries to account for this discrepancy between immediate perception on
>the one hand and the hierarchical, integrative processing of signal
>features on the other hand was developed by Merav Ahissar and Shaul
>Hochstein - I think this is worthwhile reading:
>Hochstein and Ahissar, Neuron 36(5):791-804, 2002
Could vision benefit from something similar to cepstral analysis? Mammalian
hearing presumably benefits from it with respect to the frequency range and
accuracy reached with a comparatively small number of hair cells. I began to
wonder when I was told that the number of neurons in some midbrain nuclei
was too small to account for that accuracy under spectral as well as
temporal models of hearing. Joint autocorrelation resolves this
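For readers unfamiliar with the cepstral idea, here is a small sketch of my
own (the 200 Hz harmonic complex and the 8 kHz sample rate are assumptions
for illustration only): the cepstrum finds the period of a harmonic complex
as a peak in the quefrency domain, and autocorrelation finds the same period
directly in the time domain.

```python
import numpy as np

fs = 8000                      # assumed sample rate (Hz)
f0 = 200.0                     # assumed fundamental; period = 40 samples
n = np.arange(2048)
# Harmonic complex: five harmonics with 1/h amplitudes
x = sum(np.sin(2 * np.pi * f0 * h * n / fs) / h for h in range(1, 6))

lo, hi = int(fs / 400), int(fs / 60)         # search 60..400 Hz

# Cepstrum: inverse FT of the log magnitude spectrum; the harmonic
# spacing f0 appears as a peak at quefrency fs/f0 samples.
spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
log_spec = np.log(np.maximum(spec, spec.max() * 1e-6))
cep = np.fft.irfft(log_spec)
q = lo + np.argmax(cep[lo:hi])
f0_cep = fs / q

# Autocorrelation: peak at the lag equal to the period.
r = np.correlate(x, x, mode='full')[len(x) - 1:]
lag = lo + np.argmax(r[lo:hi])
f0_acf = fs / lag

print(f0_cep, f0_acf)          # both close to 200 Hz
```

Both estimators recover the fundamental from the pattern of many harmonics
at once, which is why such schemes can be economical with a small number of
frequency channels.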