
Re: Importance of "phase" in short frequency chirps



Hello again Emad,

On reading the last paragraph of your latest message (below), it occurred to me that you may want to look at a different line of phase perception research, namely, the perception of repeating short frequency chirps. Up-chirps were originally designed to compensate for spatial dispersion in the cochlea and produce super clicks by coordinating neural firing across frequency in the auditory nerve. They are contrasted with down-chirps, which increase dispersion in the cochlea. I think these chirp stimuli might help you cast your problem in terms that auditory research can help to answer. The closest paper to your interests would appear to be

Uppenkamp, S., Fobel, S., and Patterson, R.D. (2001). The effects of temporal asymmetry on the detection and perception of short chirps. Hearing Research, 158, 71-83.
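
In case it helps to have the stimuli in hand, here is a minimal Python sketch of an up-chirp/down-chirp pair. The parameter values (10 ms duration, 100 Hz to 10 kHz, logarithmic sweep) are illustrative only; the optimised chirps of Dau et al. (2000) follow the inverse of the cochlear group-delay function rather than a simple logarithmic trajectory.

    import numpy as np
    from scipy.signal import chirp

    fs = 44100                              # sample rate in Hz
    dur = 0.010                             # 10 ms chirp
    t = np.arange(int(fs * dur)) / fs

    # Up-chirp: instantaneous frequency rises from 100 Hz to 10 kHz.
    up = chirp(t, f0=100.0, t1=dur, f1=10000.0, method='logarithmic')

    # Down-chirp: the up-chirp reversed in time. Time reversal of a real
    # signal leaves the magnitude spectrum unchanged, so the two stimuli
    # differ only in their pattern of phase delays across frequency.
    down = up[::-1]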

Below are the abstract and the later part of the Conclusions.

Regards Roy P

Abstract
There is an intriguing contrast between the physiological response to short frequency sweeps in the brainstem and the perception produced by these sounds. Dau et al. (2000) demonstrated that optimised chirps with increasing instantaneous frequency (up-chirps), designed to compensate for spatial dispersion along the cochlea, enhance wave V of the auditory brainstem response (ABR) by synchronising excitation of all frequency channels across the basilar membrane. Down-chirps, that is, up-chirps reversed in time, increase cochlear phase delays and therefore result in a poor ABR wave V. In this study, a set of psychoacoustical experiments with up-chirps and down-chirps has been performed to investigate how these phase changes affect what we hear. The perceptual contrast is different from what was reported at the brainstem level. It is the down-chirp that sounds more compact, despite the poor synchronisation across channels and phase delays of up to 20 ms. The perceived 'compactness' of a sound is apparently more determined by the fine structure of excitation within each peripheral channel than by between-channel phase differences. This suggests an additional temporal integration mechanism at a higher stage of auditory processing, which effectively removes phase differences between channels.

Later part of "Summary and Conclusions"
(2) The differences in the BM motion produced by up-chirps and down-chirps do affect the masking period patterns that they produce. The concentrated motion produced in response to down-chirps at the start of the period is followed by deeper valleys later in the period, and thus threshold is lower over a portion of the cycle.
[[This means that spike order is preserved in the auditory nerve and it affects detection of click signals.]]

(3) There is a big difference in the perceived sound quality of up-chirps and down-chirps. Down-chirps are perceived as more compact, or more click-like, than up-chirps. This occurs despite the synchronisation of activity across frequency channels produced by up-chirps, and the extended phase delay produced by down-chirps. The sound quality of short frequency sweeps appears to reflect within-channel fine structure rather than between-channel phase differences.

Time-domain models of auditory perception (Meddis and Hewitt, 1991; Patterson et al., 1992), which convert the phase-locked NAP (neural activity pattern) flowing from the cochlea into an array of time-interval histograms, can explain these effects, at least qualitatively, because the time-interval calculation removes between-channel phase differences. The apparent contrast between the effect of sweep direction on the ABR and the perception of compactness suggests that the temporal integration that removes between-channel phase differences is located beyond the input to the inferior colliculus, where wave V of the ABR is thought to be generated.
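
To make the last point concrete, here is a small Python sketch, with synthetic spike times and an arbitrary firing rate, showing that within-channel interval statistics are blind to a constant shift of one channel relative to another; this is the sense in which a time-interval calculation removes between-channel phase differences.

    import numpy as np

    rng = np.random.default_rng(0)

    def isi_hist(spike_times, bins):
        """Histogram of intervals between successive spikes."""
        return np.histogram(np.diff(np.sort(spike_times)), bins=bins)[0]

    # Two 'channels' with identical interval structure; channel B is
    # channel A delayed by 5 ms -- a pure between-channel phase shift.
    chan_a = np.cumsum(rng.exponential(0.01, size=1000))  # ~100 spikes/s
    chan_b = chan_a + 0.005

    bins = np.linspace(0.0, 0.05, 51)
    print(np.array_equal(isi_hist(chan_a, bins), isi_hist(chan_b, bins)))
    # -> True: the interval histograms carry no trace of the shift.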


On 08/10/2010 14:33, emad burke wrote:
Dear All,

Thanks a lot for all of these great and informative comments; I appreciate them all. As stated in several other comments, I had done that randomization of the phase of FFT coefficients, and for long signal lengths the distortion was usually quite clear; however, there was always the stereotype of deafness to short-term phase information, as Ken stated.
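
For reference, here is a minimal Python sketch of this kind of FFT phase randomization; it illustrates the general manipulation (keep the magnitude spectrum of a real signal, replace the phase with random values, invert), not necessarily the exact procedure used here.

    import numpy as np

    def randomize_phase(x, rng):
        spec = np.fft.rfft(x)
        phase = rng.uniform(-np.pi, np.pi, size=spec.shape)
        phase[0] = 0.0                  # keep the DC bin real
        if len(x) % 2 == 0:
            phase[-1] = 0.0             # the Nyquist bin of an even-length
                                        # signal must also stay real
        return np.fft.irfft(np.abs(spec) * np.exp(1j * phase), n=len(x))

    # Usage: y = randomize_phase(x, np.random.default_rng(0)) gives a
    # signal with the same magnitude spectrum as x but scrambled phase.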

However, based on the comments from Roy and Ken, I wanted to reformulate my question more practically. If the relative phase of activity in different channels doesn't matter, does that mean that if you shift the spike trains in auditory nerve fibers of different CFs with respect to each other, it would have no effect? In other words, if you change the order of spikes among different channels in the auditory nerve, would it have no effect on the perceived input?
If so, this seems strange, because it would mean that there is apparently no collective temporal structure at the auditory nerve level, and that the spike trains in each fiber are, at least in a short-term sense (as Ken refers to), independent of each other.
The answer to these questions is crucial for me, as I am trying to design a scheme for learning such spike trains, and I need to know what is and what is not important.

Also, regarding Ken's argument that the processing needs to be short-term: is there any precise definition of this short time period in a biological system? That is, do you think that the integration window of a typical neuron in the auditory pathway is adjusted in such a way that phase (mainly across different afferents) would not matter within that window, i.e., is it "spike order insensitive"?

By the way, I apologize for not providing the reference for the mathematical article that I referred to in the previous email. Here is a link to it:

http://www.math.missouri.edu/~pete/pdf/132-painless.pdf

Thanks a lot again; I can't wait to receive more comments and gain more information from this interesting thread.

Best
Emad


2010/10/8 Laszlo Toth <tothl@xxxxxxxxxxxxxxx>
On Thu, 7 Oct 2010, Kevin Austin wrote:

> When I first learned that phase was not important to hearing, it was in
> a specific context. It was (for example): take a sawtooth wave and
> invert it. It will sound the same, even though it is 180° out of phase.
> Therefore [?] the phase of the signal can be shifted by 180° (or
> inverted), and the ear is insensitive to this.

Interestingly, a parallel thread is running on the music-dsp mailing list
just about the same topic. You can find the archive here:
http://music.columbia.edu/pipermail/music-dsp/2010-October.txt
In particular, Sampo Syreeni wrote:
"But perhaps the most shocking discovery here is that with sufficiently
large test sets, 180 degree phase differences can be distinguished at a
statistically significant level, monaurally only. That I think was what
lead to the periodicity/rectifier models of pitch perception in the
first place, long before e.g. missing fundamental kind of phenomena were
propertly attributed to the nonlinearity of the cochlear cilia."
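
The polarity-inversion case above is easy to check numerically. A minimal Python sketch, assuming a 220 Hz sawtooth (any periodic waveform would do): inversion moves every component's phase by pi while leaving the magnitude spectrum untouched.

    import numpy as np
    from scipy.signal import sawtooth

    fs = 8000
    t = np.arange(fs) / fs                  # one second of signal
    saw = sawtooth(2 * np.pi * 220 * t)     # 220 Hz sawtooth
    inv = -saw                              # polarity-inverted copy

    print(np.allclose(np.abs(np.fft.rfft(saw)),
                      np.abs(np.fft.rfft(inv))))   # -> True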



-- 
Roy Patterson
Centre for the Neural Basis of Hearing
Department of Physiology, Development and Neuroscience
University of Cambridge, Downing Street, Cambridge, CB2 3EG
  phone     +44 (1223) 333819    fax 333840
  email:        rdp1@xxxxxxxxx
  http://www.pdn.cam.ac.uk/groups/cnbh/  
  http://www.AcousticScale.org