Subject: Re: Can a timbre affected by a shifted virtual pitch evoked by
From:    Eckard Blumschein  <Eckard.Blumschein(at)E-TECHNIK.UNI-MAGDEBURG.DE>
Date:    Mon, 24 Feb 2003 12:06:13 +0100

Martin,

I very much appreciate both your well-founded criticism and your courage in uttering it frankly. However, I feel we have to be a bit more cautious. Pitch is not the whole of hearing, and evolution did not follow the need for good musicians. I guess the survival of an animal depends rather on localization. How does it work? Except at very low or very high frequencies, the cochlea performs a Fourier cosine transform. This is only the first step. In order to map location, the brain then has to perform some kind of inverse transform, as radar and sonar do. Of course, the only way for neurons to do so is not Fourier analysis but a correlational, i.e. coincidence, analysis. Where might this happen? I learned that a first coarse but fast cross-correlation is very likely to occur within the SOC. Auto-correlation might be one of the most important cortical functions. If this speculation of mine is true, it could hopefully be the starting point for a first understanding of patterns other than location (presumably younger ones). Anyway, I wouldn't invest a single Euro in the old, stupid idea that tonotopy is already the whole story. Perhaps we don't have our gray cells in vain.
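To make the idea concrete, here is a minimal numerical sketch of such a coincidence analysis (in Python; all signals and parameters are invented for illustration, and this is only the bare mathematics, not a claim about the actual circuitry of the SOC). It estimates an interaural time difference by cross-correlating the two ear signals and reading off the lag of the coincidence peak:

import numpy as np

# Localization as coincidence analysis: the same noise reaches both
# "ears", one copy delayed by a small interaural time difference (ITD).
# The lag of the cross-correlation peak recovers that delay.
fs = 44100                                  # sample rate in Hz (illustrative)
true_itd = 20                               # delay in samples, about 0.45 ms
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)      # 100 ms of noise

left = source
right = np.concatenate([np.zeros(true_itd), source])[:len(source)]

# Cross-correlate the ear signals and pick the best-coinciding lag.
lags = np.arange(-len(source) + 1, len(source))
xcorr = np.correlate(right, left, mode="full")
estimated_itd = lags[np.argmax(xcorr)]
print(f"true ITD: {true_itd} samples, estimated: {estimated_itd} samples")

A neural version would of course replace the explicit delay-and-multiply by arrays of coincidence detectors, but the computation being approximated is the same.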
If I recall correctly, Bob Fendrich told me that there are people who have lost hearing but, paradoxically, are still able to indicate the location of a sound source.

Just an oddity to add to the high-low discussion: medieval churches owned only one big hymn book for all singers together. So the melody was written on top, for the taller singers, who have the deeper voices and stood behind the smaller, higher-pitched singers. So high was low.

Eckard
http://home.arcor.de/eckard.blumschein

At 20:01 22.02.2003 +0100, Martin Braun wrote:
>Chen-Gia Tsai wrote:
>
>> I have used a program (see
>> http://www.dcs.shef.ac.uk/~martin/MAD/auto/auto.htm ) and seen that the
>> shifted virtual pitches evoked by inharmonic components are precisely
>> predicted.
>
>> This success of autocorrelation in modeling pitch extraction is, at least
>> for me, very impressive.
>
>Chen-gia, it does not help if something is predicted by a model but not
>heard by human ears. Your sound files showed that the "predicted" pitches
>did not match the pseudo-pitches of your examples.
>
>As to the pitch model of autocorrelation, this too is a model that is
>anatomically and physiologically unrealistic. Ray Meddis, one of the major
>advocates of this model, has now given it up in favor of a new model (see
>below) that is based on anatomical and physiological data that were
>described in detail by Gerald Langner and me.
>
>[By the way, the term "virtual pitch" should no longer be used, because
>pitch is real, and "virtual", an extremely "fuzzy" term from the beginning,
>has in recent years gained a new meaning in IT contexts. Of the 1077
>abstracts of the presentations at the current ARO meeting, only one still
>uses this old term.]
>
>You are lucky that at this year's ARO meeting, which has started today, two
>presentations explicitly deal with your issue. Here are the two abstracts:
>
>[300]
>Pitch Shifts for Unresolved Complex Tones and the Implications for Models
>of Pitch Perception
>
>*Rebecca Kensey Watkinson, Christopher John Plack
>Department of Psychology, University of Essex, Colchester, United Kingdom
>
>This experiment compared the pitches of complex tones consisting of
>unresolved harmonics with fundamental frequencies (F0s) of 100, 125,
>166.67, and 250 Hz. The complexes were bandpass-filtered between the
>22nd and the 30th harmonic to produce a set of unresolved harmonics
>with distinct envelope peaks ("pitch pulses"). Each tone burst had a
>duration of 5 waveform cycles, and two tone bursts were presented
>consecutively, separated by a brief gap of either 0, 1, or 2 waveform
>periods. The envelope phase of the second tone burst in each pair was
>advanced or delayed by 0.25, 0.5, or 0.75 periods. Effectively, this
>resulted in a variation of the inter-pulse interval (IPI) between the two
>tone bursts. A no-shift control was also included, in which the IPI was
>fixed at an integer number of periods. Pitch matches were obtained by
>varying the F0 of a comparison complex tone with the same temporal
>parameters as the standard, but without the phase shift. Relative to the
>no-shift control, the variations in IPI produced substantial pitch shifts
>when there was no gap between the bursts, but no effect was seen for
>gaps of 1 or 2 periods. This is consistent with a pitch mechanism
>employing a long integration time for continuous stimuli that is reset in
>response to temporal discontinuities of greater than 1 period of the
>waveform. The results were inconsistent with the autocorrelation model
>of Meddis and O'Mard (1997), but a modification of the weighted
>mean-rate model of Carlyon et al. (2002) could account for the data.
>
>[376]
>A Model of the Physiological Basis of Pitch Perception
>
>*Raymond Meddis
>Psychology, University of Essex, Colchester, United Kingdom
>
>Little is known about how pitch is processed by the auditory nervous
>system. Autocorrelation models of pitch extraction have been successful
>in simulating a large number of psychophysical results in this area, but
>there is little support for the idea that the nervous system acts as an
>explicit autocorrelation device. To address this issue, this poster
>presents a design for a new model of pitch perception based upon
>known neural architecture and also presents some preliminary pitch
>analyses using the model. The model offers a physiologically plausible
>system for periodicity coding that avoids the need for the long delay
>lines required by autocorrelation. The system incorporates a model of
>the human auditory periphery, including outer/middle-ear transfer
>characteristics, nonlinear frequency analysis, and mechanical-electrical
>transduction by inner hair cells. The resulting 'auditory nerve' spike
>train is used as the input to three further stages of signal processing,
>thought to be located in the cochlear nucleus, the central nucleus, and
>the external cortex of the inferior colliculus, respectively. The
>complete model is implemented using DSAM, a development system for
>auditory modelling. The output from the system is the activity of a
>single array of neurons, each sensitive to a different periodicity. The
>pattern of activity across this array is uniquely related to the
>fundamental frequency of a harmonic complex. Testing of the model is
>still in its early stages, but the model has so far been tested
>successfully with a range of harmonic stimuli and iterated ripple noise
>stimuli. The poster will report on current progress in testing and
>refining the model.
>
>Martin
>
>-------------------------------------------
>Martin Braun
>Neuroscience of Music
>S-671 95 Klassbol
>Sweden
>e-mail: nombraun(at)telia.com
>web site: http://w1.570.telia.com/~u57011259/index.htm
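For anyone who wants to try the autocorrelation prediction that started this exchange, here is a minimal sketch. It is not the Meddis-O'Mard (1997) implementation (no cochlear filtering, no hair-cell transduction); it only applies the bare summary idea, i.e. pick the lag at which the waveform best coincides with itself, to an inharmonic complex. The component frequencies are chosen for illustration: harmonics 9-11 of 200 Hz, each shifted up by 40 Hz, for which the classical pitch-shift prediction is roughly 2040/10 = 204 Hz rather than 200 Hz.

import numpy as np

# Bare-bones autocorrelation pitch estimate for an inharmonic complex.
fs = 44100
t = np.arange(int(0.2 * fs)) / fs           # 200 ms of signal

# Harmonics 9-11 of 200 Hz, each shifted up by 40 Hz -> inharmonic.
freqs = [1840.0, 2040.0, 2240.0]
x = sum(np.cos(2 * np.pi * f * t) for f in freqs)

# One-sided autocorrelation; search lags corresponding to 80-400 Hz.
acf = np.correlate(x, x, mode="full")[len(x) - 1:]
lo, hi = int(fs / 400), int(fs / 80)
best_lag = lo + np.argmax(acf[lo:hi])
print(f"autocorrelation pitch estimate: {fs / best_lag:.1f} Hz")

The estimate lands near 204 Hz, i.e. at the shifted pitch the model predicts; whether listeners actually hear that pitch is precisely what is disputed above.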
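Similarly, the stimuli of abstract [300] are easy to approximate. The sketch below builds two 5-cycle bursts of harmonics 22-30 of a 100 Hz F0 (summed directly rather than bandpass-filtered) and advances the envelope phase of the second burst. This is only my reading of the abstract; the sampling rate, component phases, and gating are guesses where the abstract is silent.

import numpy as np

# Two consecutive tone bursts of unresolved harmonics (22nd-30th of F0);
# the second burst's envelope phase is advanced by a fraction of a period,
# which changes the inter-pulse interval (IPI) across the burst boundary.
fs = 44100
f0 = 100.0                                  # one of the four F0s used
t = np.arange(int(5 / f0 * fs)) / fs        # 5 waveform cycles per burst

def burst(shift_periods=0.0):
    # Harmonics 22-30 in sine phase; a common time shift of all components
    # moves the envelope peaks ("pitch pulses") within the gated burst.
    shift = shift_periods / f0
    return sum(np.sin(2 * np.pi * k * f0 * (t + shift)) for k in range(22, 31))

gap_periods = 0                             # 0, 1, or 2 waveform periods
gap = np.zeros(int(gap_periods / f0 * fs))
stimulus = np.concatenate([burst(0.0), gap, burst(0.25)])

Varying shift_periods over 0.25, 0.5, and 0.75 and gap_periods over 0, 1, and 2 covers the conditions of the experiment; the no-shift control is burst(0.0) twice.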


This message came from the mail archive
http://www.auditory.org/postings/2003/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University