
Re: Laminar cortical dynamics of speech perception

Dear Matt,

Our article does not provide a detailed model of the inner ear. My email responded to your comment about "high level processes".

Earlier work from our group does model some aspects of auditory filtering and streaming; e.g., http://cns.bu.edu/~steve/CohGroWyse1995JAcoustSocAm.pdf .


On Aug 8, 2011, at 11:00 PM, Matt Flax wrote:

> Steve,
> Thanks for linking us to your paper; here is a direct link:
> http://cns.bu.edu/~steve/GroKazJASA2011.pdf
> It appears that you are modelling the high level by extracting acoustic
> features. There appear to be physiological gaps between the high-level
> laminar structures and the beginning of the inner ear (the stapes).
> I don't mind the physiological gaps.
> Can you tell us to what degree you can model hearing?
> Are there certain phenomena of hearing which your physiological model
> can't explain?
> Matt
> On Mon, 2011-08-08 at 21:14 -0400, Stephen Grossberg wrote:
>> Dear Matt et al.,
>> About your claim that "we are a long way off having physiological models explain high level processes": One's view of this claim would depend on how sweeping it is intended to be. However, I just today got reprints of an article with Sohrob Kazerounian that just appeared in JASA which illustrates how high-level cortical processes may generate certain auditory illusions. The title of the article is:
>> Laminar cortical dynamics of conscious speech perception:
>> Neural model of phonemic restoration using subsequent context in noise.
>> JASA, 2011, 130, 440 - 460.
>> Its abstract says:
>> How are laminar circuits of neocortex organized to generate conscious speech and language percepts? How does the brain restore information that is occluded by noise, or absent from an acoustic signal, by integrating contextual information over many milliseconds to disambiguate noise-occluded acoustical signals? How are speech and language heard in the correct temporal order, despite the influence of context that may occur many milliseconds before or after each perceived word? A neural model describes key mechanisms in forming conscious speech percepts, and quantitatively simulates a critical example of contextual disambiguation of speech and language; namely, phonemic restoration. Here, a phoneme deleted from a speech stream is perceptually restored when it is replaced by broadband noise, even when the disambiguating context occurs after the phoneme was presented. The model describes how the laminar circuits within a hierarchy of cortical processing stages may interact to generate a conscious speech percept that is embodied by a resonant wave of activation that occurs between acoustic features, acoustic item chunks, and list chunks. Chunk-mediated gating allows speech to be heard in the correct temporal order, even when what is heard depends upon future context.
>> Steve Grossberg
>> http://cns.bu.edu/~steve
>> On Aug 8, 2011, at 6:06 PM, Matt Flax wrote:
>>> John,
>>> I agree with you ... however I think we are a long way off having
>>> physiological models explain high level processes ...
>>> There has been a considerable amount of work on physiological models
>>> of hearing. Such models don't necessarily assume anything other than a
>>> travelling wave in the cochlea. Other models reject the passive
>>> travelling wave ... all of them, however, have some form of
>>> resonance ... and this is tuned in some way ... but as you say, not an
>>> FT type of tuning.
>>> Once you leave the realm of psychoacoustics, you are at so low a level
>>> that you are no longer concerned with high-level 'illusions'. Before you
>>> can even start to model these high-level illusions, you must model the
>>> physiological elements of the cochlea.
>>> As such, when you are on the physiological level you are worried
>>> about things like two-tone suppression (not masking), emissions,
>>> distortion products and so on. Depending on how you go about modelling,
>>> you may also be modelling how the membranes and cells move in the inner ear.
>>> People do take elements of the physiological models and simplify them as
>>> the basis for high-level psychoacoustics work ... this is where the
>>> physiological basis of the gammatone and gammachirp filters comes
>>> from ... Richard Lyon - as another example - has recently re-worked his
>>> physiological bases for his high-level CASA system ...
>>> Matt
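
[The gammatone filter mentioned above as a simplified physiological basis can be sketched in a few lines. This is a minimal illustration only, using one common parameterization (fourth order, with the Glasberg-Moore ERB bandwidth formula); it is not the specific model of any poster in this thread.]

```python
import numpy as np

def erb(f):
    # Glasberg & Moore equivalent rectangular bandwidth, in Hz
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, dur=0.025, order=4, b=1.019):
    """Impulse response of a gammatone filter centred at fc Hz."""
    t = np.arange(int(dur * fs)) / fs
    # Gamma-distribution envelope times a tone at the centre frequency
    env = t ** (order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t)
    g = env * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))    # normalise peak to 1

fs = 16000
g = gammatone_ir(1000.0, fs)        # 25 ms impulse response at 1 kHz
```

A bank of these filters, with centre frequencies spaced along the ERB scale, is the usual front end for the high-level psychoacoustics work referred to above.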
>>> On Mon, 2011-08-08 at 10:52 -0400, John Bates wrote:
>>>> Hello Randy,
>>>> I think you are totally correct in your belief that a new paradigm for
>>>> auditory perception is needed.
>>>> This thread, which began with a question on how auditory illusions
>>>> might affect hearing aid operation, has missed the main point. In my
>>>> estimation, the greatest auditory illusion of all is that the ear
>>>> operates by spectrum analysis.
>>>> To Ohm and Helmholtz the Fourier transform was a mathematical delicacy that
>>>> they could use to justify any observation of tonal perception. Since then,
>>>> researchers have followed their lead by cherry-picking "interesting problems"
>>>> and proposing solutions involving impressive Fourierian mathematics but
>>>> always ending with ineffective, qualified results. There is now not even the
>>>> slightest hint of an applicable auditory model.
>>>> The reason for this stalemate is that there has been no complete system
>>>> analysis to establish the functional and physical requirements for a way
>>>> to extract meaning from environmental sounds. Why else could it be that
>>>> all animals have some kind of sonic perception? And, in terms of
>>>> biological survival, speech and music would be at the bottom of the list.
>>>> Yet, with us, these are, and have been, the primary goals of auditory
>>>> research.
>>>> An analysis of auditory requirements should reveal that there is no physical
>>>> way that a spectrum analyzer can respond to the temporal, spatial, and
>>>> physiological requirements that are easily accomplished by the ear-brain
>>>> system. Given that our ears do things that cannot be done in the Fourier
>>>> paradigm, it is only logical that a different paradigm can be discovered:
>>>> one that can explain those baffling psychophysical illusions without having
>>>> to make wild assumptions.
>>>> Good luck,
>>>> John Bates
>>>> ----- Original Message ----- 
>>>> From: "Ranjit Randhawa" <rsran@xxxxxxxxxxx>
>>>> To: <AUDITORY@xxxxxxxxxxxxxxx>
>>>> Sent: Sunday, August 07, 2011 11:50 AM
>>>> Subject: Re: Non-linear additions to linear models. (was On pitch and
>>>> periodicity (was "correction to post"))
>>>>> Hi Dick,
>>>>> My last observation is on your suggestion of adding non-linearity to some
>>>>> linear model to cover what some people may call illusions. As an
>>>>> aside, I believe Helmholtz was forced to add in the quadratic function
>>>>> only because experimentalists (Seebeck, I believe) were breathing down his
>>>>> neck proving the existence of the missing fundamental. I would
>>>>> have to strongly disagree with some of the conclusions reached from such
>>>>> quadratic and cubic expansions. When people said that a new paradigm
>>>>> was needed, I assumed they meant a totally new approach to signal
>>>>> analysis that did not necessarily adhere to any assumptions of
>>>>> linearity. Take, for example, a system based on rate of change of
>>>>> signal energy: it could right away explain some minor psycho-acoustic
>>>>> phenomena associated with changes in static pressure, or that tricky
>>>>> bias term that comes up when one is analyzing sounds like speech. But
>>>>> as I am sure you would point out, much more would be needed before such
>>>>> a statement could have any validity as the basis for a system theory. I
>>>>> agree. On that note, I believe that till some such system is offered
>>>>> for review, non-linear additions to linear models will have to do for
>>>>> the rest of us who are appalled by the associated mathematics. Regards,
>>>>> Randy Randhawa
>>>>> On 8/4/2011 1:42 PM, Richard F. Lyon wrote:
>>>>>> Randy,
>>>>>> I'll be the first to agree that linear systems theory is sometimes
>>>>>> stretched beyond where it makes sense, and that you need to use nonlinear
>>>>>> descriptions to describe pitch perception and most other aspects of
>>>>>> hearing, and more so when you get up to cognitive levels.
>>>>>> I'm sorry to hear that you "gave up on linear systems", because I don't
>>>>>> think it's possible to do much sensible with nonlinear systems when you
>>>>>> don't have linear systems as a solid base to build on. Certainly at the
>>>>>> level of HRTFs, cochlear function, and pitch perception models, a solid
>>>>>> understanding of linear systems theory is an indispensable prerequisite.
>>>>>> Then, the nonlinear modifications needed to make better models will seem
>>>>>> less "tortured".
>>>>>> Dick
>>>>>> At 10:33 AM -0400 8/4/11, Ranjit Randhawa wrote:
>>>>>>> Dear Dick,
>>>>>>> While linear system theories seem to work reasonably well with
>>>>>>> mechanical systems, I believe they fail when applied to biological
>>>>>>> systems. Consider that even Helmholtz had to appeal to non-linear
>>>>>>> processes (never really described) in the auditory system to account for
>>>>>>> the "missing fundamental" and "combination tones". Both of these
>>>>>>> psycho-acoustical phenomena have been well established, and explanations
>>>>>>> for pitch perception are either spectrally based or time based, with
>>>>>>> some throwing in learning and cognition to avoid having to make the
>>>>>>> harder decision that maybe this field needs a new paradigm. This new
>>>>>>> paradigm should be able to provide a better model that explains
>>>>>>> frequency (sound!) analysis in a fashion such that nothing is missing
>>>>>>> and parameter values can be calculated to explain pitch salience, a
>>>>>>> subject that seems never to be discussed in pitch perception models.
>>>>>>> Furthermore, such a new approach should also be able to explain why the
>>>>>>> cochlea is the shape it is, which as far as I can see has never been
>>>>>>> touched upon by existing signal processing methods. Finally, are these
>>>>>>> missing components "illusions" that are filled in, so to speak, by our
>>>>>>> higher-level cognitive capabilities? It is remarkable that this
>>>>>>> so-called filling-in process is as robust as it is, being more or less
>>>>>>> common to everyone, and therefore one wonders if all the other illusions
>>>>>>> are really not illusions but may have a perfectly good basis for their
>>>>>>> existence. If they were "illusions", one would expect, I would think, a
>>>>>>> fair amount of variation in the psycho-acoustic experimental results.
>>>>>>> I myself gave up on linear systems early in my study of this field and
>>>>>>> have felt that other systems, e.g. switching, may offer a better future
>>>>>>> explanatory capability, especially when it comes to showing some
>>>>>>> commonality of signal processing between the visual and the auditory
>>>>>>> system. To this end, I am quite happy to accept that I do not consider
>>>>>>> myself an expert in linear system theory.
>>>>>>> Regards,
>>>>>>> Randy Randhawa
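
[The "missing fundamental" discussed above can be demonstrated numerically without any non-linearity at the analysis stage: a time-domain (autocorrelation) analysis recovers the 200 Hz periodicity of a signal that contains only harmonics at 400, 600, and 800 Hz. A minimal sketch; the sample rate, frequencies, and lag search window are arbitrary choices for the illustration.]

```python
import numpy as np

fs = 8000                        # sample rate in Hz
t = np.arange(fs) / fs           # one second of time samples
# Harmonics 2*f0, 3*f0, 4*f0 of a 200 Hz fundamental that is itself absent
x = sum(np.sin(2 * np.pi * k * 200.0 * t) for k in (2, 3, 4))

# Autocorrelation via the Wiener-Khinchin theorem (power spectrum -> inverse FFT)
ac = np.fft.irfft(np.abs(np.fft.rfft(x)) ** 2)

# Search lags between 2.5 ms and 7.5 ms (pitch candidates roughly 133-400 Hz)
lag = 20 + np.argmax(ac[20:60])
print(fs / lag)                  # prints 200.0: the absent fundamental
```

The peak lands at a lag of 5 ms, the common period of all three harmonics, which is why time-based pitch models recover the fundamental that a naive spectral reading would miss.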