Re: AUDITORY Digest - 15 Dec 2005 to 16 Dec 2005 (#2005-254) (RAJKISHORE PRASAD )


Subject: Re: AUDITORY Digest - 15 Dec 2005 to 16 Dec 2005 (#2005-254)
From:    RAJKISHORE PRASAD  <profrkishore(at)YAHOO.COM>
Date:    Fri, 16 Dec 2005 22:19:08 -0800

Hello,

Let me introduce myself. I come from India and have been working in the area of blind separation of speech. Now I am interested in quantization of timbre. Can somebody give me some references?

Thanking you,
yours,
Rajkishore

AUDITORY automatic digest system <LISTSERV(at)lists.mcgill.ca> wrote:

There are 3 messages totalling 251 lines in this issue.

Topics of the day:

  1. The Auditory Continuity Illusion/Temporal Induction: Expanding the Discussion
  2. PEAQ Advanced model
  3. R-SPIN transcript

----------------------------------------------------------------------

Date: Fri, 16 Dec 2005 11:25:13 +0100
From: Christopher Petkov <CHRIS.PETKOV(at)TUEBINGEN.MPG.DE>
Subject: The Auditory Continuity Illusion/Temporal Induction: Expanding the Discussion

Dear all,

It's been great to see the recent discussions on the continuity illusion and temporal induction from everyone. I figured since we got called by name it was probably time to at least introduce ourselves to the list. The rest are just some of my own thoughts on the previous discussions.

As Eli (Nelken) wrote, Mitch Sutter, myself and Kevin O'Connor have been working on understanding a neurophysiological basis for the continuity illusion in A1 in awake macaques. In our J. Neurosci. 2003 paper we found behavioral support for the illusion occurring in macaques, which gave us the green flag (so to say) to pursue it neurophysiologically. (In hindsight I think we recorded from A1 because we thought it was named after Al Bregman.) Anyhow, we're hoping you'll have something to read on this soon enough, so I won't go into details here. But the enthusiasm is nice to see. In any case, we had a poster at APAN this year in D.C. that gave a glimpse of some of the findings.

My impression is that the neurophysiological bases behind many processes of auditory scene analysis are not that well known, or are at best just beginning to be understood. Behavior provides a great basis to guide these approaches, and Dick (Warren) and Al (Bregman), along with many others (Bob Carlyon, etc.), have an incredible body of work on this. As for the other methods, Mitch Steinschneider and his group (neurophysiology in macaques) and Christophe Micheyl and collaborators (MEG in humans and neurophysiology in macaques) have been tackling perceptual streaming in primates. Christophe Micheyl and Bob Carlyon also have a very nice paper on the continuity illusion using EEG and the mismatch negativity in humans. Eli and his group also neurophysiologically address aspects of segregating sound 'objects', as he noted. And others (too many to mention) have been addressing aspects of these or related scene analysis issues using fMRI, EEG and MEG in humans, and behavior or electrophysiology in various species. Certainly this work could be more extensive, but clearly many of the current neuroscientific techniques are being used.

We'll, of course, in the future go a long way toward addressing some of the issues that were brought up by Dick and others. Simply, we need more detail from the various techniques on how all levels of the auditory system contribute toward segregating sound mixtures and how perceptions are shaped during different processes (illusory or not). The discussion so far has centered specifically on auditory continuity, but streaming and continuity are just two models or descriptions of the natural abilities of a working (dare we say 'normal'?) auditory system. Even Al himself might tell you that there's a relationship between streaming and continuity (he's got work on this). Thus many of the questions and issues that were brought up are certainly more generally applicable to scene analysis.

Additionally, the better we know how the typical auditory system solves these problems, the better position we'll be in to understand how perceptions differ for impaired listeners. In this direction there's some behavioral literature on scene analysis, including our work on dyslexics (Sutter et al., 2000; Petkov et al., 2005). In those studies, we used a modified perceptual streaming paradigm to approximate the source of dyslexics' perceptual grouping impairments. Here, saying the impairment is spread over the periphery and brain is not so useful, since even if everything is affected, different areas are likely to be affected differently. Behavioral results can address this to some extent, but then other methods will have to step in.

It will be nice to see how groups come together on these issues, since each technique provides its own description (and inherent bias) of what is going on. Each method (including behavioral work) has a different scope on what is going on in the brain, with its own advantages (see Chris Stecker's and the subsequent discussion on this for fMRI) and limitations. From the perspective of electrophysiology, however, considering how long physiology from one auditory area takes, I'm hoping (and gambling) that something like fMRI can help guide the approach for us, or at least provide a more direct comparison to human fMRI data. Thus I'm excited about the modeling of auditory continuity by Fatima Husain, Barry Horwitz and their group. I do see Dick's point about how subcortical auditory areas also need to be considered in the modeling. But in regard to modeling for guiding human fMRI (I think a main objective of their work), imaging subcortically is a hurdle fMRI has yet to overcome.

There's of course much to be done. Yet if enthusiasm is a gauge of things to come, then we will undoubtedly see further work (using everyone's favorite technique) on many issues of auditory scene analysis in general, including, of course, further discussion of what each method contributes. I look forward to this.

Best wishes to everyone and happy holidays,

-Chris

===================================
Christopher I. Petkov
Max Planck Institute for Biological Cybernetics
Spemannstrasse 38
72076 Tuebingen, Germany
Ph: +49-7071-601-659
Fx: +49-7071-601-652
http://www.kyb.mpg.de/~chrisp

> Date: Wed, 14 Dec 2005 08:28:35 +0200
> From: Israel Nelken <ISRAEL(at)MD.HUJI.AC.IL>
> Subject: Re: The Auditory Continuity Illusion/Temporal Induction
>
> Dear all,
> There's some electrophysiological work in animals that has bearing
> on the issue of continuity. Mitch Sutter has strong evidence that the
> illusion is operative in macaques, and he has some accompanying
> electrophysiology (not yet published, to the best of my knowledge)
> showing correlates of induction in primary auditory cortex.
> We (Las et al., J. Neurosci. 2005) published data related to the coding
> of a pure tone in a fluctuating masker. Although our main emphasis was on
> comodulation masking release, the results can be interpreted in terms of
> continuity. In short, the responses of neurons in A1 of cats to the
> interrupted noise were very strong and locked to the noise envelope.
> Adding a low-level tone close to the BF of the neurons suppressed the
> envelope locking, resulting in responses that were similar to those
> evoked by tones in silence. Thus, these neurons seem to reflect the
> perceived continuity of the tone, ignoring the noise. We have further
> demonstrated that neurons with these responses are present in the
> auditory thalamus but not in the inferior colliculus. All of this would
> suggest that activity that reflects the continuity of the tone is
> already present in the thalamus/primary auditory cortex (although
> anesthetized cats are certainly not awake humans). We don't know, however,
> whether this activity is generated there or whether we see a reflection
> of processing at higher brain areas.
> Eli
>
> --
> ==================================================================
> Israel Nelken
> Dept. of Neurobiology
> The Alexander Silberman Institute of Life Sciences
> Edmond Safra Campus, Givat Ram | Tel: Int-972-2-6584229
> Hebrew University              | Fax: Int-972-2-6586077
> Jerusalem 91904, ISRAEL        | Email: israel(at)md.huji.ac.il
> ==================================================================

------------------------------

Date: Fri, 16 Dec 2005 18:47:06 +0100
From: Goran Bozidar Markovic <MR97411(at)ALAS.MATF.BG.AC.YU>
Subject: PEAQ Advanced model

Hello to all who are reading this.

I have implemented the Advanced model of PEAQ (ITU-R BS.1387-1) as part of my master's thesis and checked it many times, but I am unable to reproduce the ODG values for the conformance test as given in Table 23 of BS.1387. The values of the MOVs from the FFT-based ear model match very closely, but the values of the MOVs from the filter bank differ a lot, especially AvgLinDistA. To test my code, I have also implemented some parts of the Basic model - AvgModDiff1B and RmsNoiseLoudB. The values of those two MOVs differ significantly from the reference values in the conformance test, but are almost identical to the values from EAQUAL and PQevalAudio.

Please help me to identify the problem.

------------------------------

Date: Fri, 16 Dec 2005 16:31:24 -0800
From: asaram <ASARAM(at)BERKELEY.EDU>
Subject: R-SPIN transcript

Dear list,

Does anyone have a text file with the transcribed sentences of the R-SPIN test that they would be willing to share?
Cheers,

Tassos

------------------------------

End of AUDITORY Digest - 15 Dec 2005 to 16 Dec 2005 (#2005-254)
***************************************************************
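[For readers debugging a similar PEAQ implementation, the MOV-by-MOV comparison Markovic describes can be automated along these lines. This is a minimal sketch: the MOV names are from BS.1387, but the numeric reference values and the 5% tolerance are illustrative placeholders, not the standard's conformance figures.]

```python
# Sketch of a conformance check for a PEAQ implementation: compare each
# model output variable (MOV) computed by your code against published
# reference values for a conformance item, and flag large deviations.
# NOTE: the reference values and tolerance below are placeholders for
# illustration only -- substitute the actual numbers from the BS.1387
# conformance data for each test item.

def check_movs(computed, reference, rel_tol=0.05):
    """Return (name, computed, reference) triples for MOVs whose
    relative deviation from the reference exceeds rel_tol."""
    failures = []
    for name, ref in reference.items():
        got = computed[name]
        denom = abs(ref) if ref != 0 else 1.0
        if abs(got - ref) / denom > rel_tol:
            failures.append((name, got, ref))
    return failures

# Hypothetical values for one conformance item (not the real ITU numbers):
reference = {"AvgLinDistA": 0.50, "AvgModDiff1B": 30.0, "RmsNoiseLoudB": 0.20}
computed  = {"AvgLinDistA": 0.90, "AvgModDiff1B": 30.2, "RmsNoiseLoudB": 0.21}

for name, got, ref in check_movs(computed, reference):
    print(f"{name}: computed {got} vs reference {ref}")
```

Isolating which MOVs fail (filter-bank-derived versus FFT-ear-model-derived) in this way is one route to localizing the divergence to a particular processing stage.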


This message came from the mail archive
http://www.auditory.org/postings/2005/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University