[AUDITORY] Seminar Announcement - May 10 - E.A.R.S. (Electronic Auditory Research Seminars) ("Vogler, Nathan")


Subject: [AUDITORY] Seminar Announcement - May 10 - E.A.R.S. (Electronic Auditory Research Seminars)
From:    "Vogler, Nathan"  <Nathan.Vogler@xxxxxxxx>
Date:    Mon, 9 May 2022 21:17:28 +0000

Dear fellow neuroscientists,

We would like to invite you to join us tomorrow, May 10, at 1:00 pm EDT (UTC-4) for the next edition of E.A.R.S. (Electronic Auditory Research Seminars), a monthly auditory seminar series focused on central auditory processing and circuits. Please pre-register (for free) and tune in via Crowdcast (enter your email to receive the link for the talk): https://www.crowdcast.io/e/ears/18

(Note: for optimal performance, we recommend using Google Chrome as your browser.)

Speakers:

* Diego Elgueda (University of Chile): "Sound and behavioral meaning encoding in the auditory cortex"

  Animals adapt to their environment by analyzing sensory information, integrating it with internal representations (such as behavioral goals, memories of past stimulus-event associations, and expectations), and linking perception with appropriate adaptive responses. The mechanisms by which the brain integrates acoustic feature information with these internal representations are not yet clear. We are interested in understanding how auditory representations are transformed across the areas of the auditory cortex and how these areas interact with higher-order association areas of the cerebral cortex. We have shown that neurons in non-primary areas of the ferret auditory cortex, while responsive to auditory stimuli, can greatly enhance their responses to sounds when these become behaviorally relevant to the animal. Interestingly, tertiary area VPr can display responses that share similarities with those previously shown in ferret frontal cortex, in which attended sounds are selectively enhanced during performance of auditory tasks, and can also show long, sustained short-term memory activity after stimulus offset that correlates with the task response timing. To expand on these findings, we are currently training rats in a two-alternative forced-choice (2AFC) task, which will allow us to record from primary and non-primary areas of the auditory cortex, as well as from medial prefrontal cortex, and to explore how these areas represent sounds and interact during selective attention and decision-making.

* Narayan Sankaran (University of California, San Francisco): "Intracranial recordings reveal the encoding of melody in the human superior temporal gyrus"

  With cultural exposure across our lives, humans experience sequences of pitches as melodies that convey emotion and meaning. The perception of melody operates along three fundamental dimensions: (1) the pitch of each note, (2) the intervals in pitch between adjacent notes, and (3) how expected each note is within its musical context. To date, it is unclear how these dimensions are collectively represented in the brain and whether their encoding is specialized for music. I'll present recent work in which we used high-density electrocorticography to record local population activity directly from the human brain while participants listened to continuous Western melodies. Across the superior temporal gyrus (STG), separate populations selectively encoded pitch, intervals, and expectations, demonstrating a spatial code for independently representing each melodic dimension. The same participants also listened to naturally spoken English sentences. Whereas previous work suggests cortical selectivity for broad sound categories like 'music', here we demonstrate that music-selectivity is systematically driven by the encoding of expectations, suggesting neural specialization for representing a specific sequence property of music. In contrast, the pitch and interval dimensions of melody were represented by neural populations that also responded to speech and encoded similar acoustic content across the two domains. Melodic perception thus arises from the extraction of multiple streams of statistical and acoustic information via specialized and domain-general mechanisms, respectively, within distinct sub-populations of higher-order auditory cortex.

With kind wishes,

Maria Geffen
Yale Cohen
Steve Eliades
Stephen David
Alexandria Lesicko
Nathan Vogler
Jean-Hugues Lestang
Huaizhen Cai


This message came from the mail archive
src/postings/2022/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University