Re: music listening styles ("Rachel M. Theodore")


Subject: Re: music listening styles
From:    "Rachel M. Theodore"  <r.theodore@xxxxxxxx>
Date:    Tue, 1 Apr 2008 20:01:52 -0400
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear Dr. Jacobster ---

I appreciate your query - I should have been more explicit. I do not notice any difference between music with and without vocals --- which is why I've thought that perhaps music processing, for me, uses some sort of language resources. This is purely observational, of course, and we would need to control for those pieces that simply require more attention than others...

Thanks -

R

On Tue, Apr 1, 2008 at 7:41 PM, Dr. Harriet Jacobster <hjacobster@xxxxxxxx> wrote:

> Your observation brings up some interesting thoughts....
> When you say that you have difficulty with language tasks, do you notice
> any difference when you listen to purely instrumental versus vocal music? I
> can see the difficulty with listening to vocal music and still trying to
> engage in another linguistic task, but not so much with purely instrumental. I
> personally do not have that difficulty, but I have "mixed dominance" and
> have no problem listening to several auditory tasks at once.
> There was research done at Stanford Medical in which MRIs were recorded while
> subjects listened to symphonic music, to investigate the neural dynamics
> of event segmentation.
>
> Here is the reference:
> "Neural Dynamics of Event Segmentation in Music: Converging Evidence for
> Dissociable Ventral and Dorsal Networks." Devarajan Sridharan, Daniel J.
> Levitin, Chris H. Chafe, Jonathan Berger and Vinod Menon. Neuron,
> Volume 55, Issue 3, 2 August 2007, Pages 521-532.
>
> ~~~~~~~~~~~~~~~~~~~~~
> Harriet B. Jacobster, Au.D.
> Board Certified Doctor of Audiology
> hjacobster@xxxxxxxx
>
>
> Rachel M. Theodore wrote:
>
> Dear All ---
>
> I share Professor Repp's concern that personal anecdotes will interfere
> with the empirical basis of the original query - so I will follow my
> anecdote with a potential framework in which to quantify emotional
> experience during musical listening.
>
> As a classical pianist, I too find it quite difficult to engage in
> cognitive tasks when music is in the environment. Most notably, I have
> difficulty with language tasks, including writing - but also talking and
> listening (a problem indeed when music is played at departmental functions).
> I've often wondered whether this interference stems from the fact that my formal
> training as a pianist began at the same time I began formal training in
> reading - around the age of 5 years. I wish I could remember how I was able
> to separate learning a label such as "b" and knowing that it was a symbol
> not only for a pitch but also for a character in the alphabet. I wonder whether
> the perceptual phenomena experienced by early bilinguals would hold if the
> bilingual in question were fluent in both spoken language and music.
>
> Returning to the original post - I think that Larry Barsalou at Emory
> University is doing some neat work on situated conceptualization - his
> framework might provide a basis for exploring the experience of emotion
> during the perception of music. If conceptual knowledge is grounded in the
> brain's modal systems for perception and action, then we should be able to
> observe activity across emotion centers and auditory perception centers when
> listening to music "emotionally".
>
> Thank you ---
>
> R
>
> --
> _____________________________
>
> Rachel M. Theodore
> NIH Ruth L. Kirschstein Predoctoral Fellow
> Psychology Department - 125 NI
> Northeastern University
> 360 Huntington Ave.
> Boston, MA 02115
>
> 617.373.5551 (phone)
> 617.373.8714 (fax)
>
> http://web.mac.com/r.theodore
> _____________________________

--
_____________________________

Rachel M. Theodore
NIH Ruth L. Kirschstein Predoctoral Fellow
Psychology Department - 125 NI
Northeastern University
360 Huntington Ave.
Boston, MA 02115

617.373.5551 (phone)
617.373.8714 (fax)

http://web.mac.com/r.theodore
_____________________________


This message came from the mail archive
http://www.auditory.org/postings/2008/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University