Re: AUDITORY Digest - 20 Dec 2010 to 21 Dec 2010 (#2010-293) (Nikolai Novitski)


Subject: Re: AUDITORY Digest - 20 Dec 2010 to 21 Dec 2010 (#2010-293)
From:    Nikolai Novitski  <nikolai.novitski@xxxxxxxx>
Date:    Wed, 22 Dec 2010 10:31:44 +0100
List-Archive: <http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear list administrator,

Please unsubscribe me from this list.

Yours,
Nikolay Novitskiy

On Wed, Dec 22, 2010 at 6:01 AM, AUDITORY automatic digest system <LISTSERV@xxxxxxxx> wrote:

> There are 3 messages totalling 353 lines in this issue.
>
> Topics of the day:
>
>   1. Signals and Systems in Speech and Hearing, 2nd edition
>   2. SV: [AUDITORY] Rhythmic discrimination fovea?
>   3. Forum Acusticum 2011 abstract deadline soon!
>
> ----------------------------------------------------------------------
>
> Date:    Tue, 21 Dec 2010 09:22:43 +0000
> From:    Stuart Rosen <stuart@xxxxxxxx>
> Subject: Signals and Systems in Speech and Hearing, 2nd edition
>
> Peter Howell and I are pleased to announce that, just about exactly 20
> years after its first appearance, a new edition of 'Signals and Systems
> in Speech and Hearing' has appeared. This book aims to present the
> essentials of signals and systems analysis required by audiologists,
> phoneticians, speech and language therapists and psychologists
> interested in almost any aspect of speech and hearing. It will also be
> of use to people working on acoustic aspects of animal communication.
>
> Although the main thrust of the book remains unchanged (no
> modifications to Fourier's theorem have appeared recently!), many
> changes have been made to reflect the field's now nearly total reliance
> on digital means for the recording, manipulation, storage and
> transmission of signals.
>
> This is most strongly reflected in two chapters. Chapter 11, dealing
> with spectrograms, has been much extended and describes the two
> different ways in which spectrograms can be constructed – through
> filter banks and through time windowing – and the relationship between
> them. Chapter 14, dealing explicitly with digital signals and systems,
> has been expanded greatly to give concrete examples of digital systems
> and digital signal processing, including the notions of infinite
> impulse response (IIR) and finite impulse response (FIR) filters.
>
> Finally, more in response to our teaching experience than to any change
> in instrumentation, Chapter 12 now focuses on the notion of the
> auditory periphery as a set of systems, showing how its function is
> analogous to that of making a spectrogram.
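[The time-windowing construction mentioned above is the short-time Fourier
transform: slice the signal into overlapping windowed frames and take the
magnitude spectrum of each frame. A minimal sketch in Python/NumPy follows;
the window length and hop size are illustrative assumptions, not values
taken from the book.]

import numpy as np

def stft_spectrogram(x, fs, win_len=512, hop=256):
    """Magnitude spectrogram by time windowing (short-time Fourier transform).

    x is a 1-D signal and fs its sample rate in Hz; win_len and hop are
    illustrative defaults.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))                # one spectrum per frame
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)             # frequency axis in Hz
    times = (np.arange(n_frames) * hop + win_len / 2) / fs   # frame centres in s
    return times, freqs, mag

# Example: spectrogram of one second of a 440 Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
times, freqs, mag = stft_spectrogram(np.sin(2 * np.pi * 440 * t), fs)

[In the equivalent filter-bank view, each frequency bin corresponds to the
output of a bandpass filter whose bandwidth is set by the window length.]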
> More information about the book can be found at:
> http://www.phon.ucl.ac.uk/home/stuart/S&S_2010.html
>
> You can order an inspection copy here:
> http://info.emeraldinsight.com/promo/signals.htm
>
> On sale at Amazon:
> http://www.amazon.com/Signals-Systems-Speech-Hearing-2nd/dp/1848552262/ref=sr_1_1?ie=UTF8&s=books&qid=1292923151&sr=8-1
>
> UK site:
> http://www.amazon.co.uk/Signals-Systems-Speech-Hearing-2nd/dp/1848552262/ref=sr_1_1?ie=UTF8&s=books&qid=1292923151&sr=8-1
>
> I have used this book for some years now in a basic course aimed mostly
> at audiologists, the details of which can be found here:
> http://www.phon.ucl.ac.uk/courses/spsci/sigsys/
>
> --
> /*------------------------------------------------*/
> Stuart Rosen, PhD
> Professor of Speech and Hearing Science
> Co-director of the UCL Centre for Human Communication
> Speech, Hearing and Phonetic Sciences
> UCL Division of Psychology & Language Sciences
> 2 Wakefield Street
> London WC1N 1PF
> England
>
> Tel:   internal x24077
>        (+44 [0]20) 7679 4077
> Admin: (+44 [0]20) 7679 4050
> Fax:   (+44 [0]20) 7679 4238
>
> Email: stuart@xxxxxxxx
>
> Home page: http://www.phon.ucl.ac.uk/home/stuart
> /*------------------------------------------------*/
>
> ------------------------------
>
> Date:    Tue, 21 Dec 2010 11:07:00 +0100
> From:    Leon van Noorden <leonvannoorden@xxxxxxxx>
> Subject: Re: SV: [AUDITORY] Rhythmic discrimination fovea?
>
> Dear Eliot,
>
> I don't know of direct measurements of the discrimination of such
> patterns. One hypothesis I can think of is that such discriminations
> require counting the beats. This would be easiest if the duration of
> the temporal unit from which the pattern is constructed lies between
> 375 and 750 ms, with a shift towards the latter for untrained people.
> But perceiving the beat directly in such sequences could be difficult.
>
> Cf.:
> Van Noorden, L., & Moelants, D. (1999). Resonance in the Perception of
> Musical Pulse. Journal of New Music Research, 28(1), 43-66.
> For discrimination of tempo as such, you should look at the work of
> Michon.
>
> Kind regards,
>
> Leon van Noorden
> www.ipem.ugent.be
> www.unescog.org
>
>
> On 20 Dec 2010, at 18:31, Eliot Handelman wrote:
>
> > On 20/12/2010 10:52 AM, Guy Madison wrote:
> >> Hi Eliot,
> >>
> >> There are virtually countless variations of short rhythms like these.
> >> It's not clear to me what scientific question you want to address
> >> with them, and that determines to a large extent which references
> >> may be relevant.
> >
> > Sorry to be unclear, and thanks for the speedy reply. I am asking
> > specifically about the effect of tempo on rhythmic discrimination;
> > the example I gave was only intended to illustrate. I selected it
> > because it is especially simple:
> >
> > 2 1 1 can be divided into two parts: a long, and two shorts which add
> > up to the long. Now vary the rhythm so that the shorts are all the
> > same size but don't quite add up to the long, e.g. 10 6 6.
> >
> > My question is: at what tempo will such variations tend to be
> > perceived as being just the same as 2 1 1?
> >
> > If, e.g., the tempo is extremely slow (1 = 1 day, or maybe 8 seconds),
> > then I guess we do not perceive any difference. If the tempo is
> > extremely fast, then some variations will certainly also be
> > indistinguishable from 2 1 1 (e.g. 1000, 499, 499).
> >
> > To be clear: I'm asking about the effect of tempo/rate on
> > discrimination. I am guessing that there's some window with optimal
> > discrimination.
> >
> > The first of the references you gave below, for example, found tempo
> > to be a complex variable to control. The author also seems to be
> > working with rather complex rhythms of the sort that occur in serial
> > music and probably wanted to know whether anyone can hear these.
> > Sorry if I munged this, as I only looked rather quickly. In contrast,
> > I'm asking about very simple rhythms and what happens to simple
> > inequalities as the tempo is varied from very slow to very fast.
> >
> > The research problem behind this has to do with representations of
> > music at various levels of rhythmic approximation; in particular I am
> > studying patterns of alternation that can be induced over rhythmic
> > groups, given segmentation criteria. To construct different quantal
> > levels, I'm just using clustering algorithms on IOIs to generate base
> > structures used for further analysis, but it occurred to me that
> > there's one region, roughly between 80 and 800 ms, where (I think)
> > very fine discriminations can be made -- to which the clustering
> > algorithm should be sensitive.
> >
> > This is all part of my Jack & Jill automatic composition system: for
> > more information see my home page.
> >
> > best,
> >
> > -- eliot
> >
> >
> >> However, here are a few papers that should be relevant. Please mail
> >> me directly if you can provide a more detailed description of your
> >> goal, in which case I might be able to give more specific tips.
> >>
> >> Best, Guy
> >>
> >>   1. Carson, B. (2007). Perceiving and distinguishing simple timespan
> >>      ratios without metric reinforcement. Journal of New Music
> >>      Research, 36, 313-336.
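[As an aside on the clustering approach described above: one simple way to
derive quantal levels from inter-onset intervals (IOIs) is a one-dimensional
k-means, with each cluster centre taken as a candidate durational category.
The sketch below is a plain-NumPy illustration of that idea under an
arbitrary choice of k; it is not Eliot's actual implementation.]

import numpy as np

def cluster_iois(iois, k=2, n_iter=100, seed=0):
    """Group inter-onset intervals (in ms) into k durational categories
    with a basic 1-D k-means. Returns (sorted centres, labels)."""
    iois = np.asarray(iois, dtype=float)
    rng = np.random.default_rng(seed)
    centres = rng.choice(iois, size=k, replace=False)
    for _ in range(n_iter):
        # Assign each IOI to its nearest centre, then recompute the centres.
        labels = np.argmin(np.abs(iois[:, None] - centres[None, :]), axis=1)
        new_centres = np.array([iois[labels == j].mean() if np.any(labels == j)
                                else centres[j] for j in range(k)])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    order = np.argsort(centres)                 # report categories short-to-long
    return centres[order], np.argsort(order)[labels]

# A "2 1 1"-like pattern played somewhat unevenly: two well-separated
# categories should emerge when the IOIs fall in the roughly 80-800 ms
# region mentioned above.
iois = [400, 210, 195, 405, 190, 205, 410, 200, 198]
centres, labels = cluster_iois(iois, k=2)
print(centres)   # approximately [200, 405]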
>
> ------------------------------
>
> Date:    Tue, 21 Dec 2010 14:57:06 +0200
> From:    Ville Pulkki <ville@xxxxxxxx>
> Subject: Forum Acusticum 2011 abstract deadline soon!
>
> Dear colleague,
>
> It is with great pleasure that we hereby invite you to submit your
> abstracts for Forum Acusticum 2011, the triennial European conference of
> the European Acoustics Association (EAA), which will take place in
> Aalborg, Denmark from June 27 to July 1, 2011. (Previous Forum Acusticum
> meetings were organized in:
> - 1996: Antwerp, Belgium,
> - 1999: Berlin, Germany (jointly with ASA and DEGA),
> - 2002: Sevilla, Spain,
> - 2005: Budapest, Hungary,
> - 2008: Paris, France (jointly with ASA and SFA).)
>
> The Forum Acusticum embraces all fields of acoustics, and the technical
> programme will, among other things, include the following keynote
> lectures:
>
> ROOM AND BUILDING ACOUSTICS: What do we know in room acoustics?
> by Michael Vorländer, RWTH Aachen University, Germany
>
> COMPUTATIONAL ACOUSTICS: Modern Numerical Methods to Solve Real Life Acoustic Problems
> by Otto von Estorff, Hamburg University of Technology, Germany
>
> PSYCHOLOGICAL AND PHYSIOLOGICAL ACOUSTICS: Pitch
> by Alain de Cheveigné, CNRS, École Normale Supérieure, Université Paris Descartes,
> and University College London
>
> NOISE: Reduction of Tyre/Road Noise – A complex challenge needing technological development
> by Wolfgang Kropp, Chalmers University of Technology, Sweden
>
> MUSICAL ACOUSTICS: Modeling and simulation of musical instruments
> by Antoine Chaigne, ENSTA ParisTech, France
>
> ULTRASOUND: Laser Ultrasonics: Recent Achievements and Perspectives
> by Vitalyi Gusev, Université du Maine, France
>
> HYDROACOUSTICS:
> by Henrik Schmidt, Laboratory of Autonomous Marine Sensing Systems, MIT
>
> OTHER TOPICS: Advanced statistical analysis of perceptual audio evaluation data
> by Per Brockhoff, Technical University of Denmark, Denmark
>
> The technical programme will also include invited and contributed papers
> in structured parallel sessions and poster presentations. During the
> past months many proposals for structured sessions have been accepted,
> and we now invite abstracts for these sessions, as well as for all other
> scientific areas of Forum Acusticum in general. If you have been invited
> to participate in a given structured session, we kindly ask that you
> indicate that session as the area of your submission. We also welcome
> submissions within the general EAA TC areas (capital letters in the
> list), and in the field of acoustics generally.
>
> Abstracts should be 100 to 250 words, and the deadline for abstracts is
> January 9th, 2011. Notification of acceptance will be given by February
> 21st, 2011, and the deadline for the four-page conference papers is
> March 21st, 2011.
>
> The EAA offers free participation for a total of 10 East European Ph.D.
> students. Detailed instructions for this will be posted on the
> conference website soon, but please tick the box 'EAA grant applicant'
> when submitting your abstract if you intend to apply for free
> participation through the EAA.
>
> Information for exhibitors and sponsors will follow soon. If you wish to
> be notified directly about options for the exhibition, booth sizes,
> prices, etc., please send your contact information to
> <exhibition@xxxxxxxx>.
>
> We hope that the Forum Acusticum conference will be a fruitful meeting
> point for researchers and practitioners in all fields of acoustics and
> sound-related research.
>
> Welcome to Forum Acusticum 2011 in Denmark!
>
> Ville Pulkki, Sessions Chairman
>
> Flemming Christensen, General Secretary
>
> Dorte Hammershøi, General Chairman
>
>     __________________________________________________________________________
>          powered by Conference Accelerator - http://www.intellagence.eu/
>
> ------------------------------
>
> End of AUDITORY Digest - 20 Dec 2010 to 21 Dec 2010 (#2010-293)
> ***************************************************************

