Re: [AUDITORY] Serial dependence in auditory perception (Matthew Winn)


Subject: Re: [AUDITORY] Serial dependence in auditory perception
From:    Matthew Winn  <0000011b522b2e6a-dmarc-request@xxxxxxxx>
Date:    Wed, 26 Jan 2022 00:02:59 -0600

You'll find a lot of studies that explore this type of phenomenon under the
general labels "auditory context effects", "spectral contrast effects", or
"auditory enhancement effects". Christian Stilp has written a review of how
these effects emerge in the perception of speech sounds, and his paper is
available here:

https://louisville.edu/psychology/stilp/lab/publication-pdf/stilp-2019-wires-1

The abstract:

*The extreme acoustic variability of speech is well established, which makes
the proficiency of human speech perception all the more impressive. Speech
perception, like perception in any modality, is relative to context, and this
provides a means to normalize the acoustic variability in the speech signal.
Acoustic context effects in speech perception have been widely documented, but
a clear understanding of how these effects relate to each other across stimuli,
timescales, and acoustic domains is lacking. Here we review the influences that
spectral context, temporal context, and spectrotemporal context have on speech
perception. Studies are organized in terms of whether the context precedes the
target (forward effects) or follows it (backward effects), and whether the
context is adjacent to the target (proximal) or temporally removed from it
(distal). Special cases where proximal and distal contexts have competing
influences on perception are also considered. Across studies, a common theme
emerges: acoustic differences between contexts and targets are perceptually
magnified, producing contrast effects that facilitate perception of target
sounds and words. This indicates enhanced sensitivity to changes in the
acoustic environment, which maximizes the amount of potential information that
can be transmitted to the perceiver.*

Many other studies exist under this heading as well, including some in
listeners with hearing impairment/cochlear implants by Feng, Oxenham, and
others.

Best of luck,
Matt

On Tue, Jan 25, 2022 at 11:34 PM Haren, Jorie van (PSYCHOLOGY)
<jjg.vanharen@xxxxxxxx> wrote:

> Dear all,
>
> I am a new PhD student examining predictive and contextual processes in
> auditory perception.
> Until now my studies were mostly oriented toward visual perception; I am
> just now making the switch to auditory.
>
> In the visual domain there is the concept of serial dependence (in grating
> studies), where orientation perception is attracted toward the previous
> stimulus (N-1) when the two stimuli are relatively similar, and repelled
> away from it when they are relatively different.
>
> I was wondering whether there is an auditory equivalent to this phenomenon
> (in, e.g., tune perception).
> So far my search has been without any luck; it would be greatly appreciated
> if someone could direct me to the relevant literature.
> Thank you all in advance,
> Jorie van Haren

--
Matthew Winn, AuD, PhD
Associate Professor
Speech-Language-Hearing Sciences
University of Minnesota
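
For anyone following up on the serial-dependence question itself: in the visual
literature this bias is commonly quantified by fitting a derivative-of-Gaussian
(DoG) curve to signed response errors as a function of the previous-minus-current
stimulus difference, and the same analysis carries over to an auditory
reproduction task (e.g., adjusting a probe tone to match a remembered target).
Below is a minimal Python sketch of that analysis; the simulated trial data,
the variable names, and the particular DoG scaling are illustrative assumptions,
not taken from any study mentioned in this thread.

# A minimal sketch of a serial-dependence analysis for an auditory
# reproduction task. Everything below (the simulated data, variable names,
# and parameter values) is illustrative; substitute real per-trial targets
# and responses where indicated.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Stand-in for real data: target feature per trial (e.g., cents re: 440 Hz).
n_trials = 2000
target = rng.uniform(-60.0, 60.0, n_trials)
prev_minus_curr = np.roll(target, 1) - target   # N-1 minus N stimulus difference

def dog(delta, amplitude, width):
    # First-derivative-of-Gaussian bias curve; `amplitude` is the peak bias
    # (positive = attraction toward the previous stimulus).
    c = np.sqrt(2.0) / np.exp(-0.5)             # scales the peak to `amplitude`
    return amplitude * c * width * delta * np.exp(-(width * delta) ** 2)

# Simulated responses: a modest attractive pull toward trial N-1 plus noise.
response = target + dog(prev_minus_curr, 3.0, 0.03) + rng.normal(0.0, 6.0, n_trials)
error = response - target                       # signed reproduction error

# Fit the DoG to error vs. previous-minus-current difference
# (skip trial 0, which has no predecessor).
popt, _ = curve_fit(dog, prev_minus_curr[1:], error[1:], p0=[1.0, 0.02])
print(f"fitted amplitude {popt[0]:.2f}, width {popt[1]:.3f} "
      "(positive amplitude = attraction, negative = repulsion)")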


This message came from the mail archive
src/postings/2022/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University