
Re: AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)



Could you please post the call for submissions for the following conference?
 
Thank you,

Sandra Quinn
 
________________________________________________________________
PREDICTING PERCEPTIONS: The 3rd International Conference on Appearance
Edinburgh, 17-19 April 2012

Following on from two highly successful cross-disciplinary conferences
in Ghent and Paris, we are very happy to invite submissions for the
above event.

IMPORTANT DATES
- 05 December 2011: Submission deadline
- 19 December 2011: Review allocation to reviewers
- 09 January 2012: Review upload deadline
- 14 January 2012: Authors informed
- 17-19 April 2012: Conference


CONFERENCE WEBSITE
www.perceptions.macs.hw.ac.uk


INVITED SPEAKERS
- Larry Maloney, Dept. of Psychology, New York University, USA.
- Françoise Viénot, Muséum National d'Histoire Naturelle, Paris, France.

CONFERENCE CHAIRS
Mike Chantler, Julie Harris, Mike Pointer

SCOPE
Originally focused on the perception of texture, translucency, gloss and
colour, the conference is now being extended to include senses other than
sight (e.g. how does sound affect our perception of the qualities of a
fabric?), emotive as well as objective qualities (e.g. desirability and
engagement), and digital as well as physical media.

CALL FOR PAPERS
This conference addresses appearance in its broadest sense and seeks
to be truly cross-disciplinary. Papers related, but not restricted, to
the following topics are welcome:

- Prediction and measurement of human perceptions formed by sensory
input of the physical and digital worlds
- New methods for estimating psychometric transfer functions
- Methods for measuring perceived texture, translucency and form
- Effects of lighting and other environmental factors on perception
- Effects of binocular viewing, motion parallax, and depth from focus
- Methods for measuring engagement and emotions such as desirability
- Effects of other sensory input (e.g. audio, smell, touch)
- Effects of user control of media
- Colour fidelity, colour harmony, colour and emotion
- Methods for measuring inferred qualities including expensiveness,
quality, wearability, etc.
- Techniques for encouraging and facilitating observer participation
(design games, gamification of experiments, crowd sourcing etc.)
- Saliency

_______________________________________________

Predicting Perceptions: the 3rd International Conference on Appearance

http://www.perceptions.macs.hw.ac.uk/
 
> Date: Tue, 25 Oct 2011 00:12:54 -0400
> From: LISTSERV@xxxxxxxxxxxxxxx
> Subject: AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
> To: AUDITORY@xxxxxxxxxxxxxxx
>
> There are 5 messages totalling 546 lines in this issue.
>
> Topics of the day:
>
> 1. Glitch-free presentations with Windows 7 and Matlab
> 2. question about streaming (3)
> 3. Workshop announcement: The Listening Talker
>
> ----------------------------------------------------------------------
>
> Date: Mon, 24 Oct 2011 12:16:04 +0200
> From: Martin Hansen <martin.hansen@xxxxxxxxxx>
> Subject: Re: Glitch-free presentations with Windows 7 and Matlab
>
> Hi all,
>
> Trevor has mentioned PortAudio as one solution (and so have The MathWorks
> themselves, in a recent email to a colleague of mine).
>
> Already some years before this Matlab 2011 problem popped up, we had
> used PortAudio to create our "msound" tool, which is a wrapper for
> PortAudio for block-wise audio input and output, of (in principle)
> unlimited duration. You can download it freely from here:
> http://www.hoertechnik-audiologie.de/web/file/Forschung/Software.php#msound
>
>
> It is written as a mex file and published under the free LGPL license.
> It contains the precompiled mex-files "msound" for Windows (dll, mexw32)
> and Linux, and also some example functions, e.g. one called
> "msound_play_record.m", which does simultaneous output and input to and
> from your soundcard for as long as your output lasts. This function
> also handles all initialization automatically for you. Another function,
> called "msound_play.m", does what its name suggests.
> We have had msound running for several years now, and a large number of
> our students have used it successfully for their assignments, projects
> and theses.
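>
> As a rough illustration, a call might look like the sketch below (the
> argument order here is an assumption only; check the msound help text
> for the actual signature):
>
>     % Hypothetical usage sketch for msound_play_record (argument order
>     % assumed; see "help msound_play_record" for the real API).
>     fs  = 44100;
>     out = sin(2*pi*440*(0:fs-1)'/fs);    % 1 s, 440 Hz test tone
>     rec = msound_play_record(out, fs);   % play and record simultaneously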
>
> Best regards,
> Martin
>
>
> --
> Prof. Dr. Martin Hansen
> Jade Hochschule Wilhelmshaven/Oldenburg/Elsfleth
> Dean of Studies (Studiendekan), Hörtechnik und Audiologie
> Ofener Str. 16/19
> D-26121 Oldenburg
> Tel. (+49) 441 7708-3725 Fax -3777
> http://www.hoertechnik-audiologie.de/
>
>
>
>
>
> On 18.10.2011 19:29, David Magezi wrote:
> > Many thanks for that review, Trevor.
> >
> > I'm not sure if the following has been mentioned: there appears to be a
> > Matlab-ASIO interface from the University of Birmingham (UK), using ActiveX.
> >
> > http://www.eee.bham.ac.uk/collinst/asio.html
> >
> > I would also be keen to hear of other solutions found,
> >
> > D
> >
> > ***************************************************
> > David Magezi
> >
> > ***************************************************
> >
> > ________________________________
> > From: Trevor Agus <Trevor.Agus@xxxxxx>
> > To: AUDITORY@xxxxxxxxxxxxxxx
> > Sent: Tuesday, October 18, 2011 5:52 PM
> > Subject: [AUDITORY] Glitch-free presentations with Windows 7 and Matlab
> >
> > I've found it surprisingly difficult to present glitch-free sounds with
> > Windows 7.
> >
> > The short answer is that Padraig Kitterick's "asioWavPlay" seems to be
> > the simplest reliable method (remembering to buffer the waveforms with
> > 256 samples of silence to avoid truncation issues). For those with more
> > complex needs, perhaps soundmexpro or PsychToolbox would be better. I'd
> > value any second opinions and double-checking, so a review of the
> > options follows, with all the gory details.
> >
> > I've been using a relatively old version of Matlab (R2007b) with a
> > Fireface UC soundcard. If the problems are fixed in another version or
> > soundcard, I'd love to know about it.
> >
> > ===Matlab's native functions (sound, wavplay, audioplayer)
> > Large, unpredictable truncations were the least of our problems. We also
> > often got mid-sound glitches, ranging from sporadic (just a few subtle
> > glitches per minute) to frequent (making the sound barely recognisable).
> > The magic formula for eliminating the glitches seemed to be to keep the
> > soundcard turned off until the desktop was ready, with all background
> > programs loaded. (Restarting either the soundcard or the computer alone
> > guaranteed some glitches.) So this formula seems to work, but it's a bit
> > too Harry Potter for my liking, and the spell might change with the next
> > Windows update. I think I read that Fireface were no longer supporting
> > Microsoft's vagaries, and they recommended using ASIO. I'm not sure if
> > other high-end soundcard manufacturers are any different. Since Matlab's
> > native functions don't support ASIO (unless the new versions do?), I
> > think we're forced to look at the ASIO options.
> >
> > ===playrec
> > This seems to be potentially the most flexible method of presenting
> > sounds, but I've hit a brick wall compiling it for Windows 7. I think
> > its author stopped providing support for it a few years ago. Has anyone
> > had more success than me?
> >
> > ===asioWavPlay
> > This simply presents a .wav file using ASIO. It's a little annoying
> > that you have to save your sound to disk before presenting it, but as
> > Joachim pointed out, it's not too difficult to automate this process.
> > While doing that, I add 256 samples of silence to the end to work
> > around the truncation problem, as in the sketch below.
> >
> > ===pa_wavplay
> > This is nearly the perfect solution except that (1) the number of
> > samples truncated from the end is slightly unpredictable and (2) it
> > prints a message on the screen after every sound ("Playing on device
> > 0"). For these two reasons, I prefer asioWavPlay.
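> >
> > For comparison, a pa_wavplay sketch (the argument order
> > pa_wavplay(buffer, samplerate, deviceid, devicetype) is assumed from
> > its documented usage; pad generously given the unpredictable
> > truncation):
> >
> >     fs = 44100;
> >     y  = [sin(2*pi*500*(0:fs-1)'/fs); zeros(512,1)];  % padded tone
> >     pa_wavplay(y, fs, 0, 'asio');   % prints "Playing on device 0"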
> >
> > ===soundmexpro
> > This might be the best choice for the high-end user (I've just had a
> > quick look at the demo version today). It's easy to install and there
> > are good tutorials, but it involves initialising sound objects, etc. --
> > it's not just a replacement for Matlab's "sound" command. Also, it
> > looks like it's €500+.
> >
> > ===PsychToolbox
> > Originally designed for visual experiments, PsychToolbox has now got
> > quite extensive low-latency sound functions, including realtime
> > continuous playing/recording. It's also free. However, it's slightly
> > challenging to install. Like soundmexpro, it's object-oriented -- so
> > don't expect to play a sound with a simple one-liner.
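> >
> > For completeness, a minimal PsychPortAudio sketch (PsychToolbox's
> > sound engine; mode 1 = playback only, latency class 1 = aim for low
> > latency):
> >
> >     InitializePsychSound(1);                     % push for low latency
> >     fs = 44100;
> >     y  = sin(2*pi*750*(0:fs-1)/fs);              % 1 s tone; rows = channels
> >     pahandle = PsychPortAudio('Open', [], 1, 1, fs, 1);
> >     PsychPortAudio('FillBuffer', pahandle, y);
> >     PsychPortAudio('Start', pahandle, 1, 0, 1);  % play once, wait for onset
> >     PsychPortAudio('Stop', pahandle, 1);         % wait for playback end
> >     PsychPortAudio('Close', pahandle);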
> >
> > ===PortAudio
> > Most of the above programs are based on this C library. If you're an
> > experienced programmer, perhaps you'd prefer to go direct to the
> > source? And while you're there, perhaps you could write the perfect
> > Matlab-ASIO interface for the rest of us? (Please!)
> >
> > Has anyone found a simpler solution? I'd be glad to hear it.
> >
> > Trevor
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 14:06:26 +0100
> From: A Davidson <pspc1d@xxxxxxxxxxxx>
> Subject: question about streaming
>
> Hello everyone,
>
> I was wondering if anyone could point me in the direction of some
> clear and relatively simple tutorial information and/or good review
> papers about streaming and the problems of trying to discern between
> two auditory stimuli presented to two different ears concurrently.
>
> Many thanks,
>
> Alison
>
> ----------------------------------------------------------------
> This message was sent using IMP, the Internet Messaging Program.
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 17:37:39 +0100
> From: Etienne Gaudrain <egaudrain.cam@xxxxxxxxx>
> Subject: Re: question about streaming
>
> Dear Alison,
>
> First, because it is so recent, a paper by Stainsby et al. on
> sequential streaming:
>
> Sequential streaming due to manipulation of interaural time differences.
> Stainsby TH, Fullgrabe C, Flanagan HJ, Waldman SK, Moore BC.
> J Acoust Soc Am. 2011 Aug;130(2):904-14.
> PMID: 21877805
>
> Otherwise, two papers that include a fairly comprehensive review of
> the literature:
>
> Spatial release from energetic and informational masking in a divided
> speech identification task.
> Ihlefeld A, Shinn-Cunningham B.
> J Acoust Soc Am. 2008 Jun;123(6):4380-92.
> PMID: 18537389
>
> Spatial release from energetic and informational masking in a selective
> speech identification task.
> Ihlefeld A, Shinn-Cunningham B.
> J Acoust Soc Am. 2008 Jun;123(6):4369-79.
> PMID: 18537388
>
> These are not review papers, but you might find what you're looking for.
>
> -Etienne
>
>
> On 24/10/2011 14:06, A Davidson wrote:
> > Hello everyone,
> >
> > I was wondering if anyone could point me in the direction of some
> > clear and relatively simple tutorial information and/or good review
> > papers about streaming and the problems of trying to discern between
> > two auditory stimuli presented to two different ears concurrently.
> >
> > Many thanks,
> >
> > Alison
> >
> > ----------------------------------------------------------------
> > This message was sent using IMP, the Internet Messaging Program.
>
>
> --
> Etienne Gaudrain, PhD
> MRC Cognition and Brain Sciences Unit
> 15 Chaucer Road
> Cambridge, CB2 7EF
> UK
> Phone: +44 1223 355 294, ext. 645
> Fax (unit): +44 1223 359 062
>
> ------------------------------
>
> Date: Mon, 24 Oct 2011 12:31:38 -0700
> From: Diana Deutsch <ddeutsch@xxxxxxxx>
> Subject: Re: question about streaming
>
>
> Hi Alison,
>
> You might want to read my review chapter:
>
> Deutsch, D. Grouping mechanisms in music. In D. Deutsch (Ed.), The
> Psychology of Music, 2nd Edition, 1999, 299-348, Academic Press. [PDF
> Document]
>
> The book is going into a third edition, and the updated chapter should
> be available in a few months.
>
> Cheers,
>
> Diana Deutsch
>
>
> Professor Diana Deutsch
> Department of Psychology
> University of California, San Diego
> 9500 Gilman Dr. #0109
> La Jolla, CA 92093-0109, USA
>
> 858-453-1558 (tel)
> 858-453-4763 (fax)
>
> http://deutsch.ucsd.edu
> http://www.philomel.com
>
>
>
>
> On Oct 24, 2011, at 6:06 AM, A Davidson wrote:
>
> > Hello everyone,
> >
> > I was wondering if anyone could point me in the direction of some
> > clear and relatively simple tutorial information and/or good review
> > papers about streaming and the problems of trying to discern between
> > two auditory stimuli presented to two different ears concurrently.
> >
> > Many thanks,
> >
> > Alison
> >
> > ----------------------------------------------------------------
> > This message was sent using IMP, the Internet Messaging Program.
>
>
> ------------------------------
>
> Date: Tue, 25 Oct 2011 00:04:32 +0200
> From: Martin <m.cooke@xxxxxxxxxxxxxx>
> Subject: Workshop announcement: The Listening Talker
>
>
> The Listening Talker: an interdisciplinary workshop on natural and
> synthetic modification of speech in response to listening conditions
>
> Edinburgh, 2-3 May 2012
>
> http://listening-talker.org/workshop
>
> When talkers speak, they also listen. Talkers routinely adapt to their
> interlocutors and environment, maintaining intelligibility and dialogue
> fluidity in a way that promotes efficient exchange of information. In
> contrast, current speech output technology is largely deaf, incapable
> of adapting to the listener's context, inefficient in use and lacking
> the naturalness that comes from rapid appreciation of the
> speaker-listener environment. A key scientific challenge is to better
> understand how "talker-listeners" respond to context and to apply these
> findings to the modification of natural (live/recorded) and generated
> (synthetic) speech. The ISCA-supported Listening Talker (LISTA)
> workshop brings together linguists, psychologists, neuroscientists,
> engineers and others working on human and machine speech perception and
> production, to explore new approaches to context-sensitive speech
> generation.
>
> The workshop will be single-track, with invited talks and contributed
> oral and poster presentations. An open call for a special issue of
> Computer Speech and Language on the theme of the listening talker will
> follow the workshop.
>
> Contributions are invited on any aspect of the listening talker,
> including but not limited to:
>
> - theories and models of human communication involving the listening talker
> - human speech production modifications induced by noise
> - speech production changes with manipulated feedback
> - algorithms/vocoders for speech modification
> - transformations from casual to clear speech
> - characterisation of the listening context
> - intelligibility and quality metrics for modified speech
> - application to natural dialogues, PA, teleconferencing
>
> Invited speakers
>
> Torsten Dau (Danish Technical University)
> Valerie Hazan (University College, London)
> Richard Heusdens (Technical University Delft)
> Hideki Kawahara (Wakayama University)
> Roger Moore (University of Sheffield)
> Martin Pickering (University of Edinburgh)
> Peter Vary (Aachen University)
> Junichi Yamagishi (University of Edinburgh)
>
> Important dates
>
> 30th January 2012: Submission of 4-page papers
> 27th February 2012: Notification of acceptance/rejection
>
> Co-chairs
>
> Martin Cooke (University of the Basque Country)
> Simon King (University of Edinburgh)
> Bastiaan Kleijn (Victoria University of Wellington)
> Yannis Stylianou (University of Crete)
>
>
> ------------------------------
>
> End of AUDITORY Digest - 23 Oct 2011 to 24 Oct 2011 (#2011-246)
> ***************************************************************