[AUDITORY] Announcing the First Clarity Enhancement Challenge for Hearing Aid Signal Processing (Jon Barker)


Subject: [AUDITORY] Announcing the First Clarity Enhancement Challenge for Hearing Aid Signal Processing
From:    Jon Barker  <j.p.barker@xxxxxxxx>
Date:    Wed, 3 Feb 2021 10:09:29 +0000

We are pleased to announce the launch of the first Clarity Enhancement Challenge for Hearing Aid Signal Processing. Details are available at the Clarity Challenge website (www.claritychallenge.org). Sample data is available now for download; the full dataset and tools will be released on 15th February 2021.

*Important Dates*

- 1st Feb - Challenge launch and release of sample data
- 15th Feb - Release of full dataset, tools and baseline system
- 1st May - Evaluation data released
- 1st June - Submission deadline
- June-August 2021 - Listening test evaluation period
- September 2021 - Results announced at a Clarity Challenge Workshop in conjunction with Interspeech 2021

There will be *cash prizes* for the systems that top our evaluation.

*Background*

We are organising a series of machine learning challenges to advance hearing aid speech signal processing. Even if you've not worked on hearing aids before, we'll provide you with the tools to enable you to apply your machine learning and speech processing algorithms to help those with a hearing loss.

Although age-related hearing loss affects 40% of 55 to 74 year-olds, the majority of adults who would benefit from hearing aids don't use them. A key reason is simply that hearing aids don't provide enough benefit. In particular, speech in noise is still a critical problem, even for the most sophisticated devices. The purpose of the "Clarity" challenges is to catalyse new work to radically improve the speech intelligibility provided by hearing aids.

The series of challenges will consider increasingly complex listening scenarios. The first round focusses on speech in indoor environments in the presence of a single interferer. It begins with a challenge on improving hearing aid processing; future challenges on modelling speech-in-noise perception will be launched at a later date.

*The task*

You will be provided with simulated scenes, each including a target speaker and interfering noise. For each scene, there will be signals that simulate those captured by a behind-the-ear hearing aid with 3 channels at each ear, and those captured at the eardrum without a hearing aid present. The target speech will be a short sentence and the interfering noise will be either speech or domestic appliance noise.

The task will be to deliver a hearing aid signal processing algorithm that can improve the intelligibility of the target speaker for a specified hearing-impaired listener. Initially, entries will be evaluated using an objective speech intelligibility measure (a minimal sketch of this style of scoring follows the list below). Subsequently, up to twenty of the most promising systems will be evaluated by a panel of listeners.

We will provide a baseline system so that teams can choose to focus on individual components or to develop their own complete pipelines.

*What will be provided*

- Evaluation of the best entries by a panel of hearing-impaired listeners.
- Speech + interferer scenes for training and evaluation.
- An entirely new database of 10,000 spoken sentences.
- Listener characterisations including audiograms and speech-in-noise testing.
- Software including tools for generating training data, a baseline hearing aid algorithm, a baseline model of hearing impairment, and a binaural objective intelligibility measure.
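For illustration only, here is a minimal Python sketch of per-ear objective intelligibility scoring. It is not the challenge's official measure or file layout: the file names, the stereo channel layout, and the use of plain STOI (via the pystoi package) as a stand-in for the binaural measure that will ship with the official tools are all assumptions.

    # Hypothetical sketch: score a processed scene against its clean
    # reference, ear by ear, and return the better-ear score (a common
    # simplification of binaural intelligibility measures).
    import soundfile as sf      # pip install soundfile
    from pystoi import stoi     # pip install pystoi

    def better_ear_stoi(ref_path, proc_path):
        ref, fs_ref = sf.read(ref_path)    # assumed stereo: column 0 left, column 1 right
        proc, fs_proc = sf.read(proc_path)
        assert fs_ref == fs_proc, "signals must share a sample rate"
        n = min(len(ref), len(proc))       # trim to a common length before scoring
        scores = [stoi(ref[:n, ch], proc[:n, ch], fs_ref) for ch in (0, 1)]
        return max(scores)

    # Hypothetical file names for one simulated scene:
    # score = better_ear_stoi("S0001_target_anechoic.wav", "S0001_enhanced.wav")

Taking the better of the two ears is only a crude proxy: a true binaural measure also captures spatial unmasking effects that per-ear scoring misses, which is why the official tools will include a dedicated binaural objective intelligibility measure.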
Challenge and workshop participants will be invited to contribute to a journal Special Issue on the topic of Machine Learning for Hearing Aid Processing that will be announced next year.

*For further information*

If you are interested in participating and wish to receive further information, please sign up to the Clarity Forum at http://claritychallenge.org/sign-up-to-the-challenges
If you have questions, contact us directly at contact@xxxxxxxx

*Organisers*

Prof. Jon Barker, Department of Computer Science, University of Sheffield
Prof. Michael A. Akeroyd, Hearing Sciences, School of Medicine, University of Nottingham
Prof. Trevor J. Cox, Acoustics Research Centre, University of Salford
Prof. John F. Culling, School of Psychology, Cardiff University
Prof. Graham Naylor, Hearing Sciences, School of Medicine, University of Nottingham
Dr Simone Graetzer, Acoustics Research Centre, University of Salford
Dr Rhoddy Viveros Muñoz, School of Psychology, Cardiff University
Eszter Porter, Hearing Sciences, School of Medicine, University of Nottingham

*Funded by* the Engineering and Physical Sciences Research Council (EPSRC), UK

*Supported by* RNID (formerly Action on Hearing Loss), Hearing Industry Research Consortium, Amazon TTS Research, Honda Research Institute Europe

--
Professor Jon Barker,
Department of Computer Science,
University of Sheffield
+44 (0) 114 222 1824


This message came from the mail archive
src/postings/2021/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University