Announcing the 2nd CHiME Speech Separation and Recognition Challenge (jon )


Subject: Announcing the 2nd CHiME Speech Separation and Recognition Challenge
From:    jon  <j.barker@xxxxxxxx>
Date:    Fri, 20 Jul 2012 13:10:07 +0100
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

[with apologies for cross-postings]

Dear Auditory list,

The CHiME challenge has been designed for the speech processing, source
separation and machine learning communities in particular, as an
opportunity to evaluate signal enhancement and recognition algorithms
in the hot field of distant-microphone speech recognition. Many teams
from different communities participated in the first edition and we hope
even more of you will participate this year!

In case you are not an expert in speech processing, a complete set of
speech recognition tools based on HTK is provided, so you may focus on
signal enhancement and simply run the baseline recogniser to derive
speech transcripts in the end.

      ----------------------------------------------

  2nd CHiME Speech Separation and Recognition Challenge
          Supported by IEEE Technical Committees

                Deadline: January 15, 2013
        Workshop: June 1, 2013, Vancouver, Canada

       http://spandh.dcs.shef.ac.uk/chime_challenge/

      ----------------------------------------------

Following the success of the 1st PASCAL CHiME Speech Separation and
Recognition Challenge, we are happy to announce a new challenge
dedicated to speech recognition in real-world reverberant, noisy
conditions, which will culminate in a dedicated satellite workshop of
ICASSP 2013.

The challenge is supported by several IEEE Technical Committees and by
an Industrial Board.

FEATURED TASKS

The challenge consists of recognising distant-microphone speech mixed in
two-channel nonstationary noise recorded over a period of several weeks
in a real family house.
Entrants may address either one or both of the following tracks:

Medium vocabulary track: WSJ 5k sentences uttered by a static speaker

Small vocabulary track: simpler commands but small head movements

TO ENTER

You will find everything you need to get started (and even more) on the
challenge website:
- a full description of the challenge,
- clean, reverberated and multi-condition training and development data,
- baseline training, decoding and scoring software tools based on HTK.

Submission consists of a 2- to 8-page paper describing your system and
reporting its performance on the development and test sets. In addition,
you are welcome to submit an earlier paper to ICASSP 2013, which will
tentatively be grouped with other papers into a dedicated session.

Any approach is welcome, whether emerging or established.

If you are interested in participating, please email us so we can
monitor interest and send you further updates about the challenge.

BEST CHALLENGE PAPER AWARD

The best challenge paper will be distinguished by an award from the
Industrial Board.

IMPORTANT DATES

July 2012          Launch
October 2012       Test set release
January 15, 2013   Challenge & workshop submission deadline
February 18, 2013  Paper notification & release of the challenge results
June 1, 2013       ICASSP satellite workshop

INDUSTRIAL BOARD

Masami Akamine, Toshiba
Carlos Avendano, Audience
Li Deng, Microsoft
Erik McDermott, Google
Gautham Mysore, Adobe
Atsushi Nakamura, NTT
Peder A. Olsen, IBM
Trausti Thormundsson, Conexant
Daniel Willett, Nuance

WORKSHOP SPONSORS

Conexant Systems Inc.
Audience Inc.
Mitsubishi Electric Corp.

ORGANISERS

Emmanuel Vincent, INRIA
Jon Barker, University of Sheffield
Shinji Watanabe & Jonathan Le Roux, MERL
Francesco Nesta & Marco Matassoni, FBK-IRST

--
Dr. Jon Barker, Department of Computer Science,
University of Sheffield, Sheffield, S1 4DP, UK
Phone: +44-(0)114-22 21824 FAX: +44-(0)114-222 1810
Email: j.barker@xxxxxxxx  http://www.dcs.shef.ac.uk/~jon
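For readers outside the speech community: recognition results for tasks like these are conventionally scored by word error rate (WER), the word-level edit distance between the reference transcript and the recogniser's hypothesis, divided by the reference length. The sketch below is not part of the challenge's HTK scoring tools; it is only a minimal illustration of the metric.

```python
# Minimal word error rate (WER) sketch -- an illustration of the standard
# metric, NOT the challenge's HTK-based scoring software.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,               # substitution (or match)
                          d[i - 1][j] + 1,   # deletion
                          d[i][j - 1] + 1)   # insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, one substituted word in a three-word reference yields a WER of 1/3.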


This message came from the mail archive
/var/www/postings/2012/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University