Subject: [AUDITORY] CfP ICRA 2019 Workshop on Sound Source Localization and Its Applications for Robots
From:    Inkyu An  <dksdlsrb89@xxxxxxxx>
Date:    Sun, 24 Mar 2019 17:30:28 +0900
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

ICRA 2019 Workshop on Sound Source Localization and Its Applications for Robots
https://sglab.kaist.ac.kr/SSL_ICRA19/

May 23, 2019
Montreal, Canada

Submission deadline: April 1, 2019

================================================================

Acoustic signals are ubiquitous, and both humans and animals use them
to understand their surroundings and to perform various interactions.
However, there is relatively little work in robotics on the use of
acoustic signals for scene understanding or analysis. There has been
some work over the last two decades on robot audition and machine
hearing, with the goal of improving human-robot interaction and scene
navigation. Thanks to recent advances in deep learning and acoustic
sensing, robots and computers have improved considerably at
understanding human speech and interpreting other acoustic signals.
However, these developments are mainly limited to direct sound signals
in simple settings (e.g., a living room) or to near-field speech
recognition.

In this context, we will cover the following topics:

- Interactive sound propagation technologies
- Localization applications for drones and underwater robots
- Deep learning techniques for multi-party interactions and simultaneous talkers
- Real-time localization techniques considering non-line-of-sight wave effects (diffraction)
- Joint optimization between SLAM and acoustic approaches
- Audio-motor localization, etc.
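As background for the localization topics above: a common building
block of microphone-array sound source localization is
time-difference-of-arrival (TDOA) estimation, and the minimal Python
sketch below shows one standard way to compute it, GCC-PHAT
(generalized cross-correlation with phase transform). The sample rate,
signals, and 25-sample delay are arbitrary toy values, not anything
specified by the workshop.

    import numpy as np

    def gcc_phat(sig, ref, fs, interp=16):
        """Estimate the delay of `sig` relative to `ref` (in seconds)
        using GCC-PHAT: whiten the cross-power spectrum so only the
        phase (i.e., the delay information) remains."""
        n = len(sig) + len(ref)                    # FFT size for linear (non-circular) correlation
        R = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
        R /= np.abs(R) + 1e-15                     # PHAT weighting: unit magnitude, phase only
        cc = np.fft.irfft(R, interp * n)           # upsampled cross-correlation
        max_shift = interp * n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift  # correlation peak = delay in upsampled samples
        return shift / float(interp * fs)

    # Toy usage: broadband noise reaching a second microphone 25 samples later.
    fs = 16000
    rng = np.random.default_rng(0)
    ref = rng.standard_normal(fs)                  # 1 s of noise at the reference mic
    sig = np.roll(ref, 25)                         # same signal, delayed by 25 samples
    print(f"estimated TDOA: {gcc_phat(sig, ref, fs) * 1e3:.3f} ms")  # ~1.563 ms

The PHAT weighting discards magnitude and keeps only phase, which
sharpens the correlation peak and makes it relatively robust to
reverberation; pairwise TDOAs from three or more microphones can then
be triangulated into a direction of arrival.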
Abstract submission

We invite participants to submit extended abstracts (max 2 pages)
related to the workshop's topics of interest. Accepted submissions are
expected to be presented in a brief spotlight talk (2-3 min) and a
subsequent poster session (about 20-30 min). Accepted contributions
will be posted on the workshop website, but they will not appear in
the official IEEE proceedings. Contributors may also request that
their manuscripts not appear on the website. Reviewing will be done by
the organizers and the invited speakers of the workshop. Manuscripts
should use the IEEE ICRA two-column format.

Please submit a PDF copy of your paper via email to sungeui@xxxxxxxx

Important Dates

Paper submission deadline: April 1, 2019
Author notification: April 15, 2019
Camera-ready submission: May 6, 2019
Workshop date: May 23 or 24, 2019

Invited speakers

- Dinesh Manocha, IEEE/ACM/AAAI Fellow, Professor at Univ. of Maryland, USA
- Hiroshi G. Okuno, IEEE Fellow, Waseda Univ., Japan
- Frank Dellaert, Professor, Georgia Tech (formerly at Facebook), USA
- Sung-eui Yoon, IEEE/ACM Senior Member, KAIST, Korea
- Mandar Chitre, National University of Singapore, Editor-in-Chief of the IEEE Journal of Oceanic Engineering, Singapore
- Patrick Danes, Professor at Université de Toulouse and LAAS-CNRS, France
- Jean-Marc Odobez, Professor at Université du Maine and Idiap Research Institute/EPFL, France
- Kazuhiro Nakadai, IEEE Senior Member, Principal Researcher at Honda Research Institute, and Professor at Tokyo Institute of Technology, Japan

Workshop organizers

Main organizer

Sung-eui Yoon
Associate Professor, School of Computing (E3-1), KAIST
https://sgvr.kaist.ac.kr/~sungeui/

Co-organizer

Dinesh Manocha
Paul Chrisman Iribe Professor of Computer Science and Professor of
Electrical and Computer Engineering, Univ. of Maryland, College Park
https://www.cs.umd.edu/people/dmanocha

-- 
Inkyu An
Scalable Graphics, Vision and Robotics (SGVR) Lab, KAIST

Homepage: https://sglab.kaist.ac.kr/~ikan/
Mobile: +82-10-4730-0006

