[AUDITORY] CFP - Audio and Acoustics of Movement (Ante Jukić)


Subject: [AUDITORY] CFP - Audio and Acoustics of Movement
From:    Ante Jukić
Date:    Tue, 16 May 2023 16:40:43 -0700

Dear all,

Please consider submitting your research to our research topic "Audio and Acoustics of Movement" in Frontiers in Signal Processing.

You can find the Call for Papers in this email (below) or at the following link:
https://www.frontiersin.org/research-topics/51606/audio-and-acoustics-of-movement

We are looking for original research, but also accept reviews/mini-reviews, brief research reports, and perspectives. To participate, please use the link above or reach out directly.

Important dates:
- Abstract submission: July 9, 2023
- Manuscript submission: November 20, 2023

Please forward this to anyone interested, and do not hesitate to reach out if you have any questions.

Thanks,
[Topic Editors]
Thomas Dietzen (KU Leuven)
Ina Kodrasi (Idiap Research Institute)
Ante Jukić (NVIDIA)

---

*AUDIO AND ACOUSTICS OF MOVEMENT*

While the state of the art in audio signal processing often assumes a spatially stationary environment, in practice we are often confronted with spatially dynamic sound scenes in which moving objects emit or receive sound. This may concern the movement of human speakers and listeners, moving audio devices in wireless acoustic sensor networks, autonomous vehicles equipped with microphones or loudspeakers such as robots or drones, or noise emitted by moving vehicles, as in road or air traffic. In many processing applications, the movement of acoustic sources or receivers poses challenges, while in others it may be explicitly exploited, as it provides additional spatio-temporal cues as an alternative to discrete spatial sampling.

Movement affects audio signals in various spatial, temporal, and spectral respects. When spatial distances vary, propagation delays become channel- and time-dependent. On the one hand, this affects the time-differences of arrival in multi-channel audio signals, which are commonly used to obtain a spatial interpretation of the scene. On the other hand, one also observes spectro-temporal effects, such as the frequency shifts of the well-known Doppler effect. Further, with regard to statistical processing, time-dependent time and frequency shifts call into question the common practice of implementing the expectation operator by averaging over time.

With this Research Topic, we promote the development of movement-aware signal and system models and processing algorithms, that is, approaches that explicitly address the time-variant nature of sound propagation paths caused by the relative movement of acoustic sources and receivers, as well as its spectro-temporal and statistical impact. This covers a wide range of problems, including spatial estimation, signal enhancement, system identification, room acoustic information retrieval, and signal generation. Further, to enable reproducible and comparative research in the context of movement, we promote the development of relevant databases.

Submissions considered relevant to this Research Topic include, but are not limited to, contributions in the following fields:

*Dynamic spatial estimation*
- Modeling of movement trajectories
- Acoustic trajectory estimation and location tracking
- Acoustics-based simultaneous localization and mapping (SLAM)
- Echolocation of moving objects

*Signal enhancement under movement*
- Enhancement of speech, noise reduction, and dereverberation
- Movement-robust intrusive performance measures such as quality, intelligibility, and power metrics

*Identification of spatially dynamic systems*
- Time-varying room impulse response models
- Feedback and echo cancellation

*Movement-based room acoustic information retrieval*
- Room acoustic feature estimation
- Room acoustic measurement protocols
- Room geometry estimation

*Spatially dynamic signal generation*
- Movement-aware reverberation models
- Accurate simulation of spatially dynamic acoustic scenes
- Auralization of movement

*Acoustic databases involving movement*

*IMPORTANT DATES*
Abstract Submission Deadline: 09 July 2023
Manuscript Submission Deadline: 20 November 2023
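As a side note for readers less familiar with the effects the call describes, the following minimal Python sketch (not part of the call; all function names and parameter values are hypothetical) illustrates the two phenomena mentioned above: the time-varying propagation delay of a moving source and the resulting Doppler frequency shift for a static receiver.

```python
# Illustrative sketch of the effects described in the call, under the
# standard textbook assumptions: free-field propagation, static receiver,
# speed of sound C = 343 m/s. Names and values here are hypothetical.

C = 343.0  # speed of sound in air (m/s)

def propagation_delay(source_pos, mic_pos):
    """Delay (s) for sound to travel from a source position to a microphone.
    As the source moves, this delay becomes time-dependent."""
    dist = sum((s - m) ** 2 for s, m in zip(source_pos, mic_pos)) ** 0.5
    return dist / C

def doppler_shifted_frequency(f_emitted, radial_velocity):
    """Observed frequency for a source moving with the given radial velocity
    toward a static receiver (positive = approaching)."""
    return f_emitted * C / (C - radial_velocity)

# A 440 Hz source approaching a microphone at 20 m/s is heard
# slightly above 440 Hz; a receding source (negative velocity) below it.
f_obs = doppler_shifted_frequency(440.0, 20.0)
```

The channel dependence mentioned in the call follows directly: evaluating `propagation_delay` for two microphones at different positions yields two different delays, and their difference is the time-difference of arrival that spatial estimation methods exploit.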


This message came from the mail archive
src/postings/2023/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University