Re: [AUDITORY] synthesizing virtual movements of naturalistic sound sources (Matthias Geier)


Subject: Re: [AUDITORY] synthesizing virtual movements of naturalistic sound sources
From:    Matthias Geier  <matthias.geier@xxxxxxxx>
Date:    Wed, 13 Oct 2021 21:05:59 +0200

On Wed, Oct 13, 2021 at 6:08 AM Valeriy Shafiro wrote:
>
> Dear list,
>
> We are looking for ways to generate movement of naturalistic auditory
> sound sources, where we take a recorded sound (e.g. a car or a bee) and
> specify direction, speed and distance/range of its movement. Ideally,
> this would be something publicly available and open source, but if there
> is proprietary VR software that can do the job, it would be great to
> know as well. Any suggestions from experts or enthusiasts on the list
> are much appreciated!

If you are not afraid of highly experimental (and unfinished) software,
you can try the next generation of the Audio Scene Description Format
(ASDF), which I'm currently working on.

It allows you to define 3D trajectories with a (somewhat) simple HTML syntax.
For documentation, see https://AudioSceneDescriptionFormat.readthedocs.io/

The format is independent of the playback software.
One way to listen to the sound scene is via a Pure Data external I've
implemented as part of the reference implementation:
https://github.com/AudioSceneDescriptionFormat/asdf-rust/tree/master/pure-data

Another possibility is to use the SoundScape Renderer (SSR,
http://spatialaudio.net/ssr/), but this is even more experimental.
Here you'll have to check out a certain branch
(https://github.com/SoundScapeRenderer/ssr/pull/155) which enables ASDF
support.

cheers,
Matthias
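[Editor's note: as a rough illustration of what "specifying direction, speed and distance" of a recorded source involves at the signal level, here is a minimal NumPy sketch, independent of ASDF or SSR. It renders a mono recording as a source moving on a straight line past a listener at the origin, using a time-varying fractional delay per ear (which produces Doppler shift and interaural time differences) plus 1/r distance attenuation. The function name, the two-ear geometry, and all parameter values are illustrative assumptions, not part of any tool mentioned above.]

```python
import numpy as np

def render_moving_source(x, fs, start, velocity, ear=0.09, c=343.0):
    """Render mono signal x (array) as a source moving on a straight line.

    start:    (x, y) source position in metres at t = 0
    velocity: (vx, vy) in m/s
    ear:      half the inter-ear distance in metres (listener at origin,
              ears on the x axis) -- an illustrative assumption
    c:        speed of sound in m/s

    Returns an (N, 2) stereo array with per-ear time-varying delay
    (Doppler + ITD) and 1/r distance attenuation.
    """
    n = np.arange(len(x))
    t = n / fs
    px = start[0] + velocity[0] * t        # source trajectory
    py = start[1] + velocity[1] * t
    out = np.zeros((len(x), 2))
    for ch, ex in enumerate((-ear, ear)):  # left ear, right ear
        r = np.hypot(px - ex, py)          # distance source -> ear
        emit = n - r / c * fs              # fractional read position
        i = np.floor(emit).astype(int)
        frac = emit - i
        valid = (i >= 0) & (i < len(x) - 1)
        y = np.zeros(len(x))
        y[valid] = ((1 - frac[valid]) * x[i[valid]]
                    + frac[valid] * x[i[valid] + 1])   # linear interpolation
        out[:, ch] = y / np.maximum(r, 0.1)            # 1/r attenuation
    return out
```

For example, a 440 Hz tone rendered with `start=(-20, 2)` and `velocity=(40, 0)` passes closest to the listener after half a second, getting louder on approach and dropping in pitch as it recedes. Dedicated tools such as the ASDF renderers above add proper HRTF-based spatialisation and smooth trajectory splines on top of this basic idea.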


This message came from the mail archive
src/postings/2021/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University