Re: [AUDITORY] synthesizing virtual movements of naturalistic sound sources (Brian FG Katz)


Subject: Re: [AUDITORY] synthesizing virtual movements of naturalistic sound sources
From:    Brian FG Katz  <brian.katz@xxxxxxxx>
Date:    Wed, 13 Oct 2021 10:33:46 +0200

Dear Valeriy,

If you are concerned with naturally perceived rendering of very near sources, like a bee buzzing around you, loudspeaker sources are not really feasible unless you are able to go to a WFS system (or NFC-HOA) with a high density of speakers.

Note that very near source positioning (within arm's reach) requires additional processing over and above simple HRTF convolution, unless a near-field HRTF dataset is available.

For very far distances, air attenuation is necessary over and above HRTF. This distance attenuation can be modelled, but should ideally be a function of atmospheric conditions, depending on how far away the source is. For detailed distant natural rendering, one may also need to account for general terrain acoustic properties.

For headphone rendering, we have made our research renderer publicly available as Anaglyph (http://anaglyph.dalembert.upmc.fr/), free for all use as a VST plug-in. It is geared very much towards realistic proximity rendering, with some basic far-distance attenuation.

As others have mentioned, you should be able to automate trajectories using various VST-supporting hosts (MATLAB even supports it now).

Cordially,
-Brian

--
Brian FG Katz, Research Director, CNRS
Groupe Lutheries - Acoustique – Musique
Sorbonne Université, CNRS, UMR 7190, Institut Jean Le Rond ∂'Alembert
http://www.dalembert.upmc.fr/home/katz

-----Original Message-----
From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxx> On Behalf Of Giso Grimm
Sent: Wednesday, 13 October 2021 09:44
To: AUDITORY@xxxxxxxx
Subject: Re: [AUDITORY] synthesizing virtual movements of naturalistic sound sources

Dear Valeriy,

in addition to the suggestion of Lorenzo Picinali, you may look at TASCAR - it is primarily made to simulate arbitrary movements in real time. It offers rendering methods for loudspeakers and an HRTF simulation.
Examples can be found on our lab YouTube channel: https://www.youtube.com/channel/UCAXZPzxbOJM9CM0IBfgvoNg
Installation instructions (currently Linux only) are at http://tascar.org/

Best,
Giso

On 12.10.21 16:33, Valeriy Shafiro wrote:
> Dear list,
>
> We are looking for ways to generate movement of naturalistic auditory
> sound sources, where we take a recorded sound (e.g. a car or a bee)
> and specify the direction, speed, and distance/range of its movement.
> Ideally, this would be something publicly available and open source,
> but if there is proprietary VR software that can do the job, it would
> be great to know as well. Any suggestions from experts or
> enthusiasts on the list are much appreciated!
>
> Thanks,
>
> Valeriy
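[Archive editor's note] Katz's point that distance attenuation "can be modelled" as spreading loss plus atmosphere-dependent air absorption can be sketched in a few lines. This is a minimal illustration only: the frequency-squared absorption coefficient and its constants are rough ballpark values assumed here, not the ISO 9613-1 model and not what Anaglyph or TASCAR actually implement.

```python
import math

def distance_attenuation_db(distance_m, freq_hz, ref_m=1.0,
                            alpha_db_per_m=None):
    """Total level drop in dB for a point source at distance_m,
    relative to ref_m: spherical spreading plus air absorption."""
    if alpha_db_per_m is None:
        # Crude air-absorption coefficient, illustrative only:
        # grows roughly with frequency squared (ballpark for ~20 C air).
        alpha_db_per_m = 1e-9 * freq_hz ** 2 + 1e-4
    spreading = 20.0 * math.log10(distance_m / ref_m)   # 1/r law
    absorption = alpha_db_per_m * (distance_m - ref_m)  # linear in path length
    return spreading + absorption
```

For a real application the coefficient should come from atmospheric conditions (temperature, humidity, pressure), as the email notes; the point of the sketch is only that high frequencies die off faster with distance, which is part of what makes far sources sound "far".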
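[Archive editor's note] Valeriy's request - take a recorded sound and specify direction, speed, and distance of its movement - amounts to generating a position trajectory over time and feeding it to whichever renderer (VST host automation, TASCAR scene, etc.). A minimal sketch of one such trajectory, a source circling the listener at constant speed like the bee example; the function name and default sample rate are illustrative, not from any of the tools mentioned:

```python
import math

def circular_trajectory(radius_m, speed_mps, duration_s, rate_hz=50.0):
    """Sample (x, y, z) positions for a source orbiting the listener
    at constant tangential speed in the horizontal plane."""
    omega = speed_mps / radius_m  # angular velocity in rad/s
    n = int(duration_s * rate_hz)
    return [(radius_m * math.cos(omega * i / rate_hz),
             radius_m * math.sin(omega * i / rate_hz),
             0.0)
            for i in range(n)]
```

Each sampled position would then be converted to whatever the renderer expects (e.g. azimuth/distance automation lanes in a VST host, or object coordinates in a scene description).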


This message came from the mail archive
src/postings/2021/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University