

Subject: Re: [AUDITORY] synthesizing virtual movements of naturalistic sound sources
From:    Giso Grimm  <g.grimm@xxxxxxxx>
Date:    Wed, 13 Oct 2021 09:44:21 +0200

Dear Valeriy,

in addition to the suggestion of Lorenzo Picinali you may look at TASCAR - it is primarily made to simulate arbitrary movements in real-time. It offers rendering methods for loudspeakers and an HRTF simulation. Examples can be found on our lab YouTube channel:
https://www.youtube.com/channel/UCAXZPzxbOJM9CM0IBfgvoNg
Installation instructions (currently Linux only) are on http://tascar.org/

Best,
Giso

On 12.10.21 16:33, Valeriy Shafiro wrote:
> Dear list,
>
> We are looking for ways to generate movement of naturalistic auditory
> sound sources, where we take a recorded sound (e.g. a car or a bee) and
> specify direction, speed and distance/range of its movement. Ideally,
> this would be something publicly available and open source, but if
> there is proprietary VR software that can do the job, it would be great
> to know as well. Any suggestions from experts or enthusiasts on the
> list are much appreciated!
>
> Thanks,
>
> Valeriy
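(For reference, the core processing that a full renderer such as TASCAR automates can be approximated in a few lines: a time-varying propagation delay produces the Doppler shift of a moving source, and a 1/r gain produces the distance cue. The sketch below is not TASCAR code and not its API; it is a minimal, illustrative Python example under assumed conditions - a mono WAV input, a made-up straight-line trajectory past a listener at the origin, and the numpy and soundfile packages available. File names and parameters are hypothetical.)

    # Minimal sketch (not TASCAR): move a recorded mono sound along a straight
    # line past a listener, applying 1/r distance attenuation and a Doppler
    # shift via a time-varying propagation delay.
    import numpy as np
    import soundfile as sf   # assumption: the python-soundfile package is installed

    C = 343.0  # speed of sound in m/s

    def render_moving_source(infile, outfile,
                             start_xy=(-20.0, 2.0),    # start 20 m to the left, 2 m in front
                             velocity_xy=(10.0, 0.0)): # moving right at 10 m/s
        x, fs = sf.read(infile)
        if x.ndim > 1:
            x = x.mean(axis=1)              # fold to mono
        t = np.arange(len(x)) / fs          # emission times
        # source position over time (listener at the origin)
        px = start_xy[0] + velocity_xy[0] * t
        py = start_xy[1] + velocity_xy[1] * t
        r = np.hypot(px, py)                # source-listener distance
        # each sample arrives delayed by r/c; resampling onto a uniform
        # arrival-time grid yields the Doppler shift
        t_arrival = t + r / C
        t_out = np.arange(t_arrival[0], t_arrival[-1], 1.0 / fs)
        gain = 1.0 / np.maximum(r, 1.0)     # 1/r attenuation, clipped inside 1 m
        y = np.interp(t_out, t_arrival, x * gain)
        sf.write(outfile, y / np.max(np.abs(y)), fs)

    # Hypothetical usage: a car recording passing from left to right
    # render_moving_source("car.wav", "car_moving.wav")

This only gives a monophonic distance/Doppler rendering; directional cues for headphones or loudspeakers would still need an HRTF or panning stage, which is exactly what tools like TASCAR provide.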

