
Re: HRTF failure

Dear Christian,

We are also using generic HRTFs to virtually arrange sounds in space.

Before beginning our experiments, we checked that the generic-HRTF approach was acceptable by running preliminary experiments like the one you described. The main difference is that we only used sounds in the frontal hemispace, between -90° (= 270°) and +90° of azimuth.

The results were quite accurate, but we noted two systematic tendencies:
- an overestimation of eccentricity for sounds at azimuths in [-50°; 50°] (a sound presented at a virtual azimuth of 30° is perceived at, say, 40°);
- an underestimation of eccentricity for extreme positions ([-70°; -90°] and [70°; 90°]), e.g. a -85° azimuth is perceived as "-75°".

These effects often go uncommented but are visible in figures plotted in various published papers (generally papers with a small number of trials and without intensive training).

Our participants were quite precise (in some experiments we obtained a standard deviation < 3° in pointing tasks).

You used sounds in the back hemispace. I don't know whether this is an important requirement for you, but it is known to produce front/back confusions (some are visible in your plot, I think).

To reduce front/back confusions you might consider (see Begault et al. (2001)):
- virtually installing your participants in a "real room" with some reverberation, which helps resolve front/back confusions (simulating at least first-order reflections);
- using real-time head-movement tracking (which implies that you have a motion-tracking system);
- choosing a more "compatible" HRTF for each of your subjects (you would run a prior experiment to pick the best HRTF from several candidate sets instead of only using the MIT KEMAR one).
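The last suggestion (picking the best-fitting HRTF set per subject from a localization pre-test) can be sketched as follows. This is only a minimal illustration of the selection logic, not the procedure from Begault et al.; the function names and data layout are my own assumptions:

```python
import numpy as np

def angular_error(true_az, judged_az):
    """Smallest absolute difference between two azimuths, in degrees
    (wraps around the circle, so 350 deg vs 10 deg gives 20 deg)."""
    d = (np.asarray(judged_az) - np.asarray(true_az)) % 360
    return np.minimum(d, 360 - d)

def best_hrtf(results):
    """results maps an HRTF-set name to a pair of arrays
    (true azimuths, judged azimuths) from the pre-test.
    Returns the set with the lowest mean angular error."""
    scores = {name: angular_error(t, j).mean()
              for name, (t, j) in results.items()}
    return min(scores, key=scores.get)

# Hypothetical pre-test data for two candidate HRTF sets:
choice = best_hrtf({"setA": ([0, 90], [10, 80]),
                    "setB": ([0, 90], [40, 140])})  # -> "setA"
```

The wrap-around in `angular_error` matters: without it, a judgment of 350° for a 10° target would be scored as a 340° error instead of 20°.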

I have never obtained accurate pointing responses in tasks involving front and back sounds in static situations (without head-movement tracking).

I hope this helps, and I am curious about other advice on this topic.

Sylvain Clément

Neuropsychology & Auditory Cognition Team
Lille, France

On 14 Nov 2008, at 11:43, Christian Kaernbach wrote:

Dear List,

We encounter a problem when trying to place a sound at a virtual position in space by means of head related transfer functions (HRTF).

We use sounds from the IAPS database (International Affective Digitized Sounds System; Bradley & Lang) as well as simple white noise of six seconds' duration. We use the KEMAR HRTFs, the "compact data" with zero elevation. We convolve the original sound data with the HRTF data as suggested in the documentation. The final sounds are presented over Beyerdynamic DT 770 headphones.
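For reference, the convolution step can be sketched in Python. This is a minimal sketch under the assumption of one mono signal and one left/right HRIR pair per azimuth; the HRIRs below are dummy placeholders (the KEMAR "compact" set provides 128-tap impulse responses per ear at 44.1 kHz), so file loading is omitted:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair to place
    it at the azimuth at which the HRIRs were measured."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    # Normalize to avoid clipping after convolution.
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 0 else stereo

fs = 44100
noise = np.random.randn(fs)        # 1 s of white noise
hl = np.zeros(128); hl[0] = 1.0    # placeholder left-ear HRIR (identity)
hr = np.zeros(128); hr[10] = 0.5   # placeholder right-ear HRIR (delayed, attenuated)
out = spatialize(noise, hl, hr)    # shape (fs + 127, 2)
```

Note that the output is longer than the input by (HRIR length - 1) samples, which matters if sounds are concatenated or looped afterwards.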

We tested the precision with which our sounds are placed in virtual space by presenting them to eight listeners. The listeners had a touchscreen lying on their lap with a circle plotted on it, and they could indicate the direction from which they perceived the sound to come. We presented a total of 144 sounds, 72 noises and 72 IAPS sounds, coming from 36 virtual directions (0°, 10°, 20°, ...) in randomized order.

The results are shown in a figure that I put on the internet:
The red dots are from IAPS sounds, the yellow dots are from the noises. The x-axis shows the "true" (virtual) angle, the y-axis the estimated angle. As can be seen in this figure, listeners could discriminate well between sounds from the left and sounds from the right, but not much more than that. There is a certain reduction of variance for sounds coming from 90° and from 270°, but there is no correlation with angle within one hemifield.
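One way to quantify the front/back confusions in such data is to count responses that land nearer the mirror image of the target azimuth (reflected across the interaural axis) than the target itself. A minimal sketch; the tolerance value and function name are my own choices, not an established criterion:

```python
def is_front_back_confusion(true_az, judged_az, tol=30):
    """Classify a response as a front/back confusion when it is closer
    to the target's mirror image across the interaural axis than to the
    target itself, and within `tol` degrees of that mirror position.
    Azimuths in degrees: 0 = front, 90 = right, 180 = back."""
    mirror = (180 - true_az) % 360  # reflection across the interaural axis

    def err(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    return (err(judged_az, mirror) < err(judged_az, true_az)
            and err(judged_az, mirror) <= tol)
```

Targets lying on the interaural axis itself (90° and 270°) are their own mirror image, so no response at those azimuths can count as a confusion, which is consistent with the reduced variance you observe there.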

Now we are eager to learn from you: What could be the cause for this failure?

A) HRTFs are not better than that.
B) The headphones are inadequate.
C) It must be a programming error (we don't think so).
D) ....

We are grateful for any help in interpreting the possible cause for this failure.

Thank you very much in advance,

Christian Kaernbach
Christian-Albrechts-Universität zu Kiel