We encounter a problem when trying to place sounds at virtual
positions in space by means of head-related transfer functions (HRTFs).
We use sounds from the IADS database (International Affective Digitized
Sounds, Bradley & Lang) as well as simple white noise of six seconds'
duration. We use the KEMAR HRTFs, specifically the "compact data" set
at zero elevation, and convolve the original sound data with the HRTF
data as suggested in the documentation. The resulting sounds are
presented over Beyerdynamic DT 770 headphones.
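For reference, the convolution step amounts to something like the
following minimal sketch in Python (assuming the 128-sample compact
HRIRs for the desired azimuth are already loaded as float arrays; the
function and variable names are illustrative, not our actual code):

    import numpy as np
    from scipy.signal import fftconvolve

    def spatialize(mono, hrir_left, hrir_right):
        # Convolve a mono signal with a left/right HRIR pair -> stereo.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        stereo = np.stack([left, right], axis=1)
        # Normalize to avoid clipping when writing 16-bit audio.
        stereo /= np.max(np.abs(stereo)) + 1e-12
        return stereo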
We have tested the precision with which our sounds are placed in virtual
space by presenting them to eight listeners. The listeners had a
touchscreen lying on their lap with a circle plotted on it, on which
they could indicate the direction from which they perceived the sound
to come. We presented a total of 144 sounds, 72 noises and 72 IADS
sounds, coming from 36 virtual directions (0°, 10°, 20°, ..., 350°) in
the horizontal plane.
The results are shown in a figure that I have put on the internet:
The red dots are from the IADS sounds, the yellow dots from the noises.
The x-axis shows the "true" (virtual) angle, the y-axis shows the
estimated angle. As can be seen in this figure, listeners could
discriminate well between sounds from the left and sounds from the
right, but not much more than that. The response variance is somewhat
reduced for sounds coming from 90° and from 270°, but within a
hemifield there is no correlation between the true and the estimated
angle.
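One detail worth noting for such an analysis: true and estimated angles
should be compared across the 0°/360° wrap, otherwise a naive
correlation on raw degree values can be misleading. A minimal sketch of
a wrap-safe error measure in Python (the function name and example
values are illustrative):

    import numpy as np

    def signed_error(true_deg, est_deg):
        # Smallest signed difference (estimated - true),
        # wrapped to the range [-180, 180).
        d = np.asarray(est_deg, dtype=float) - np.asarray(true_deg, dtype=float)
        return (d + 180.0) % 360.0 - 180.0

    # Example: a response of 350 deg to a 10 deg target is -20 deg off,
    # not -340 deg, once the wrap-around is handled.
    print(signed_error(10, 350))   # prints -20.0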
Now we are eager to learn from you: what could be the cause of this
result?
A) HRTFs are not better than that.
B) The headphones are inadequate.
C) It must be a programming error (we don't think so).
We are grateful for any help in identifying the possible cause of this
result.
Thank you very much in advance,
Christian-Albrechts-Universität zu Kiel