
1. HRTF failure (4)



Dear Chris,
I think the problems you reported with localization performance using HRTFs could be linked to the following factors:
- The HRTFs were not individualized: you said you used the KEMAR HRTFs (personally, I don't like that database at all...), so they are not at all customized to the external hearing system of the individuals you tested. This could be solved by measuring the HRTFs for each individual you are testing (probably a bit too expensive and time-consuming), or perhaps by using the Duda and Algazi (CIPIC) or the IRCAM LISTEN HRTF databases. In these databases, HRTFs have been measured from many different individuals, and data about the external ear shapes and dimensions are delivered with the database, so you could try to match the pinna shape of the subject you are testing to that of one of the measured individuals (this could even be done with simple localization tests...).
- You don't have any head-tracking system to follow the movements of the listener's head and rotate the virtual soundscape accordingly (roll-pitch-yaw). There are various publications on this (JAES and JASA).
- A headphone calibration stage could be very important, in order to eliminate the nonlinearities (in terms of frequency response) of the headphones, and to use an inverse filter to remove the filtering introduced by the second passage over the pinna (only if you are using circumaural or supra-aural headphones).
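As a concrete sketch of that last point, one common way to build such an equalization filter is regularized frequency-domain inversion of a measured headphone impulse response. The toy response h and the regularization constant beta below are purely illustrative, not a real measured headphone response:

```python
import numpy as np

def inverse_filter(h, n_fft=1024, beta=1e-2):
    """Regularized frequency-domain inverse of an impulse response h.
    beta keeps deep notches in the measured response from blowing up
    the inverse (a plain 1/H inverse would boost them enormously)."""
    H = np.fft.rfft(h, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)
    return np.fft.irfft(H_inv, n_fft)

# Toy "headphone" response: a direct impulse plus one mild reflection.
h = np.zeros(256)
h[0], h[40] = 1.0, 0.3
g = inverse_filter(h)
eq = np.convolve(h, g)  # response after equalization: nearly flat
```

Convolving the HRTF-processed signal with g (one filter per ear, each derived from that ear's measured response) then removes most of the headphone coloration.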
I hope this helps!
Yours
Lorenzo Picinali


--
Lorenzo Picinali
Music, Technology and Innovation Research Centre
0116 2551551, internal 6770
Clephan Building CL0.19
De Montfort University
Leicester




-----Original Message-----
From: AUDITORY - Research in Auditory Perception on behalf of AUDITORY automatic digest system
Sent: Sat 15/11/2008 05:08
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: AUDITORY Digest - 13 Nov 2008 to 14 Nov 2008 (#2008-232)
 
There are 10 messages totalling 685 lines in this issue.

Topics of the day:

  1. HRTF failure (4)
  2. AUDITORY Digest - 11 Nov 2008 to 12 Nov 2008 (#2008-230)
  3. Request of paper "Implementing a gammatone filterbank" by Patterson et al.
     (2)
  4. any help please? (3)

----------------------------------------------------------------------

Date:    Fri, 14 Nov 2008 11:43:50 +0100
From:    Christian Kaernbach <auditorylist@xxxxxxxxxxxx>
Subject: HRTF failure

Dear List,

We encounter a problem when trying to place a sound at a virtual
position in space by means of head-related transfer functions (HRTF).

We use sounds from the IAPS database (International Affective Digitized
Sounds System, Bradley & Lang) as well as simple white noise of six
seconds duration. We use the Kemar HRTF, the "compact data" with zero
elevation. We convolve the original sound data with the HRTF data as
suggested in the documentation. The final sounds are presented using
Beyer Dynamic DT770 headphones.
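For reference, the convolution step amounts to something like this minimal Python sketch; the noise and the 128-tap "HRIRs" below are random stand-ins, not the actual KEMAR data (the KEMAR compact set supplies one measured impulse-response pair per azimuth):

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair, giving a
    two-column binaural signal (column 0 = left, column 1 = right)."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=-1)

# Stand-ins: 1 s of white noise, and windowed random "HRIRs".
rng = np.random.default_rng(0)
noise = rng.standard_normal(44100)
hrir_l = rng.standard_normal(128) * np.hanning(128)
hrir_r = rng.standard_normal(128) * np.hanning(128)
stereo = spatialize(noise, hrir_l, hrir_r)  # shape (44227, 2)
```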

We have tested the precision with which our sounds are placed in
virtual space by presenting them to eight listeners. The listeners had
a touchscreen lying on their lap, with a circle plotted on it, and
they could indicate the direction from which they perceived the sound
to come. We presented to them in total 144 sounds, 72 noises and 72
IAPS sounds, coming from 36 virtual directions (0°, 10°, 20°...) in
randomized order.

The results are shown in a figure that I put on the internet:
   http://www.uni-kiel.de/psychologie/emotion/hrtf/direction.gif
The red dots are from IAPS sounds, the yellow dots are from the noises.
The x-axis shows the "true" (virtual) angle, the y-axis shows the
estimated angle. As can be seen in this figure, listeners could well
discriminate between sounds from the left and sounds from the right.
But not more than that. There is a certain reduction of variance for
sounds coming from 90° and from 270°, but there is no correlation with
angle within one hemifield.

Now we are eager to learn from you: What could be the cause of this
failure?

A) HRTFs are not better than that.
B) The headphones are inadequate.
C) It must be a programming error (we don't think so).
D) ...

We are grateful for any help in interpreting the possible cause of
this failure.

Thank you very much in advance,
Chris

--
Christian Kaernbach
Christian-Albrechts-Universität zu Kiel
Germany
www.kaernbach.de

------------------------------

Date:    Fri, 14 Nov 2008 08:42:54 -0600
From:    Jont Allen <jontalle@xxxxxxxx>
Subject: Re: AUDITORY Digest - 11 Nov 2008 to 12 Nov 2008 (#2008-230)

Richard,
Please look at:

http://hear.ai.uiuc.edu/public/AllenJengLevitt05.pdf

I believe the type of loss you're thinking of is well known, and
measurable using wideband pressure reflectance technology, which is a
variation on a wideband acoustic impedance measurement (not
tympanometry).

Jont Allen


AUDITORY automatic digest system wrote:
> There are 6 messages totalling 663 lines in this issue.
> 
> Topics of the day:
> 
>   1. Frequency dependent conductive loss (2)
...
> 
> ----------------------------------------------------------------------
> 
> Date:    Wed, 12 Nov 2008 09:11:14 -0000
> From:    Richard - UK <auditory@xxxxxxxxxxxxxx>
> Subject: Frequency dependent conductive loss
> 
> I have noticed that conductive losses can have unusual profiles.
> 
> For example, a significant low-frequency conductive loss can very
> steeply disappear at 2 kHz.
>
> Can anyone recommend an on-line PDF or similar which discusses
> conductive loss causes and profiles?
> 
> Thanks.

------------------------------

Date:    Fri, 14 Nov 2008 07:38:05 -0800
From:    "Richard F. Lyon" <DickLyon@xxxxxxx>
Subject: Re: HRTF failure

Christian,

If you interpret the pattern in your GIF plot as a pair of big "X"
patterns, you can see it represents primarily a front-back confusion.
This is very typical with headphone listening. There is lots of
literature on this particular difficulty and ways to improve it:
http://books.google.com/books?q=front-back-confusion

Dick

At 11:43 AM +0100 11/14/08, Christian Kaernbach wrote:
>We encounter a problem when trying to place a sound at a virtual
>position in space by means of head related transfer functions (HRTF).
>[...]

------------------------------

Date:    Fri, 14 Nov 2008 16:56:34 +0100
From:    Sylvain CLEMENT <sylvain.clement@xxxxxxxxxxxxxx>
Subject: Re: HRTF failure

Dear Christian,

We are also using generic HRTFs to virtually arrange sounds in space.

Before beginning our experiments, we checked that the "generic HRTF"
approach was acceptable by running some preliminary experiments like
the one you described. The principal difference is that we only used
sounds in the frontal hemispace (between -90° (= 270°) and 90° of
azimuth).

The results were quite accurate, but we noted a systematic tendency to:
- overestimate the eccentricity for sounds with azimuths in
  [-50°; 50°] (a sound presented at a virtual azimuth of 30° is
  perceived at, say, 40°), and
- underestimate the eccentricity for extreme positions ([-70°; -90°]
  and [70°; 90°]); e.g., a -85° azimuth is perceived as "-75°".

These effects often go uncommented, but they are present in figures
plotted in different published papers (in general in papers with a
small number of trials and without intensive training).

Our participants were quite precise (in some experiments we get a
standard deviation < 3° in pointing tasks).


You used sounds in the back hemispace. I don't know if it is an
important requirement for you, but this is known to produce front/back
confusions (some are visible in your plot, I think).

To reduce front-back confusion you might consider (see Begault et
al. (2001)):
- virtually installing your participants in a "real room" with some
  reverberation, which helps resolve F/B confusions (simulating at
  least first-order reflections);
- using real-time head-movement tracking (which implies that you have
  a motion-tracking system);
- choosing a more "compatible" HRTF for each of your subjects (you
  have to run a prior experiment in order to choose the best HRTF from
  several possibilities instead of only using the MIT KEMAR ones).
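The first of these suggestions can be sketched with the image-source method; a shoebox room with a single broadband reflection coefficient is the simplest version (the room size, positions, and coefficient below are illustrative assumptions):

```python
import numpy as np

def first_order_images(src, lis, room, c=343.0, refl=0.7):
    """Direct path plus the six first-order wall reflections in a
    shoebox room (image-source method). Returns a list of
    (delay_in_seconds, amplitude_gain) pairs."""
    src = np.asarray(src, float)
    lis = np.asarray(lis, float)
    d0 = float(np.linalg.norm(src - lis))
    paths = [(d0 / c, 1.0 / d0)]                 # direct sound, 1/r law
    for axis in range(3):                        # walls at 0 and room[axis]
        for wall in (0.0, float(room[axis])):
            img = src.copy()
            img[axis] = 2.0 * wall - img[axis]   # mirror source in the wall
            d = float(np.linalg.norm(img - lis))
            paths.append((d / c, refl / d))      # one bounce, one loss
    return paths

# 5 x 4 x 3 m room, source 3 m in front of the listener.
paths = first_order_images((1.0, 2.0, 1.5), (4.0, 2.0, 1.5), (5.0, 4.0, 3.0))
```

Each reflection would then be rendered through the HRTF for its own direction of arrival, delayed and attenuated accordingly.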

I never obtained accurate pointing responses in tasks involving front
and back sounds in static situations (without head-movement tracking).

I hope this can help, and I'm curious about other advice on this
topic.

Sylvain Clément

Neuropsychology & Auditory Cognition Team
Lille, France



On 14 Nov 2008, at 11:43, Christian Kaernbach wrote:

> We encounter a problem when trying to place a sound at a virtual
> position in space by means of head related transfer functions (HRTF).
> [...]

------------------------------

Date:    Fri, 14 Nov 2008 08:48:07 -0800
From:    Pierre Divenyi <pdivenyi@xxxxxxxxx>
Subject: Re: HRTF failure

Christian,

You don't mention including a headphone correction stage. Did you?

-Pierre

Christian Kaernbach wrote:
> Dear List,
>
> We encounter a problem when trying to place a sound at a virtual 
> position in space by means of head related transfer functions (HRTF).
>
> We use sounds from the IAPS database (International Affective 
> Digitized Sounds System, Bradley & Lang) as well as simple white noise 
> of six seconds duration. We use the Kemar HRTF, the "compact data" 
> with zero elevation. We convolve the original sound data with the HRTF 
> data as suggested in the documentation. The final sounds are presented 
> using Beyer Dynamic DT770 headphones.
>

------------------------------

Date:    Fri, 14 Nov 2008 09:46:33 -0800
From:    Arturo Camacho <acamacho@xxxxxxxxxxxx>
Subject: Request of paper "Implementing a gammatone filterbank" by Patterson et al.

Dear members of the list:

Could someone please provide me with a copy of the paper "Implementing
a gammatone filterbank", APU Report 2341, by R. Patterson et al.? I
have found in several papers/books a formula to compensate for the
differences in delay that exist between the different channels of a
gammatone filterbank, and they cite the aforementioned paper as the
origin of the formula. I would like to take a look at that paper to
learn more about the method.

Thanks,

Arturo

-- 
__________________________________________________

Arturo Camacho, PhD
Alumni
Computer and Information Science and Engineering
University of Florida

E-mail: acamacho@xxxxxxxxxxxx
Web page: www.cise.ufl.edu/~acamacho
__________________________________________________

------------------------------

Date:    Fri, 14 Nov 2008 11:03:45 -0800
From:    Roy Patterson <rdp1@xxxxxxxxx>
Subject: Re: Request of paper "Implementing a gammatone filterbank" by Patterson et al.

Arturo Camacho wrote:
> Can someone please facilitate me a copy of the paper "Implementing a
> gammatone filterbank", APU Report 2341 by R. Patterson et al? [...]

APU Report 2341 was written before the advent of electronic figures
and PDFs. I guess I should scan the paper and put it on our web page,
but I am in California at the moment, so it will be closer to
Christmas when that happens. The point the paper makes is as follows:

The fluid in the cochlea is incompressible, so when the cochlea is hit
by an impulse, all of the filters in the filterbank begin to respond
at the same instant. However, the bandwidth of the auditory filter
narrows as centre frequency decreases down the length of the cochlea.
The impulse response of a narrow filter takes longer to rise to its
maximum than that of a broad filter. So the delay you see in a
cochleogram is the rise-time delay of the filter due to the narrowing
of filter bandwidth. Thus, the delay is the delay of the maximum of
the gamma envelope of the filter, which is easy to calculate. The only
complication is that you do not actually see the envelope in the
output of the filter; you see the peaks of the carrier, and these
peaks shift a little relative to the peak of the envelope. But again,
it is easy to calculate where they will occur, given the equation for
the impulse response of the filter.
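In other words, for an order-n gammatone with bandwidth parameter b, the envelope t^(n-1) exp(-2 pi b t) peaks at t = (n-1)/(2 pi b). A small sketch of that envelope-delay calculation (using the Glasberg & Moore ERB formula and the conventional 1.019 bandwidth factor; treat the exact constants as assumptions):

```python
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth (Glasberg & Moore), in Hz."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def envelope_peak_delay(fc, order=4, b_factor=1.019):
    """Delay (s) of the envelope maximum of a gammatone impulse
    response t**(order-1) * exp(-2*pi*b*t), i.e. (order-1)/(2*pi*b)."""
    b = b_factor * erb(fc)
    return (order - 1) / (2.0 * np.pi * b)

# Narrow low-frequency filters peak later than broad high-frequency ones.
delays_ms = {fc: 1000.0 * envelope_peak_delay(fc)
             for fc in (100.0, 1000.0, 4000.0)}
```

Aligning each channel by subtracting its own envelope-peak delay (plus the small carrier-peak correction described above) is the compensation the formula provides.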

Regards Roy P

-- 
* ** *** * ** *** * ** *** * ** *** * ** *** *
Roy D. Patterson
Centre for the Neural Basis of Hearing
Department of Physiology, Development and Neuroscience
University of Cambridge
Downing Street, Cambridge, CB2 3EG

http://www.pdn.cam.ac.uk/cnbh/
phone: +44 (1223) 333819 office
fax:   +44 (1223) 333840 department
email	rdp1@xxxxxxxxx
   	

------------------------------

Date:    Fri, 14 Nov 2008 21:31:51 +0000
From:    Adrian Attard Trevisan <a.trevisan@xxxxxxxxx>
Subject: any help please?


Dear List

I'm searching for a dissertation by Dr Brahim Hamadicharef (or even
papers derived from it) for a new DSP compressor I am currently
working on.

*Artificial Intelligence-based Approach to Modeling of Pipe Organs*
Brahim Hamadicharef - Ph.D. Thesis (December 2005)
School of Computing, Communications and Electronics (SoCCE), University 
of Plymouth



Thanks
Adrian

Adrian Attard Trevisan
MSc Audiological Sciences Student
UCL Ear Institute
London







------------------------------

Date:    Fri, 14 Nov 2008 16:53:35 -0500
From:    "Harriet B. Jacobster, AuD" <hjacobster@xxxxxxx>
Subject: Re: any help please?


Have you tried any of the links on his website:
http://www.tech.plym.ac.uk/spmc/brahim/bhamadicharef.html

~~~~~~~~~~~~~~~~~~~~~
Harriet B. Jacobster, Au.D.
Board Certified in Audiology
hjacobster@xxxxxxx



Adrian Attard Trevisan wrote:
> Dear List
>
>  I'm searching for a dissertation by Dr Brahim Hamadicharef ( or even 
> papers derived from it) for a new DSP compressor I am currently 
> working on .
>
> *Artificial Intelligence-based Approach to Modeling of Pipe Organs*
> Brahim Hamadicharef - Ph.D. Thesis (December 2005)
> School of Computing, Communications and Electronics (SoCCE), 
> University of Plymouth
> [...]



------------------------------

Date:    Fri, 14 Nov 2008 21:59:08 +0000
From:    Adrian Attard Trevisan <a.trevisan@xxxxxxxxx>
Subject: Re: any help please?

I have been looking through those papers; it's just that it would be
interesting to have a look at that particular dissertation.
Thanks for the prompt help.

-Adrian


Harriet B. Jacobster, AuD wrote:
> Have you tried any of the links on his website:
> http://www.tech.plym.ac.uk/spmc/brahim/bhamadicharef.html
>
> ~~~~~~~~~~~~~~~~~~~~~
> Harriet B. Jacobster, Au.D.
> Board Certified in Audiology
> hjacobster@xxxxxxx
>
>
>
> Adrian Attard Trevisan wrote:
>> I'm searching for a dissertation by Dr Brahim Hamadicharef ( or even
>> papers derived from it) for a new DSP compressor I am currently
>> working on . [...]

------------------------------

End of AUDITORY Digest - 13 Nov 2008 to 14 Nov 2008 (#2008-232)
***************************************************************