comparing cochlear models? (John Bates)


Subject: comparing cochlear models ?
From:    John Bates  <jkbates@xxxxxxxx>
Date:    Thu, 20 May 2010 20:37:15 -0400
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear Emad,

The only criterion that makes sense to me is that a cochlear model should replicate well-known psychoacoustic experiments without having to apply "special adjustments" for each experiment. For example, the two-tone interference experiments such as the missing fundamental, combination tones, masking, etc. should all be explainable by the same model. Likewise for many other phenomena, such as rippled noise, whispered speech, real-time pitch detection, or detection of tones without periodic repetition, none of which should require exotic computations. Furthermore, the model should be capable of separating sounds in terms of awareness and attention.

As far as I know, no current model comes anywhere close to doing even one of these things. So you have an open field to pursue.

Here's a hint: Helmholtz was wrong, Seebeck was right.

Best regards,

John Bates

----- Original Message -----
From: emad burke
To: AUDITORY@xxxxxxxx
Sent: Wednesday, May 19, 2010 9:04 AM
Subject: comparing cochlear models ?

Dear list,

I am trying to find a metric by which I can compare different cochlear models. In other words, I need a "quantitative" metric on which there is a consensus in the whole community and which is widely accepted. Of course, I don't mean a metric like "how biologically plausible a cochlear model is," since I don't think that is quantifiable. As a simple example: if you were going to compare Dick Lyon's traditional cochlear model with the one I'm developing myself, how am I supposed to compare them and conclude which one is superior?
Best Regards,
Emad
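[Archive editor's note: the missing-fundamental point behind the "Helmholtz vs. Seebeck" hint can be shown in a few lines of NumPy. This is a minimal sketch, not part of the original exchange; the parameters (a 200 Hz fundamental represented only by harmonics 2 through 5, sampled at 16 kHz) are arbitrary choices for illustration. The magnitude spectrum, the place cue in Helmholtz's spirit, contains essentially no energy at 200 Hz, yet the waveform's autocorrelation, a periodicity cue in Seebeck's spirit, still peaks at the 200 Hz period.]

```python
import numpy as np

# Harmonic complex with a missing fundamental: harmonics 2..5 of f0 = 200 Hz.
# (Illustrative parameters; any harmonic set sharing a 200 Hz period behaves
# the same way.)
fs = 16000          # sample rate, Hz
f0 = 200            # missing fundamental, Hz
n = 8000            # 0.5 s of signal
t = np.arange(n) / fs
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))

# Place cue: magnitude spectrum. With fs/n = 2 Hz resolution, 200 Hz falls on
# an exact FFT bin, and that bin is essentially empty.
spectrum = np.abs(np.fft.rfft(x))
bin_f0 = int(f0 * n / fs)         # bin 100 -> 200 Hz
bin_2f0 = int(2 * f0 * n / fs)    # bin 200 -> 400 Hz
print(spectrum[bin_f0] < 0.001 * spectrum[bin_2f0])   # True: no f0 energy

# Periodicity cue: the autocorrelation still peaks at the f0 period
# (fs/f0 = 80 samples), because all the harmonics share that period.
r = np.correlate(x, x, mode="full")[n - 1:]
lag = np.argmax(r[20:400]) + 20   # skip the zero-lag lobe
print(lag)                        # 80, i.e. fs / f0
```

A periodicity detector reading the autocorrelation peak reports a 200 Hz pitch even though no cochlear place channel tuned to 200 Hz receives any energy.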


This message came from the mail archive
/home/empire6/dpwe/public_html/postings/2010/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University