Re: Question about latency in CI comprehension (Matt Winn)


Subject: Re: Question about latency in CI comprehension
From:    Matt Winn  <mwinn83@xxxxxxxx>
Date:    Wed, 10 Dec 2014 07:32:26 -0600
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Willem,

It was a pleasure to read about your experiences with your CI. The
intersection of CI use and expert knowledge in acoustics is a rarity, and
we are lucky to have you share your story.

I thought it might be good to add a cautionary note to this story before
we draw conclusions about cochlear function or brain function. The CI
processor transforms the signal into a series of compressed pulse trains
and, in doing so, discards a number of properties of the acoustic input.
So even though we can be clever and design experiments in which perception
of the acoustic signal could differentiate various auditory processes, we
are in many ways subordinate to the prerogative of the CI processor. In
other words, we cannot trust the perceived signal to be what we intended
it to be. This is especially true for a delicate temporal/spectral
interaction of the kind you described.

To make a simple analogy, imagine the pitfalls of drawing conclusions
about differences between your right eye and your left eye if color vision
in your right eye were tested on a black-and-white tube monitor from the
1940s and your left eye on an LCD HD monitor from 2014. Any conclusions
you drew from this test would really be statements about the apparatus,
not about the visual system itself. In my opinion, the same risks apply
when comparing a CI ear to an acoustic ear.

To your specific experiment: although your acoustic ear heard the
fundamental in the complex tone you created, your CI ear in fact never
heard the sines at all (just as your right eye never saw the color); it
heard whatever the processor generated to represent those tones. So in my
mind, you might not have been comparing apples to apples.

What some researchers do is gain control over the CI signal by bypassing
the clinical processor and instead using research interfaces (e.g. HEINRI,
NIC, BEPS+, BEDCS), where each element of stimulation is explicitly
controlled. Then you can at least be assured of what signal is being
delivered and be confident about the relationship between stimulus and
response. Other experimenters have more experience in this area and may
offer more eloquent descriptions of their approach.

Matt

On Tue, Dec 9, 2014 at 9:55 AM, Willem Christiaan Heerens
<heerens1@xxxxxxxx> wrote:

> Dear Tamás, Nathan and List,
>
> Tamás, you reported:
> "…while working with cochlear implants (CI) I often notice that even CI
> listeners with very good speech perception need some extra time (in
> comparison to normal-hearing listeners) to comprehend a spoken
> sentence…"
>
> I have no data on the relevant literature for this subject.
> But perhaps my study, since January of this year, of my experiences as
> an 'expert in the field' can be of value and interest to you, and maybe
> to others too.
>
> Since May 2013 I have had the Advanced Bionics Harmony CI in my left
> ear, which is deaf to 120 dB. [In January 2014 my Harmony equipment was
> replaced by the new-concept AB Naida.] In my right ear, with a 70 dB
> overall hearing loss, I have the Phonak Naida hearing aid, which can
> support to some extent the functioning of the AB CI.
>
> In my rehabilitation period it took me less than two weeks to reach a
> speech perception score that almost matches that of a normal-hearing
> person, even without seeing the speaker.
> My phoneme score was up to 90% with normal stimulation of my CI.
> Remarkably enough, my phoneme score drops a few percent when both
> devices 'cooperate'.
> But that holds only under better-than-normal quiet environmental
> conditions and when listening to a single speaker.
> As soon as the environment becomes more 'noisy', my hearing abilities
> degrade rapidly.
> When three or more people are talking more or less chaotically, I hear
> only a tremendously loud noise in which I can hardly distinguish a
> single word. My speech perception then drops to zero, and the latency
> for comprehending spoken sentences could be called infinite.
>
> Only when someone in such an auditory environment speaks loudly [almost
> screaming] near the microphone of my CI processor can I comprehend just
> under approximately 50% of the sentences.
> Far too low for a pleasant discussion.
>
> Listening to music – especially classical music – is for me far from
> joyful. Actually, the only aspect of music I experience almost normally
> is rhythm. Pitch perception, timbre, dynamic range and melody
> recognition are all really bad. Naming a single instrument out of what
> I hear with my CI is a hell of a job for me.
> What I notice in comparing my two hearing devices is that with my CI I
> hear all background noises, like traffic and cocktail-party rumble, at
> lower frequencies than I hear them with my normal hearing aid.
> Such experiences are reported in the literature as well, but more as an
> unclear and remarkable phenomenon.
>
> So, being a physicist, and with my research on cochlear functioning in
> mind – which earlier brought me to the statement that the normally
> functioning human hearing sense makes use of the sound energy stimulus
> in the cochlea, and not the sound pressure stimulus that everybody
> still assumes – I started a survey of what the CI processor software
> actually does with the incoming sound pressure stimulus.
> What I found – and please correct me if I am wrong – was, in a
> nutshell, that this stimulus is rectified for dynamic-behavior purposes
> in the different electrodes, and that there is no indication that the
> sound pressure stimulus is transformed into the sound energy stimulus,
> which in turn would be used in a frequency-selective way as the
> electrical stimulation of the electrode array.
>
> So I hypothesized that if I compose quite simple tone settings for
> listening to beat phenomena, I can study with the resulting sound
> fragments how I experience beats with my CI in comparison with my other
> hearing aid. They simply must sound different,
> because a beat phenomenon in the sound pressure domain is clearly
> different from the corresponding beat phenomenon in the sound energy
> domain.
>
> My most illustrative beat experiment is the following:
>
> I combined two tones of equal amplitude – 999.99983333 Hz and
> 1000.00016667 Hz – into a sound pressure stimulus.
> This combination results in a beating 1000 Hz stimulus with a beat
> period observed as having a duration of 3000 seconds.
> Actually, the complete beat period T is 6000 seconds, because the
> modulation function in the sum of the two sinusoidal contributions is a
> cosine with frequency equal to half the frequency difference of the two
> combined tones.
> Hence it equals cos(2π × 0.00016667 × t), which is cos(2π·t/6000). And
> the modulation envelope equals the modulus of this function,
> |cos(2π·t/6000)|, and that is a function with a period of 3000 seconds.
>
> You must be aware that when you look closely at the shape of this
> stimulus, you will find that near the halfway point of the 3000 seconds
> the signal amplitude falls sharply to zero, remains zero for just a
> split second, and then rises sharply again to higher values.
> However, when you calculate the sound energy stimulus connected with
> this sound pressure stimulus, you will find that the beat in this
> signal still has a period of 3000 seconds. And as time approaches the
> 1500-second halfway point of this period, the sound amplitude likewise
> declines to zero. But it does so in an entirely different way.
> First, the frequency is no longer 1000 Hz but an octave higher, namely
> 2000 Hz.
> And the shape of the beat envelope of that 2000 Hz stimulus in the
> vicinity of the halfway point is entirely smooth. The sudden transition
> from sharply descending to sharply rising at the 1500-second point has
> completely disappeared. Instead there is a gradual approach resulting
> in a smooth touch of the zero level, followed again by a gradual
> increase in amplitude.
>
> The two striking differences – 1000 Hz versus 2000 Hz, and a sharp
> approach to zero versus a smooth approach – must give unmistakable
> differences in hearing impression.
>
> And the results of my experiment confirm my hypotheses:
>
> I cut the 30-second interval around the halfway point out of the
> calculated soundtrack of the sound pressure stimulus. With sufficient
> amplification for my observations, I listened to it separately with my
> CI and with my Phonak hearing aid, and also with another amplifier
> connected to high-quality headphones, without my Phonak hearing aid.
>
> With my CI I heard, without any doubt, the sharp continuous decline to
> zero stimulus and, after a split second, the continuous increase. I
> could not observe a substantially long period of zero signal.
> With my other ear I heard, in both cases, during the 30-second period a
> smooth decline to zero that was reached approximately 7–8 seconds
> before the halfway moment. This zero signal ended approximately 7–8
> seconds after the halfway moment. So for a period of 14–16 seconds the
> signal remained zero, followed by a smooth increase.
> And the tone without any doubt had a doubled frequency – 2000 Hz
> instead of 1000 Hz.
>
> I have repeated these experiments with the common series of audiology
> test frequencies except the 125 Hz stimulus – so starting with 250 Hz
> and going up to 7000 Hz.
> At all frequencies I experienced the same results as for the 1000 Hz
> signal.
>
> My following experiment was modifying the 1000 Hz sound pressure
> stimulus into the sound energy stimulus, and then listening to this
> sound fragment with my CI.
> As I expected for this experiment, I experienced the same sound via my
> CI as I had heard from the sound pressure experiment with my Phonak
> hearing aid:
> a 2000 Hz signal and a smooth approach to a zero period of 16 seconds,
> followed by a smooth rise of the 2000 Hz signal.
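For readers who want to check the arithmetic behind this beat experiment,
here is a minimal numerical sketch. It is not part of Heerens's post; it
assumes his definition, given further down in this message, of the sound
energy stimulus as the sound pressure stimulus differentiated and then
squared. Under that assumption it reproduces the two claimed differences:
the pressure beat keeps its 1000 Hz carrier and flips sign abruptly at the
envelope zero, while the energy signal oscillates at 2000 Hz and touches
zero smoothly.

    import numpy as np

    fs = 8000.0                           # sample rate (Hz); anything well above 2 kHz works
    f1, f2 = 999.99983333, 1000.00016667  # the two tones from the post
    t = np.arange(0.0, 60.0, 1.0 / fs)    # a 60 s excerpt suffices to find the carriers

    # Sound pressure stimulus: sum of two equal-amplitude sines.  By the identity
    # sin(a) + sin(b) = 2 cos((a-b)/2) sin((a+b)/2), this is a 1000 Hz carrier
    # under the envelope |cos(2*pi*t/6000)|: period 3000 s, with a sharp sign
    # flip where the envelope passes through zero at t = 1500 s.
    p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # "Sound energy" stimulus, assuming differentiation followed by squaring.
    # Since sin^2(x) = (1 - cos 2x)/2, squaring shifts the carrier an octave up
    # to 2000 Hz and turns the envelope into cos^2(2*pi*t/6000): the same
    # 3000 s period, but now it touches zero smoothly (quadratically) instead
    # of flipping sign.
    e = np.gradient(p, 1.0 / fs) ** 2
    e -= e.mean()                         # remove the DC term so the peak is the carrier

    for name, sig, expected in (("pressure", p, 1000), ("energy", e, 2000)):
        spec = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
        print(f"{name}: dominant component near {freqs[spec.argmax()]:.0f} Hz "
              f"(expected {expected} Hz)")

Shifting the time axis toward t = 1500 s generates the 30-second excerpt
around the envelope zero that Heerens describes listening to.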
> After that, I concluded that pitch and missing-fundamental experiments
> will also give different results when a normally functioning basilar
> membrane is apparently stimulated with the sound energy stimulus, while
> in the CI processor it is not the sound energy stimulus that is
> generated and transferred to the brain but the sound pressure stimulus.
>
> So I composed two tone complexes, the first one consisting of the
> frequencies:
>
> 800 – 1000 – 1200 – 1400 – 1600 – 1800 – 2000 Hz,
>
> all sine functions,
> and the other one with the same frequencies but alternately sine and
> cosine functions.
> Both complexes have a 1/f amplitude-frequency relation, which results
> for the sound energy tone complex in equal energy contributions from
> all frequencies.
> From calculations and experimental results in earlier studies, and from
> the literature, I know that a normal-hearing person experiences a pitch
> of 200 Hz with the all-sine complex, while with the alternating
> sine – cosine – sine composition the listener hears a 400 Hz pitch.
>
> The complete calculation for the all-sine complex results in a series
> of missing lower harmonics starting with the fundamental of 200 Hz,
> followed by the harmonics 400 and 600 Hz, and then the harmonics
> 800 – 1000 – 1200 Hz.
> For the alternating sine – cosine composition, the calculation shows
> that the series starts with the missing lower harmonic 400 Hz, followed
> by the 800 Hz and 1200 Hz harmonics. All three odd harmonics – 200, 600
> and 1000 Hz – have disappeared from the sound energy frequency
> spectrum.
>
> The results of these two tone-complex experiments are even more
> remarkable.
>
> With my CI I experience no significant difference between the two sound
> fragments. I hear both sounds as higher tones with identical frequency
> and hardly any difference in intensity.
> With my Phonak hearing aid, or with the amplifier–headphone
> combination, I hear precisely the missing fundamental: a low 200 Hz
> tone combined with higher tone contributions for the all-sine complex,
> and a 400 Hz tone with a somewhat altered higher tone contribution –
> which I can characterize as a change in timbre.
>
> So now I can draw a number of conclusions from these results:
>
> When I follow the existing hearing hypotheses or theory, I am
> confronted with a serious anomaly:
>
> It is clear that the implantation of the CI has done nothing at all to
> my auditory brain functions.
> Yet under stimulation of my CI with the sound pressure signal, my
> auditory cortex, or other brain areas involved in sound perception,
> does not produce audible missing fundamentals out of the sound pressure
> signal.
>
> I can only draw the anomalous conclusion that before any signal is
> transferred to the brain, the missing-fundamental information must
> already be present in the stimulus.
> Hence it must be generated inside the cochlea, and not in the brain.
>
> But when I follow my hearing concept, in which the non-stationary
> Bernoulli effect transforms the incoming sound pressure stimulus into
> the sound energy stimulus in front of the basilar membrane, no anomaly
> exists.
>
> May I remark that the non-stationary Bernoulli effect is a physically
> correct solution of the Navier–Stokes equation for a non-viscous,
> alternating potential flow in an incompressible fluid? These flow
> conditions exist in the cochlear duct.
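The tone-complex claims above can be checked numerically in the same way.
The sketch below is again not from the post, and again assumes the
differentiate-then-square definition of the sound energy stimulus. It
builds both 1/f-weighted complexes and lists the strong difference tones
below 700 Hz in the resulting energy spectrum: for the all-sine complex
these fall at 200, 400 and 600 Hz, while in the alternating sine–cosine
complex the odd multiples of 200 Hz cancel and only 400 Hz survives,
consistent with the 200 Hz versus 400 Hz pitches Heerens reports.

    import numpy as np

    fs = 16000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)     # 2 s gives exact FFT bins for multiples of 200 Hz
    partials = np.arange(800, 2001, 200)   # 800 ... 2000 Hz, as in the post

    def complex_tone(alternate: bool) -> np.ndarray:
        """Sum of 1/f-weighted partials; if alternate, every other one is a cosine."""
        sig = np.zeros_like(t)
        for k, f in enumerate(partials):
            phase = np.cos if (alternate and k % 2 == 1) else np.sin
            sig += (1.0 / f) * phase(2 * np.pi * f * t)
        return sig

    for label, sig in (("all sine     ", complex_tone(False)),
                       ("sine / cosine", complex_tone(True))):
        # Differentiate, then square; 1/f weighting makes the differentiated
        # partials equal in amplitude, as the post notes.
        energy = np.gradient(sig, 1.0 / fs) ** 2
        energy -= energy.mean()            # discard the DC component
        spec = np.abs(np.fft.rfft(energy))
        fax = np.fft.rfftfreq(len(energy), 1.0 / fs)
        strong = fax[(spec > 0.05 * spec.max()) & (fax > 0) & (fax < 700)]
        print(label, "difference tones below 700 Hz:",
              np.round(strong).astype(int))

The cancellation itself is elementary trigonometry: squaring turns each
pair of partials into a component at their difference frequency, and in
the alternating complex the sine-times-cosine cross terms at 200 Hz and
600 Hz appear in equal numbers with opposite signs.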
> Tamás, regarding your remark:
> "…In fact, some patients with single-sided deafness and a CI in the
> deaf ear report a perceived latency between the normal-hearing and the
> CI side, which does not seem to be of a technical nature…"
>
> I can give you the following answer:
>
> In view of my experiments, from which it is clear to me that the CI
> program does not generate the signals that are correct for normal
> hearing, I also have strong doubts about your assumption that the
> latency you mentioned is not of a technical nature.
>
> Nathan, I agree with you regarding your remarks to Tamás:
>
> "…In terms of your specific question with unilateral loss and cochlear
> implants, I would be tempted to look at the engineering side of the
> device, or possibly the settings of the implant programming, but you
> mention you do not think the delay is a technical one…"
>
> As you can conclude with me from the results of the experiments I have
> described, it is not only a technical issue.
>
> It is really fundamental in origin. It is related to the fact that the
> scientific hearing community has long been fully convinced that the
> cochlea transfers the sound pressure stimulus to the brain, and that
> the brain applies nonlinear functions in its auditory perception
> process.
> From my experimental results, however, I conclude that the cochlea
> performs the major nonlinear processing step: it transforms the sound
> pressure stimulus, by two successive steps – differentiation followed
> by squaring – into the sound energy stimulus. And the latter stimulus
> is transferred to the brain in a frequency-selective way.
>
> In that case the at-best 30 dB dynamic range of the CI processor is
> also expanded to 60 dB by the squaring step (squaring the amplitude
> doubles the level in decibels, since 20·log10(a²) = 2 × 20·log10(a)),
> which brings the CI dynamic range into balance with the normal hearing
> apparatus.
>
> For a better perception of sound impressions via the CI, the
> programming of the CI processor must be changed. And that is a
> technical issue.
>
> Maybe my conclusion that my CI processor is not programmed for this
> transfer of fundamentals – especially missing fundamentals – can be of
> high value for Mandarin-speaking Chinese users of a CI. This tonal
> language, in which fundamentals play a crucial role, has until now been
> highly problematic for a good speech perception score. I know that
> algorithms have been developed, or are in development, for extracting
> the fundamentals together with CIS technology.
>
> [See for instance:
> N. Lan, K. B. Nie, S. K. Gao, and F. G. Zeng, "A Novel
> Speech-Processing Strategy Incorporating Tonal Information for Cochlear
> Implants," IEEE Transactions on Biomedical Engineering, Vol. 51, No. 5,
> May 2004.]
>
> Nathan, relating to your following remark:
> "…Another area to consider may be the idea of hemispheric connectivity.
> In your example of a unilateral loss with the CI in the deaf ear, it
> may be that the non-CI (and fully hearing ear) input is processed
> faster in the brain than the CI input is. This is an extension of the
> concept that auditory deprivation impacts plasticity…"
> What do you think about the suggestion that a perception latency can be
> observed on the CI-activated side, relative to the more or less
> normal-hearing other side, because the CI stimulus is fundamentally not
> correct, which means the brain needs more time to form a correct
> percept? This can be placed perfectly in the category of auditory
> deprivation with an impact on brain plasticity.
>
> I want to close my remarks with the following:
>
> Of course the scientific auditory community can state that my hearing
> experiences with my CI and Phonak hearing aid in tone experiments are a
> purely personal issue.
> Firstly, you can say that I have heard everything erroneously, that I
> have used the wrong arguments, and that my experiments do not meet the
> high international standards you always use.
> And secondly, you are right if you say my experiments are purely
> subjective in origin.
>
> My answer to the first comment is:
>
> I want to remind you of August Seebeck's question [dated 1844] in the
> dispute with Ohm and Helmholtz:
>
> "Wodurch kann über die Frage, was zu einem Tone gehöre, entschieden
> werden, als eben durch das Ohr?"
> (How else can the question of what belongs to a tone be decided but by
> the ear?)
>
> And to the second comment:
>
> Collect the data from such experiments and show me that I am wrong by
> doing the same experiments I have done. Do that with other subjects who
> are equipped with a hearing aid for moderate hearing loss and a CI in
> the deaf ear. If necessary and applicable, use up-to-date techniques
> such as auditory fMRI or high-resolution EEG methods to improve the
> level of objectivity.
>
> I have asked a few fellow CI users about their experiences with these
> phenomena. Their answers made me very confident.
>
>
> Willem Chr. Heerens


This message came from the mail archive
http://www.auditory.org/postings/2014/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University