
Re: [AUDITORY] AW: How is the signal of a cochlear implant? [Sound art Project in honor to my deaf sister]



Dear Hugo, Niki, and everyone,


I agree with Niki that music perception in CI users is a complex topic, and that listening to music through a vocoder might be misleading.

First, the vocoder should not be considered a tool to simulate the sound perceived through a CI, but rather a tool to predict speech scores. In other words, when a normal-hearing (NH) person listens to vocoded speech, we cannot assume that they will have the same percept as a CI user, only that they will understand about the same proportion of words. For a given situation, if an NH listener understands 100% of a sentence and a CI user 50%, the same NH listener will also understand about 50% of a vocoded version of that sentence.


It is very difficult for CI users to describe how they perceive sounds, as we lack the vocabulary. Just as for NH listeners, for CI users the sound of a bird sounds like "a bird singing."


The only way is to ask CI users who have enough residual hearing in one ear to compare the same sound presented to the two ears. You can find some studies that do just that:

Lazard et al., (2012), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0038687

Adel et al. (2019) https://www.frontiersin.org/articles/10.3389/fnins.2019.01119/full

Marozeau et al. (2020) https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0235504

Dorman et al. (2020) https://journals.sagepub.com/doi/full/10.1177/2331216520920079


Based on those studies' results, it seems that there are as many answers as there are CI users in the world. Some will claim that a sound through a CI is exactly as in the normal ear, some that it sounds like white noise, and some like an inharmonic sound.


Now, to answer your question: we do not know how CI users will perceive music, because they will all perceive it differently. However, we know that the sound processor does not send enough information to convey pitch cues properly (see here). Although there are some star performers (see Maarefvand, 2013), it is pretty safe to assume that most CI users will not perceive the melody. Nevertheless, as Niki mentioned, many CI users appreciate music and are engaged in musical activities. They can probably focus on different musical cues, such as rhythm and dynamics. Similarly, many NH people can appreciate music without a clear tonal structure and defined melodies.


To be provocative, I would propose that, just as the vocoder is a good model for speech understanding, some contemporary music (like Boulez) can be a good model of how CI users can experience (not perceive!) music. And as with the music composed by Boulez, some people love it, and many people hate it.

To support my point, we conducted a study in which we asked NH listeners (NHL) and CI listeners (CIL) to rate the musical tension of a piano piece by Mozart (Spangmose, 2019, www.frontiersin.org/articles/10.3389/fnins.2019.00987/full). Surprisingly, CIL and NHL rated the overall musical tension in a very similar way. Then we repeated the task with a modified version of the piece, in which all the notes were shuffled. Removing the melody had an important effect on the NHL's musical judgments, but none on the CIL's. Furthermore, the CIL reported appreciating the piece with the original notes and with the shuffled notes similarly.
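The shuffled-notes manipulation is easy to picture in code. Here is a minimal sketch of that kind of manipulation (my own illustration, not the stimulus code used in the study): keep every note's onset and duration, but randomly permute the pitches, which destroys the melody while preserving the rhythm.

```python
import random

def shuffle_pitches(notes, seed=0):
    """notes: list of (onset_s, duration_s, midi_pitch) tuples.
    Returns a copy with the pitches randomly permuted among the notes,
    so the rhythm is intact but the melody is destroyed."""
    rng = random.Random(seed)
    pitches = [pitch for _, _, pitch in notes]
    rng.shuffle(pitches)
    return [(onset, dur, p) for (onset, dur, _), p in zip(notes, pitches)]
```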


In summary, for your project, you should look into atonal or purely rhythmical music. Good luck,

Jeremy


Jeremy Marozeau
Associate Professor

DTU Health Tech
 
Technical University of Denmark
Department of Health Technology
Ørsteds Plads
Building 352, Room 124
2800 Kgs. Lyngby
Direct +45 45254790
jemaroz@xxxxxx



On Mar 20, 2021 5:15 AM, "Vavatzanidis, Niki" <Niki.Vavatzanidis@xxxxxxxxxxxxxxxxxxxxxx> wrote:

Dear Hugo and all,

 

As a cognitive neuroscientist, I'd like to add: don't forget the brain! Jan made some very valid points about how much we can infer from vocoders from a technical point of view. Complicating things further is the fact that no two brains will interpret the sound in the same fashion. How much hearing experience you had before you got a CI is crucial for what you can extract from the CI signal. A person whose brain experienced decades of hearing and only a relatively short period of deafness before getting the implant will extract much more from the CI signal than someone whose brain never learned to decode (audio) speech and gets an implant late in life, like your sister. Speech discrimination may come almost effortlessly for some in the first case, while it is out of reach for almost everybody in the latter case. The CI might still be useful because it informs you about environmental sounds (your child is crying in the next room, someone is addressing you from behind, etc.), but understanding speech without lip-reading is not an expectation one should place on cochlear implantation for someone who was born deaf and did not get the implant early in life.

With music, it will be similar in certain aspects. The interviews with CI users Kathy and Angela, made for the CI hackathon that Alan sent around (https://cihackathon.com/docs/CI_interviews), describe very nicely, I think, how they are able to fill in missing information for songs they know from before their hearing loss (and which they are able to enjoy), and how this does not work for new pieces of music for which they have no "pre-CI" memory. On the other hand, music enjoyment has so much to do with your own expectations that, in one of our studies, we found that those who never experienced music before they got the CI actually tend to enjoy it much more than those who can compare it to "how it used to sound" before their hearing loss and who are disappointed by how different the music sounds with the CI (Hahne et al., 2020, doi: 10.1055/s-0040-1711102).

 

This is just to give you an idea of how diverse the experience of one and the same CI output may be depending on your individual history and how it has shaped your brain. Of course, there are more factors that shape what you hear with the CI (many related to the individual brain, others linked to the technology itself), but one’s hearing history is a very fundamental one.

I for one would be very interested to hear your CI art project, maybe you could point me/us towards it when the time comes? That would be great!

 

All the best

Niki

 

 

***********************************

Dr. rer. nat. Niki K. Vavatzanidis (she/her)

Saxonian Cochlear Implant Center Dresden

University Hospital Dresden

Fetscherstr. 74

01307 Dresden

Germany

 

niki.vavatzanidis(at)ukdd.de

https://www.uniklinikum-dresden.de/scic/research

 

 

 

From: Jan Schnupp <jan.schnupp@xxxxxxxxxxxxxx>
Sent: Tuesday, 9 March 2021 11:22
Subject: Re: How is the signal of a cochlear implant? [Sound art Project in honor to my deaf sister]

 


Dear Hugo,

 

One thing you must appreciate is that, although there are a number of vocoders out there to simulate cochlear implants (the one Alan recommended is perfectly fine), it is fundamentally impossible to give a true, veridical impression of the sensation cochlear implants create by acoustically stimulating a normal cochlea. The main reason for this is that the mechanics of the cochlea link temporal stimulation patterns to places of stimulation, and CIs do nothing like that. Many established CI designs do not pay much attention to the precise temporal patterning of stimulus pulses, so CI users lose important cues for the pitch of complex sounds, for binaural scene analysis, and for spatial hearing. What exactly that means cannot be simulated with sound, although "vocoding techniques" give an impression. You may have seen the demo of a Beethoven sonata which I like to use: http://auditoryneuroscience.com/prosthetics/music  If you listen to the original, it is very clearly two instruments playing two distinct melodies. The vocoded version sounds much more like a single stream, and the melody is much harder to appreciate, but the rhythm is unimpaired.

I made that demo with a bit of simple Matlab code: a bank of bandpass filters followed by envelope extraction, and then I use each envelope to modulate narrow-band noise. Happy to share the code, but it is pretty trivial.
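In Python, the same pipeline might be sketched like this (a reconstruction of the idea described above, not the actual Matlab code; the channel count and band edges are arbitrary choices):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Crude n-channel noise vocoder: split x into log-spaced bands,
    extract each band's envelope, and use it to modulate band-limited
    noise; sum the channels and normalise."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x), dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band))          # envelope via analytic signal
        carrier = sosfilt(sos, rng.standard_normal(len(x)))  # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)  # avoid clipping
```

Running a melody through this keeps the slow amplitude envelope (hence the rhythm) in each band but replaces the fine structure with noise, which is exactly why the vocoded Beethoven loses its melodic clarity.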

 

Good luck with your public engagement  artwork, and all the best to your sister.

 

Jan

---------------------------------------

Prof Jan Schnupp
City University of Hong Kong
Dept. of Neuroscience

31 To Yuen Street, 

Kowloon Tong

Hong Kong

 

https://auditoryneuroscience.com

 

 

On Tue, 9 Mar 2021 at 13:15, Alan Kan <alan.kan@xxxxxxxxx> wrote:

Hi Hugo,

 

Check out https://cihackathon.com/docs/presentations. It's a hackathon that just finished, but they provide Python code for a vocoder that follows the Advanced Bionics cochlear implant signal processing. All you need to do is run your sound files through it.

 

Cheers

 

Alan

 

--- 
Alan Kan, PhD
Research Fellow 

School of Engineering | Macquarie University
Level 1, 50 Waterloo Road, Macquarie Park, NSW 2113, Australia

Australian Hearing Hub

Level 1, 16 University Avenue, Macquarie University, NSW 2109, Australia

 

T: +61 (2) 9850 2247 
E:  alan.kan@xxxxxxxxx 
W: https://researchers.mq.edu.au/en/persons/alan-kan |  mq.edu.au

L: www.linkedin.com/in/alan-kan

 

Macquarie University

CRICOS Provider 00002J. ABN: 90 952 801 237.

This message is intended for the addressee named and may
contain confidential information. If you are not the intended
recipient, please delete the message and notify the sender.
Views expressed in this message are those of the individual
sender and are not necessarily the views of Macquarie
University and its controlled entities.

 

 

 

 

From: AUDITORY - Research in Auditory Perception <AUDITORY@xxxxxxxxxxxxxxx> On Behalf Of Hugo Solís
Sent: Sunday, 7 March 2021 11:55 PM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: [AUDITORY] How is the signal of a cochlear implant? [Sound art Project in honor to my deaf sister]

 

Hi everybody,

My name is Hugo and I am a sound artist with some background in
computer science. I have a sister who was born fully deaf and got a
cochlear implant when she was 40 years old. She is now 48. The cochlear
implant changed very little in my sister's life, and she does not
describe music as a pleasant experience.

I want to create a piece of art where hearing people could hear the real
signal that the cochlear implant sends to the brain. I know that the
signal is processed and that pulses are generated on each of the
electrodes. However, I know nothing about the details of the
transformation.

I am capable of writing code in Python (using the Essentia library,
https://essentia.upf.edu/) to emulate the transformation of a signal,
but I don't know what the typical process is. I could also write the
code in SuperCollider (https://supercollider.github.io/), but although
it has tons of unit generators, it does not have as many audio
descriptor extractors and common psychoacoustic processes as Python.

I am not an audiologist, and I lack knowledge of the signal processing
transformation that happens in a cochlear implant. I do know a lot about
digital signal processing, though.

So I need some basics:

1. Code or libraries in any programming language, but ideally in Python,
that do the emulation. I could write the process myself, but I imagine
that many people have already done this and that there is open-source
code already written.

2. Basic references about the process that happens in the cochlear device,
which could help me either write the code or tune the open-source code
in order to make my piece. The work will be shown in an exhibition and I
am running out of time, so any help would be more than appreciated.

I will be forever thankful for your support.

Warm regards

Hugo Solís