I hope you can help me with the following:
I'm looking for a user-requirements study that deals with metadata for people with hearing impairments, or a study that ranks metadata for hearing-impaired persons according to its contextual accessibility.
In particular, I would like to answer the question of what kinds of metadata, semantic information, and additional contextual information are desired by people with hearing disorders (besides more or less established techniques like subtitles, signer-in-screen, and text-to-speech-to-sign-language).
Is any of you aware of such a study? It seems that I'm lacking the right vocabulary for a successful literature search.
Thank you very much in advance,