I don't know of literature off the top of my head, but I expect it is not an uncommon finding, at least for unshaped speech. However, I also wouldn't be surprised if the results varied across studies depending on the importance function and AI-to-intelligibility transfer function of the speech materials used, whether a background noise was present and its spectral shape, and the degree and configuration of hearing loss of the participants.
I played around with this using an AI (Articulation Index) program. For thresholds that were normal in the low frequencies and sloped to about 55 dB HL in the high frequencies (a mild sloping-to-severe loss), the prediction for a nonsense-syllable test was a steeper PI function for the HI listeners over the ~25-75% performance range (~3.5%/dB vs ~5%/dB for the NH and HI listeners, respectively). The shape of the speech spectrum, and its importance function, interact with the threshold configuration of your subjects to determine your PI function, at least based solely on audibility. Essentially, once the speech level is high enough to enable some understanding, HI listeners with a sloping high-frequency hearing loss have a larger change in AI with changes in level than listeners with NH. Obviously, other factors such as suprathreshold processing abilities also play a role, particularly with more severe losses.
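For concreteness, here is a rough sketch of the kind of audibility-only calculation I mean. This is a simplified SII-style computation, not the program I actually used; the band levels, importance weights, and audiograms below are made up for illustration (a real calculation would use the ANSI S3.5 band data and a material-appropriate transfer function):

```python
# Simplified, SII-style audibility calculation (illustrative only).
# Band levels, importance weights, and audiograms are hypothetical.

def band_audibility(speech_band_level, threshold,
                    peaks_above_rms=15.0, dynamic_range=30.0):
    """Fraction of the 30-dB speech dynamic range above threshold (0..1)."""
    sensation = speech_band_level + peaks_above_rms - threshold
    return min(max(sensation / dynamic_range, 0.0), 1.0)

def articulation_index(overall_level, band_levels_at_60, thresholds, weights):
    """Importance-weighted audibility; band levels scaled from a 60-dB reference."""
    gain = overall_level - 60.0
    return sum(w * band_audibility(s + gain, t)
               for s, t, w in zip(band_levels_at_60, thresholds, weights))

# Hypothetical octave-band speech levels (dB SPL at an overall level of 60 dB)
bands_hz   = [250, 500, 1000, 2000, 4000]
speech_60  = [54, 54, 49, 43, 38]
importance = [0.15, 0.25, 0.25, 0.20, 0.15]   # sums to 1.0

nh_thresholds = [15, 10, 10, 12, 15]          # near-normal audiogram
hi_thresholds = [15, 10, 30, 50, 65]          # sloping high-frequency loss

for label, thr in [("NH", nh_thresholds), ("HI", hi_thresholds)]:
    ai = [articulation_index(level, speech_60, thr, importance)
          for level in range(0, 90, 5)]
    print(label, ["%.2f" % a for a in ai])
```

Sweeping the level this way, and pushing the AI through a transfer function appropriate to the materials, is what produces the predicted PI slopes. The outcome depends heavily on the importance function, the speech spectrum, and the audiogram, which is why I would expect results to vary across studies.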
Hi, I have some data from normal-hearing and hearing-impaired listeners on a 90-AFC identification task, for which the psychometric functions of the HI listeners have significantly steeper slopes than those of the normal-hearing listeners, even though their thresholds are higher. Is this a common finding in the psychophysical literature, or is it related to the nature of the hearing impairment, namely that once they can hear the stimulus, they rapidly achieve perfect performance (somewhat similar to the growth of loudness functions)?