4aSC10. Hearing-impaired perceivers' encoding and retrieval speeds for auditory, visual, and audiovisual spoken words.

Session: Thursday Morning, June 19

Author: Philip F. Seitz
Location: Army Audiol. & Speech Ctr., Walter Reed Army Med. Ctr., Washington, DC 20307-5001, seitz@wrair-emhl.army.mil


Perceptual encoding and memory retrieval processing speeds were assessed for spoken words in 26 subjects, mean age 66, with mild to moderate acquired sensorineural hearing loss. Subjects were trained to achieve error-free recognition of a set of ten spoken words in auditory, visual (speechreading), and audiovisual conditions. They then performed the Sternberg item recognition task in each of the modality conditions using the same set of ten words. The task involved presenting memory sets of one to four words, followed by a probe word to which subjects made a speeded "YES" or "NO" button response to indicate whether the probe matched any of the memory set items. Least-squares linear models provided good fits to subjects' memory-set size by reaction time functions (mean r^2 > 0.90 for all three conditions). Using the models' intercepts and slopes to represent encoding and retrieval times, respectively, Wilcoxon tests showed significant differences among the conditions with respect to both encoding and retrieval speed, with audiovisual fastest and visual slowest. These results are interpreted as evidence for: (1) audiovisual "benefit" to processing speed in hearing-impaired speech perception; (2) relative inefficiency of encoding visual speech; and (3) representation differences associated with the modalities. [Work supported by NIH-NIDCD.]
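The Sternberg-style analysis described above can be sketched as follows: a least-squares line is fit to mean reaction time as a function of memory-set size, with the intercept read as encoding (plus response) time and the slope as per-item retrieval time. The reaction-time values below are purely illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical mean reaction times (ms) for memory-set sizes 1-4;
# illustrative values only, not the study's data.
set_sizes = np.array([1, 2, 3, 4])
mean_rts = np.array([520.0, 560.0, 605.0, 645.0])

# Least-squares line: RT = intercept + slope * set_size.
# Intercept ~ encoding/response time; slope ~ retrieval time per item.
slope, intercept = np.polyfit(set_sizes, mean_rts, 1)

# Goodness of fit (r^2), reported per condition in the abstract.
predicted = intercept + slope * set_sizes
ss_res = np.sum((mean_rts - predicted) ** 2)
ss_tot = np.sum((mean_rts - np.mean(mean_rts)) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"intercept (encoding) = {intercept:.1f} ms")
print(f"slope (retrieval/item) = {slope:.1f} ms")
print(f"r^2 = {r_squared:.3f}")
```

Comparing these intercepts and slopes across the auditory, visual, and audiovisual conditions (e.g., with Wilcoxon tests, as in the study) is what supports the encoding- and retrieval-speed contrasts reported.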

ASA 133rd meeting - Penn State, June 1997