Edward T. Auer, Jr.
Lynne E. Bernstein
Ctr. for Auditory and Speech Sci., Gallaudet Univ., 800 Florida Ave. N.E., Washington, DC 20002
Speech perceived on the basis of viewing a talker's face affords less phonetic distinctiveness than acoustic speech. Effects of this reduced distinctiveness can be estimated in relation to the structure of the mental lexicon. Based on empirical measures of phonetic confusability, recoding rules can be defined for mapping fully specified phonological forms into lexical equivalence classes. For example, under the recoding rule that /b/ and /p/ are in the same phonemic equivalence class, the words ``bat'' and ``pat'' map into the same lexical equivalence class. After applying a set of recoding rules to a large online lexical database, the resulting structure of the lexicon can then be studied quantitatively. One such measure of the recoding effects on the lexicon is percent information extracted (PIE) [D. M. Carter, Comput. Speech Lang. 2, 1--11 (1987)]. Lexical statistics describing the results of applying sets of recoding rules derived from analyses of visual-phonetic confusability to a 30 000-entry lexicon will be presented. Implications for the use of top-down lexical constraints in resolving bottom-up visual-phonetic ambiguity during lipreading will be discussed. [Work supported by NIH.]
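The recoding-and-measurement procedure described above can be sketched in a few lines. The phoneme-to-class table below is purely illustrative (not the empirically derived confusability classes of the study), and the PIE computation follows the entropy-based form of Carter's measure under the simplifying assumption of uniform word frequencies; both the class table and the toy lexicon are hypothetical.

```python
from collections import defaultdict
from math import log2

# Hypothetical visual-phonetic equivalence classes: visually confusable
# phonemes share one class label. Illustrative only, not the study's rules.
PHONEME_CLASS = {"b": "B", "p": "B", "m": "B",
                 "f": "F", "v": "F",
                 "t": "T", "d": "T", "n": "T"}

def recode(word_phonemes):
    """Map a fully specified phonological form into its lexical
    equivalence class by replacing each phoneme with its class label."""
    return tuple(PHONEME_CLASS.get(p, p) for p in word_phonemes)

def pie(lexicon):
    """Percent information extracted (PIE), assuming uniform word
    frequencies: the share of lexical entropy resolved by the recoding."""
    n = len(lexicon)
    classes = defaultdict(int)
    for word in lexicon:
        classes[recode(word)] += 1
    h_lexicon = log2(n)                     # entropy of the full lexicon
    h_residual = sum((k / n) * log2(k)      # expected within-class entropy
                     for k in classes.values())
    return 100.0 * (1.0 - h_residual / h_lexicon)

# Under /b/ = /p/, ``bat'' and ``pat'' fall into one equivalence class.
lexicon = [("b", "ae", "t"), ("p", "ae", "t"), ("m", "ae", "t"),
           ("f", "ae", "t"), ("v", "ae", "t"), ("n", "ae", "d")]
assert recode(lexicon[0]) == recode(lexicon[1])
print(round(pie(lexicon), 1))
```

A fully distinctive recoding (every word in its own class) would yield PIE = 100, while collapsing the whole lexicon into one class yields PIE = 0; the study applies this statistic to a 30 000-entry lexicon rather than a toy list.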