Traditionally, the recovery of the linguistic message and of speaker identity is thought to involve distinct operations and information. However, recent observations with auditory speech show a contingency of speech perception on speaker identification/familiarity [e.g., Nygaard et al., Psychol. Sci. 5, 42--46 (1994)]. Remez and his colleagues [Remez et al., J. Exp. Psychol. (in press)] have provided evidence that these contingencies could be based on the use of common phonetic information for both operations. To examine whether common information might also be useful for face and visual speech recovery, point-light visual speech stimuli were implemented that provide phonetic information without containing facial features [L. D. Rosenblum and H. M. Saldana, J. Exp. Psychol.: Human Percept. Perform. 22, 318--331 (1996)]. A 2AFC procedure was used to determine whether observers could match speaking point-light faces to the same fully illuminated speaking face. Results revealed that dynamic point-light displays afforded high face-matching accuracy, significantly greater than accuracy with frozen point-light displays. These results suggest that dynamic speech information can be used for both visual speech and face recognition.