Elizabeth M. Wenzel
NASA Ames Research Center, MS 262-2, Moffett Field, CA 94035
This paper will review the nature of virtual acoustic displays, which attempt to simulate a spatial acoustic environment in real time (usually over headphones) so that a listener can dynamically interact with virtual sound sources that have apparently fixed locations in space and perceptual qualities approximating those of real sound sources. Such displays represent a new, multidisciplinary area in the study of audition, encompassing both basic research in human perception and an applied, engineering-oriented approach to system implementation. The common goal is to develop technology that is driven by human needs and requirements while yielding useful, working systems for a variety of application areas, including architectural acoustics, advanced human-computer interfaces, navigation aids for the visually impaired, telepresence and virtual reality, and test beds for psychoacoustical investigations of complex signals. Achieving this goal requires an understanding of the nature of an acoustic object, including both its spatial characteristics and nonspatial cues, such as the spectral and temporal properties that determine perceptual streaming. It also requires practical consideration of the means needed to generate such environmental sounds realistically in a real-time display system. Research issues in these areas will be outlined, the nature of current systems will be reviewed, and some directions for future research and development will be discussed.
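The headphone simulation described above is typically realized by filtering a sound with a listener's head-related impulse responses (HRIRs). The following is a minimal, static sketch of that idea, assuming an HRIR pair for the desired source direction is available; the toy "HRIRs" here are simply a unit impulse and a delayed, attenuated impulse standing in for the interaural time and level differences that measured HRIRs would contain. Real-time displays additionally interpolate between measured HRIRs and update the filters as the listener's head moves.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs so it is heard at the
    direction for which the HRIR pair was measured; returns a stereo array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy example: a 1 s, 440 Hz tone placed off to the listener's left by
# giving the right ear a 3-sample delay and some attenuation.
fs = 8000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)
hrir_l = np.zeros(8); hrir_l[0] = 1.0   # left ear: signal arrives first
hrir_r = np.zeros(8); hrir_r[3] = 0.8   # right ear: delayed and attenuated
stereo = spatialize(mono, hrir_l, hrir_r)
print(stereo.shape)  # (8007, 2): signal length plus filter tail, two channels
```

In a working display, the per-sample convolution would run block-wise at audio rate, with the HRIR pair reselected from a measured set on every head-tracker update.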