Scott H. Foster
Crystal River Eng., 12350 Wards Ferry Rd., Groveland, CA 95321
Elizabeth M. Wenzel
NASA--Ames Res. Ctr., MS 262-2, Moffett Field, CA 94035
This demonstration illustrates some recent efforts to ``render'' in real time the complex acoustic field experienced by a listener within an environment, using a very high-speed signal processor, the Convolvotron, and headphone presentation. The current implementation follows conceptually from the image model. The filtering effects of multiple reflecting surfaces are modeled by a finite impulse response filter that can be changed in real time and is based on the superposition of the direct path from the source with the symmetrically located image sources contributed by all the reflectors in the environment. Directional characteristics of the reflections are determined by filters based on head-related transfer functions. The demonstration scenario allows the listener to experience how sound quality is affected by manipulating various environmental characteristics. For example, while listening over headphones and ``flying'' through a three-dimensional visual scene, one can hear how the sound quality of four simultaneous sources changes as virtual walls are expanded and contracted in different room configurations. Other environmental parameters that can be changed include ceiling height; absorption characteristics of the wall, ceiling, and floor surfaces (e.g., wood versus glass versus drapery); comparison of anechoic versus reflective environments; and location of the sound sources. Doppler effects and directional radiation patterns for the virtual sources are also implemented.
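The image-model superposition described above can be sketched in a few lines. This is a minimal illustration under assumed simplifications (a shoebox room, first-order reflections only, a frequency-independent absorption coefficient, and 1/r spreading loss); it is not the Convolvotron's actual implementation, which also applies head-related transfer functions to each path.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed
SAMPLE_RATE = 44100     # Hz, assumed

def first_order_images(src, room):
    """Mirror the source across each of the six walls of a shoebox room.

    src: (x, y, z) source position; room: (Lx, Ly, Lz) dimensions,
    with walls at coordinate 0 and at the room dimension on each axis.
    """
    images = []
    for axis, size in enumerate(room):
        for wall in (0.0, size):
            img = list(src)
            img[axis] = 2.0 * wall - img[axis]  # reflect across the wall plane
            images.append(tuple(img))
    return images

def impulse_response(src, listener, room, absorption=0.3, length=4096):
    """Superpose the direct path and first-order image sources into an FIR filter."""
    h = [0.0] * length
    paths = [(src, 1.0)]  # direct path at full gain
    paths += [(img, 1.0 - absorption) for img in first_order_images(src, room)]
    for pos, gain in paths:
        dist = math.dist(pos, listener)
        delay = int(round(dist / SPEED_OF_SOUND * SAMPLE_RATE))
        if delay < length:
            h[delay] += gain / max(dist, 1.0)  # 1/r spreading loss
    return h

# Direct path plus six first-order reflections in a 6 m x 5 m x 3 m room.
h = impulse_response(src=(1.0, 2.0, 1.2), listener=(4.0, 3.5, 1.7),
                     room=(6.0, 5.0, 3.0))
taps = [i for i, v in enumerate(h) if v != 0.0]
print(len(taps))
```

Changing a wall position or absorption coefficient simply rebuilds the filter taps, which is the operation the demonstration performs in real time as the virtual walls are expanded and contracted.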