MIT Media Lab., 20 Ames St., Rm. E15-490, Cambridge, MA 02139-4307
A common fallacy in interactive music systems is that a computer can make musically intelligent use of data from classical DSP-based acoustic analysis. In fact, human auditory sensors and processors are so different from standard silicon ones that machines are unlikely to ``hear'' music anything like the way humans perceive it. However, a computer model of human auditory sensing and preattentive processing can provide a good approximation of the conditioned stimuli that the musical brain actually works with. A real-time model is described in which the first derivative of a constant-Q filter bank is positively rectified, convolved with an energy integration function, then summed and fed into a temporal pattern detector (a pulse and tempo sensor, or foot-tapper) which provides a robust control signal for synthetic instruments that need to play ``in sync'' with the acoustic source. The computational stages are based on solid evidence from psychoacoustic research, and appear robust in practice. Videotaped and live demonstrations will be presented.
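The processing chain described above (filter-bank derivative, positive rectification, energy integration, cross-band summation, then periodicity detection) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the filter-bank envelopes are assumed given as input, the energy-integration function is approximated here by a decaying exponential, and the ``foot-tapper'' is stood in for by a simple autocorrelation peak-picker.

```python
import numpy as np

def onset_strength(envelopes, window_len=10):
    """Sum of rectified, integrated band derivatives.

    envelopes: array of shape (bands, frames), magnitude envelopes
    from a constant-Q filter bank (assumed precomputed).
    """
    # first derivative of each band envelope over time
    diff = np.diff(envelopes, axis=1)
    # positive (half-wave) rectification: keep energy increases only
    rectified = np.maximum(diff, 0.0)
    # convolve each band with an energy-integration window; a decaying
    # exponential is an assumed stand-in for the paper's function
    kernel = np.exp(-np.arange(window_len) / 3.0)
    integrated = np.array(
        [np.convolve(band, kernel, mode="same") for band in rectified]
    )
    # sum across bands into a single pulse signal for tempo tracking
    return integrated.sum(axis=0)

def estimate_period(pulse, min_lag=2):
    """Crude temporal pattern detector: autocorrelation peak lag."""
    ac = np.correlate(pulse, pulse, mode="full")[len(pulse) - 1:]
    return min_lag + int(np.argmax(ac[min_lag:]))

# toy example: two bands carrying an attack every 8 frames
frames = 64
env = np.zeros((2, frames))
env[:, ::8] = 1.0
pulse = onset_strength(env)
print(estimate_period(pulse))  # → 8, the pulse spacing in frames
```

In a real-time setting the same stages would run incrementally on streaming filter-bank output, and the detected period would drive the synthetic instruments' scheduling clock.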