This talk discusses the role of musical acoustics in teaching engineering courses offered at the Stanford Center for Computer Research in Music and Acoustics, and in the Princeton University Department of Computer Science.

For digital signal processing, musical acoustics is used to motivate and demonstrate mathematical concepts such as zeros (multi-path cancellations in rooms), poles (resonant modes in a guitar), nonlinearity, and instability (what happens when one turns up the guitar amp). Simultaneous real-time visual and auditory displays build intuition and interest.

For scientific computing, the pendulum provides a simple but meaningful first experience in numerical methods. By setting up differential equations from a physical description, noting the possibility of linearizing assumptions, and solving the system discretely on a computer, students discover how parameter values affect accuracy and stability. By solving both with and without the linearizing assumption, then placing a soundfile header on the output data, differences in frequency and waveform can be easily detected. These techniques are extended in plucked-string and vibrating-membrane simulations.

For human-computer interface technology, musical instruments are perfect examples of effective human-artifact interfaces. Labs using computer sound synthesis clarify basic interaction concepts such as latency and cross-modal perception.
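The "multi-path cancellation" demonstration of zeros can be sketched in a few lines. This is a minimal illustration, not material from the talk: a direct path plus one delayed reflection forms a feedforward comb filter whose transfer function 1 + z^(-M) has zeros at odd multiples of fs/(2M). The sample rate and delay below are illustrative values.

```python
import math

fs = 8000   # sample rate in Hz (illustrative)
M = 8       # reflection delay in samples (illustrative)

def comb_gain(f):
    # Magnitude response |1 + e^{-j 2 pi f M / fs}| of the
    # feedforward comb filter y[n] = x[n] + x[n - M]
    w = 2 * math.pi * f * M / fs
    return abs(complex(1 + math.cos(w), -math.sin(w)))

f_null = fs / (2 * M)         # first cancellation frequency: 500 Hz here
print(comb_gain(f_null))      # ~0: the delayed path cancels the direct path
print(comb_gain(fs / M))      # ~2: at 1000 Hz the two paths reinforce
```

Sweeping a sine through such a filter while plotting its magnitude response gives exactly the kind of simultaneous auditory and visual display mentioned above.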
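The pendulum exercise described above can be sketched as follows. This is one possible realization, not the course's actual lab code: it uses a semi-implicit Euler integrator (an assumption; any discrete scheme would do), picks g/L so that small oscillations land at an audible 220 Hz, solves the system with and without the linearizing assumption sin(theta) ~ theta, and writes the raw output with a soundfile header via Python's standard `wave` module so the pitch difference can be heard directly.

```python
import math
import struct
import wave

def simulate(theta0, dt, n, linear):
    # Pendulum: theta'' = -(g/L) * sin(theta); the linearized version
    # replaces sin(theta) with theta, valid only for small angles.
    g_over_L = (2 * math.pi * 220.0) ** 2  # small-angle frequency = 220 Hz
    theta, omega = theta0, 0.0
    out = []
    for _ in range(n):
        accel = -g_over_L * (theta if linear else math.sin(theta))
        omega += accel * dt  # semi-implicit Euler: update velocity first,
        theta += omega * dt  # then position; stable for oscillatory systems
        out.append(theta)
    return out

sr = 44100                                   # audio sample rate
lin = simulate(2.0, 1.0 / sr, sr, linear=True)    # large 2 rad swing
non = simulate(2.0, 1.0 / sr, sr, linear=False)   # same swing, full sin(theta)

# Place a soundfile header on the output data: one second of each solution,
# back to back, normalized to 16-bit mono PCM.
peak = max(abs(x) for x in lin + non)
with wave.open("pendulum.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(sr)
    for x in lin + non:
        w.writeframes(struct.pack("<h", int(32767 * x / peak)))
```

Listening to the result makes the point immediately: the linearized solution holds its 220 Hz pitch, while the true large-amplitude pendulum swings noticeably slower and flatter, and shrinking the amplitude makes the two converge.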