Re: Intermediate representation for music analysis ("Richard F. Lyon" )


Subject: Re: Intermediate representation for music analysis
From:    "Richard F. Lyon"  <DickLyon@xxxxxxxx>
Date:    Mon, 17 Jul 2006 09:14:19 -0700

At 12:23 PM +0200 7/17/06, alexander lerch wrote:
>Naturally, music is made for humans, but does this mean that we need
>physiological/perceptual/cognitive models to analyze the content of
>music?

An excellent question. In my opinion, yes. Since the phenomena of
greatest interest are inherently perceptual, and since human hearing
usually works much better than any set of algorithms we've been able to
come up with so far, it seems likely to pay off to continue developing
approaches that attempt to mimic human perception, partly by modeling
human physiology and cognition.

If you have a task that is not primarily perceptual, say recovering the
notes played on a set of instruments, then it might work better to use
physical models of the instruments and mathematical optimization
techniques. It would work even better to "instrument" the instruments
with sensors that would give you more information than you could get
from a sound waveform alone. But if the task is to follow and understand
a melody, a rhythm, or other aspects of musical/perceptual language,
then relying too heavily on "physical" notions of pitch and tempo can do
more harm than good.

In summary, I think that "hearing" problems should be addressed with
auditory techniques, and other problems in sound and music should be
addressed with whatever techniques suit them best.

Dick


This message came from the mail archive
http://www.auditory.org/postings/2006/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University