Re: Intermediate representation for music analysis (Rachel van Besouw)


Subject: Re: Intermediate representation for music analysis
From:    Rachel van Besouw  <rmvb101@xxxxxxxx>
Date:    Mon, 17 Jul 2006 10:18:20 +0100

Hi Ilya,

> The human ear can distinguish pitch differences at least twice as fine as
> a semitone, which implies that for the analysis of a musical piece spanning
> 4 octaves the number of filters should be of the order of a couple of
> hundred. Nevertheless, front-ends commonly used for music analysis usually
> use no more than a couple of dozen filters (Fourier bins), sometimes even
> non-logarithmically spaced.

I've previously made the mistake of assuming that the Difference Limen for
Frequency and the Equivalent Rectangular Bandwidths (or frequency
selectivity) of the auditory filters covary, but they do not - if they did,
we could assume a place model of pitch perception. Below 5 kHz it is
believed that our ability to discriminate very fine differences in frequency
can be attributed to the temporal patterns of nerve firings. I recommend
Chapter 6: Pitch Perception of 'An Introduction to the Psychology of
Hearing' by Brian Moore.

Rachel :-)

_______________________________________________________________________
Rachel van Besouw | PhD Researcher | Audio Lab, Intelligent Systems Group
Department of Electronics | University of York
Heslington | York | UK | YO10 5DD
rmvb101@xxxxxxxx | +44 (0) 1904 432407
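
[Editor's note: the following Python sketch is not part of the original
exchange. It only illustrates the two quantitative points above: a
quarter-tone (half-semitone) grid over 4 octaves gives roughly a hundred
bands, and the frequency Difference Limen is far narrower than the
auditory-filter ERB at the same frequency, so the two clearly do not covary.
It assumes the Glasberg & Moore (1990) ERB formula and a nominal DLF of
about 0.2% of frequency below 5 kHz; both are approximations.]

    def erb_hz(f_hz: float) -> float:
        """Equivalent Rectangular Bandwidth (Glasberg & Moore, 1990), in Hz."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    def dlf_hz(f_hz: float, weber_fraction: float = 0.002) -> float:
        """Rough frequency difference limen, assuming a ~0.2% Weber fraction."""
        return weber_fraction * f_hz

    # Quarter-tone resolution over 4 octaves: 4 octaves x 12 semitones x 2.
    quarter_tone_bands = 4 * 12 * 2
    print(f"Quarter-tone bands over 4 octaves: {quarter_tone_bands}")

    # Compare ERB and approximate DLF at a few representative frequencies.
    for f in (250.0, 1000.0, 4000.0):
        print(f"{f:6.0f} Hz: ERB ~ {erb_hz(f):6.1f} Hz, "
              f"DLF ~ {dlf_hz(f):5.1f} Hz, "
              f"ratio ~ {erb_hz(f) / dlf_hz(f):4.0f}x")

At 1 kHz, for example, the ERB comes out near 130 Hz while the assumed DLF
is about 2 Hz, which is the order-of-magnitude gap that argues against a
pure place model of fine pitch discrimination.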

