Re: Intermediate representation for music analysis



Hi,

Is it really a requirement to model human perception in the context of
music analysis? Naturally, music is made for humans, but does that mean
we need physiological, perceptual, or cognitive models to analyze the
content of music? Personally, I don't think so for (more or less)
objective features like tempo and musical pitch, while for other
features such models might well be useful.

I have tried several times to improve music analysis systems with
(simple) physiological models, in most cases without success. That does
not mean I think such models are pointless; I just want to argue that
they do not automatically make sense merely because we are talking
about musical content.

BTW, the number of bins/filters a system needs strongly depends on its
goal. Naturally, you need thousands of bins for FFT-based frequency
tracking, not dozens. In other cases, where exact frequency detection
is not (as) important, a couple of filters, or none at all, may be the
better choice.
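To put some numbers behind the FFT point: here is a minimal sketch
(plain Python; the sample rate and lowest pitch are illustrative
assumptions, not taken from any particular system) of why linearly
spaced FFT bins run into the thousands once neighbouring semitones must
be separated at the low end of the range:

import math

fs = 44100.0     # sample rate in Hz (assumed)
f_low = 82.41    # lowest pitch of interest, E2 (assumed)

# Gap between f_low and the semitone above it: f * (2**(1/12) - 1).
semitone_gap = f_low * (2.0 ** (1.0 / 12.0) - 1.0)  # ~4.9 Hz at E2

# FFT bin spacing is fs / N, so separating the two semitones requires
# fs / N <= semitone_gap, i.e. N >= fs / semitone_gap.
n_fft = math.ceil(fs / semitone_gap)

print("semitone gap at %.2f Hz: %.2f Hz" % (f_low, semitone_gap))
print("required FFT size: >= %d samples (about %d usable bins)"
      % (n_fft, n_fft // 2 + 1))

With these (assumed) numbers the required FFT size comes out around
9000 samples, i.e. thousands of bins rather than dozens.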

Kind regards,
Alexander

Ilya Sedelnikov wrote:
> Dear list,
> 
> Is anyone aware of works that use filterbanks with more than a couple
> of dozen filters as a front-end for music analysis?
> 
> The human ear can distinguish pitch differences at least as small as
> half a semitone, which implies that for the analysis of a musical
> piece spanning 4 octaves the number of filters should be on the order
> of a couple of hundred. Nevertheless, front-ends commonly used for
> music analysis rarely use more than a couple of dozen filters (Fourier
> bins), sometimes not even logarithmically spaced.
> 
> I would be glad to hear any opinions on the subject.
> Ilya
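
P.S. The filter-count estimate in the quoted message can be checked
directly. A quick sketch (plain Python; the lower edge of the range is
an assumption for illustration, and the log-spaced centre frequencies
correspond to a constant-Q-style filterbank):

f_min = 65.41              # lower edge of the 4-octave range, C2 (assumed)
octaves = 4                # span quoted in the message above
filters_per_octave = 24    # 12 semitones * 2 for half-semitone resolution

n_filters = octaves * filters_per_octave    # 96 filters over 4 octaves
centres = [f_min * 2.0 ** (k / filters_per_octave)
           for k in range(n_filters + 1)]

print("%d log-spaced centre frequencies from %.1f Hz to %.1f Hz"
      % (len(centres), centres[0], centres[-1]))

This comes out at roughly a hundred filters, i.e. the order of
magnitude suggested above, and far more than the couple of dozen bands
of typical front-ends.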


-- 
dipl. ing.
alexander lerch

zplane.development
:www.zplane.de
katzbachstr.21
d-10965 berlin

fon: +49.30.854 09 15.0
fax: +49.30.854 09 15.5