Re: Computational ASA
I'm not exactly an expert on CASA, but I'll try.
You might be interested in checking out an essay on my Web site that
describes a different approach to auditory modeling. It might be useful to
you either for its ideas or simply for entertainment, i.e., as science or
science fiction. In any case it should reveal why CASA is a tough problem
but not insoluble, once the real problems are specified.
I have just posted on the site a short commentary on the current situation in
auditory modeling. It relates to a paper by C. L. Nehaniv et al., "Meaningful
information, sensor evolution and the temporal horizon of embodied organisms,"
in the proceedings of Artificial Life VIII, Standish, Abbass, and Bedau
(eds.), MIT Press, 2002, pp. 345-349. My comments on the paper are also on
the web site.
At 09:58 AM 04/30/2004, you wrote:
I am a grad student in the University of Miami's Music Engineering
program, and I am just starting to learn about auditory scene analysis,
particularly computational ASA models.
I know there are several CASA experts on this list, so I'd like to ask why
source separation seems to be so difficult. It seems that the general
consensus is that source separation is far too difficult, and research has
focused on understanding features within a mix. Yet, from what I've read,
current methods of feature extraction work quite well. It only seems
natural that we could write an algorithm that groups these features
according to their perceived source and creates separate audio streams
based on this information. While this would be much more difficult in
noisy or reverberant environments, I would imagine it would be quite
simple in a less complex one.
What is it that makes source separation so difficult?
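One way to see part of the difficulty is that a single-microphone mixture is just the sum of the source waveforms, so recovering the sources is an underdetermined inverse problem: many different source pairs produce exactly the same mixture. A minimal numpy sketch (the tone frequencies and variable names here are arbitrary illustrations, not anyone's actual model):

```python
import numpy as np

# Two "sources": sinusoids at different frequencies, one second at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
s1 = np.sin(2 * np.pi * 440 * t)   # source 1: 440 Hz tone
s2 = np.sin(2 * np.pi * 554 * t)   # source 2: 554 Hz tone

# A single-channel mixture is just the sum: one equation, two unknowns
# per sample.
mix = s1 + s2

# A completely different pair of "sources" yields the very same mixture,
# so the waveform alone cannot say which decomposition is correct.
alt1 = s1 + 0.5 * s2
alt2 = 0.5 * s2
assert np.allclose(alt1 + alt2, mix)
```

Any separation algorithm therefore has to bring in extra constraints (harmonicity, common onsets, continuity, learned source models) to pick one decomposition over the infinitely many others, which is where the grouping cues of ASA come in.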