Generating a continuum of consonant to dissonant sounds
As part of a machine learning research project investigating
audio/visual cross-modal perception, I'm looking at the relationship in
perceived correspondences between "simple" sounds and visual inputs to
"complex" sounds and visual inputs.
Most importantly, I'm interested in the _lack_ of correspondence between
the two, e.g., simple shapes with complex sounds and vice versa, and the
impact of these "disagreements" on classification and reaction times.
I'm curious what principled studies (or, perchance, code) exist for
generating sounds that range continuously from consonant to dissonant.
I can easily think of ways of doing this mathematically, e.g., randomly
phase-shifting the harmonics, but I'm curious what the psychoacoustics
community has to say on the issue.
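(For concreteness, here is one naive mathematical approach of the kind I mean: a minimal Python/NumPy sketch that interpolates a complex tone's partials from exact integer harmonics toward randomly detuned positions, with a single parameter `d` controlling the degree of mistuning. The detuning range of one semitone per partial is an arbitrary illustrative choice, not a perceptually calibrated scale.)

```python
import numpy as np

def tone_continuum(d, f0=220.0, n_partials=6, dur=1.0, sr=44100, seed=0):
    """Synthesize a complex tone whose partials move from purely
    harmonic (d=0, consonant) to randomly detuned (d=1, rough).

    d          -- mistuning amount in [0, 1]
    f0         -- fundamental frequency in Hz
    n_partials -- number of partials to sum
    dur, sr    -- duration (s) and sample rate (Hz)
    seed       -- fixes the random detuning pattern so stimuli
                  differ only in d along the continuum
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        # detune the k-th harmonic by up to +/- d semitones (100 cents)
        cents = rng.uniform(-100.0, 100.0) * d
        f = k * f0 * 2.0 ** (cents / 1200.0)
        out += np.sin(2 * np.pi * f * t) / k  # 1/k amplitude rolloff
    return out / np.max(np.abs(out))  # normalize to [-1, 1]

# e.g., five stimuli stepping from consonant to dissonant:
stimuli = [tone_continuum(d) for d in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

My worry, and the reason for asking, is whether such a purely mathematical interpolation actually tracks perceived dissonance, which is why pointers to validated psychoacoustic models would be preferable.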
Any pointers would be greatly appreciated. And of course, if you're
aware of anything more directly addressing the problem I described, that
would be most welcome as well.