Re: selection pressure underlying scaled/measured music
My super-stimulus theory of music explains both musical scales and
measured time (and other aspects of music) based on the idea that each
aspect of music corresponds to the occurrence of "constant activity
patterns" in a particular cortical map, where these activity patterns do
not occur when the cortical map is responding to speech (but the
cortical map still processes information relevant to the perception of
speech).
For musical scales, the relevant cortical map (which I call the "scale
cortical map") is one that responds to the recent occurrence of pitch
values (modulo octaves), in proportion to how slowly the pitch value
changed when the pitch contour passed through each pitch value, and to
how many times it passed through. When responding to music, this map
becomes saturated for pitch values from the scale, but remains inactive
for in-between values. And this pattern of activity will *only* occur if
the melodies being perceived are created from a fixed set of pitch
values modulo octaves.
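As a toy illustration of this point (my own sketch, not code from the book), the scale map's response can be modelled as a histogram over pitch classes: a melody drawn from a fixed scale concentrates all activity on that scale's pitch classes modulo octaves, leaving the in-between values inactive:

```python
# Toy model of the "scale cortical map": accumulate activity per pitch
# class (semitones modulo 12). A melody built from a fixed scale puts
# all activity on the scale's pitch classes; other classes stay at zero.

def scale_map_activity(melody_midi_notes):
    """Return activity per pitch class (0-11) for a note sequence."""
    activity = [0] * 12
    for note in melody_midi_notes:
        activity[note % 12] += 1
    return activity

# A C major melody spanning two octaves (MIDI note numbers).
melody = [60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 72, 67, 64, 60]
activity = scale_map_activity(melody)
active_classes = sorted(i for i, a in enumerate(activity) if a > 0)
print(active_classes)  # only the C major pitch classes: [0, 2, 4, 5, 7, 9, 11]
```

This deliberately ignores the weighting by how slowly the pitch contour moves through each value, but it captures the basic point: a fixed pitch set modulo octaves produces a stable, gappy activity profile that speech melodies (with continuously varying pitch) never produce.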
For measured time, the cortical map (which I call the "regular beat
cortical map") is one that responds to the occurrence of regular beats
for different periods. If we consider music in say 4/4 time, with 16th
notes, then there are regular beats with period 4 crotchets, 2
crotchets, 1 crotchet, 1/2 crotchet and 1/4 crotchet. This will give
rise to 5 fixed peaks of activity within the cortical map (and low
levels of activity between those peaks). Again, this type of activity
pattern will *only* occur if the rhythm is both regular and hierarchical
(i.e. the beat periods form a sequence in which each period is an
integer multiple of the next).
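The hierarchy condition is easy to state precisely. As a toy sketch (mine, not from the book), measuring periods in 16th-note units, 4/4 time with 16th notes gives the periods [16, 8, 4, 2, 1] (i.e. 4, 2, 1, 1/2 and 1/4 crotchets), and the test is that each period divides the one before it:

```python
# Toy check of the "regular beat cortical map" condition: the beat
# periods in measured music form a hierarchy in which each period is
# an integer multiple of the next shorter one.

def is_hierarchical(periods):
    """True if each period is an integer multiple of the next shorter one.

    `periods` is assumed sorted from longest to shortest.
    """
    return all(longer % shorter == 0
               for longer, shorter in zip(periods, periods[1:]))

print(is_hierarchical([16, 8, 4, 2, 1]))  # True: the 4/4 + 16th-note hierarchy
print(is_hierarchical([16, 6, 4, 2, 1]))  # False: 16 is not a multiple of 6
```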
Some diagrams that illustrate these ideas can be found in the PDF
preview for my book "What is Music? Solving a Scientific Mystery" (see
http://whatismusic.info/, figures 10.3, 10.4, 10.5, 10.6, 10.9, 10.10
and 10.11 in the preview).
The analogies between musical pitch and musical time extend beyond the
explanation of scales and measured time. Both perceptual variables are
subject to non-trivial perceptual symmetries: pitch is subject to pitch
translation invariance (the same music played in a higher or lower key),
and time is subject to time scaling invariance (the same music played
faster or slower). Pitch translation invariance is more exact over a
larger range of translation, which may be partly because the pitch
scale modulo octaves is effectively a circular scale. Both of these
symmetries imply a requirement for calibration. In the case of pitch,
the calibrating element seems to be consonant intervals, which explains
why human sound perception includes the ability to distinguish consonant
from non-consonant intervals. In the case of time, the calibrating
element is probably the relationship between beat periods that are
simple multiples of each other (in particular when the multiple is 2).
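The two symmetries themselves are simple to demonstrate. In the following toy sketch (my own illustration, not from the book), transposing a melody leaves its interval pattern unchanged, and changing its tempo leaves the ratios between successive note durations unchanged:

```python
# Toy illustration of the two perceptual symmetries:
# - pitch translation invariance: transposition preserves intervals
# - time scaling invariance: tempo change preserves duration ratios

def intervals(pitches):
    """Successive intervals, in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def duration_ratios(durations):
    """Ratios between successive note durations."""
    return [b / a for a, b in zip(durations, durations[1:])]

pitches = [60, 64, 67, 72]          # C E G C (MIDI note numbers)
durations = [1.0, 0.5, 0.5, 2.0]    # in crotchets

transposed = [p + 5 for p in pitches]    # up a perfect fourth
faster = [d / 2 for d in durations]      # played twice as fast

print(intervals(pitches) == intervals(transposed))            # True
print(duration_ratios(durations) == duration_ratios(faster))  # True
```

What is *not* preserved is the absolute pitch values and the absolute durations, which is why a listener needs some calibrating element (consonant intervals, simple beat-period multiples) to anchor the relative patterns.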
The evolutionary aspect is that neither of these cortical maps evolved
for the purpose of listening to music; rather, both exist to process
the melodies and rhythms of speech. The occurrence of
constant activity patterns appears to be the condition that maximises
perceived "musicality", which raises the further question of what the
perception of musicality represents, and what evolutionary advantage
that perception confers.
Chen-Gia Tsai wrote:
Ethnomusicologists have observed that music tends to be either scaled or measured - or both. Percussion music may not be based on musical scales, but it is always subject to an isochronous temporal pulse. On the other hand, nonmeasured music is usually based on musical scales. In my experience, Chinese music contains very few percussion pieces that are neither scaled nor measured.
The question is: why are scales and measures (or beats) necessary? Animal acoustic communication appears to differ from music in these two features. What is the selection pressure underlying scaled/measured music of humans?
It is my feeling that scaled/measured music was not selected by humans (whereas goldfish were selected by humans). Human cultures that did not use music, or that used nonscaled/nonmeasured music, might have been unable to survive, because nonscaled/nonmeasured music poses difficulty for synchronous chorusing, which plays a key role in social bonding.
This hypothesis, the 'co-evolution of human society and music', is similar to the 'co-evolution of human brain and language' hypothesis (Deacon 1997). I guess my idea is not new, and perhaps someone can help me find relevant papers. Thanks in advance.
Deacon, T. W. (1997) The Symbolic Species: The Co-evolution of Language and the Brain. W.W. Norton.
Institute of Applied Mechanics, National Taiwan
University, Taipei, Taiwan
Humboldt-University Berlin, Germany