
tone deafness



As to the question of ``tone deafness'' for music, I also know of
``tone deafness'' in speech. I know some people who are perfectly able
to indicate which syllable in a spoken utterance carries a pitch
accent, but who are unable to say whether that accent is lent by a
rise (a rising pitch movement), a fall (a falling pitch movement), or
a rise-fall (a rapid succession of a rising and a falling pitch
movement). All three types of accent-lending pitch movements occur
very regularly in languages like Dutch, German, and both British and
American English. Furthermore, the intonation in the speech of those
``tone deaf'' people is perfect. One of them had worked for years at
IPO within the Hearing-and-Speech Group, and knew much about
intonation.
    Another observation is that many people are unable to reproduce an
utterance spoken by someone else with the same intonation contour.
For example, though in about 70% of all cases questions end on a high
level tone, many questions end on a low tone, especially wh-questions.
For some studies, I wanted to have utterances with well-specified
pitch contours, among which some wh-questions that I wanted to end on
a low tone. Many subjects were unable to do this; when I spoke such a
sentence twice, one time ending on a low tone, the other time ending
on a high tone, some subjects reacted with ``But I don't hear the
difference!'', let alone that they could reproduce it.
    A study by Tahta and Wood (1981) (``Age changes in the ability to
replicate foreign pronunciation and intonation'', Language and Speech
24(4), 363-372) shows that children aged 8 to 11 learn to reproduce
``foreign'' intonation patterns much faster than children aged 11 to
14. Their study concerned native English-speaking children imitating
French and Armenian intonation patterns.
    Furthermore, being ``musical'', or having a fine ear for music, is
not a sufficient condition for being able to reproduce an intonation
pattern without much effort. Those musically trained people who doubt
this are challenged to reproduce the intonation patterns of some
simple utterances spoken by speakers of an East-Asian tone language
with five, six, or seven lexical tones.

Before addressing what all this implies, I want to mention that speech
and music differ in many ways (although there are transitions). One of
the differences is that (most) music has a tonal centre, which
determines a limited set of correct pitches, whereas running speech,
as used in everyday communication, does not. A probably related
finding is that I found evidence that speech intonation is perceived
on an ERB-rate scale (JASA, 90, 97-102), while music, beyond any
doubt, is perceived on a logarithmic frequency scale. If this is true
(it is sometimes questioned), then one of the criteria for good
intonation in music, viz. being ``in tune'', does not apply to speech.
Producing correct intonation in speech is independent of any correct
tuning. This shows that the perceptual mechanism which decides the
correctness of the intonation in a piece of music is different from
the (language-specific) perceptual mechanism which decides whether the
intonation contour of a spoken utterance is correct.
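
To make the contrast between the two scales concrete, here is a small
Python sketch (mine, not from the original post). It uses the
Glasberg and Moore (1990) approximation of the ERB-rate scale, which
may differ in detail from the formula used in the JASA paper. On a
logarithmic (musical) scale every octave is the same size; on the
ERB-rate scale the same octave step covers a growing distance:

```python
import math

def erb_number(f_hz):
    # Glasberg & Moore (1990) approximation of the ERB-rate scale
    return 21.4 * math.log10(0.00437 * f_hz + 1.0)

def semitones(f1, f2):
    # interval on a logarithmic (musical) frequency scale
    return 12.0 * math.log2(f2 / f1)

# The same one-octave step at different frequencies:
for f in (100.0, 200.0, 400.0):
    d_st = semitones(f, 2 * f)                  # always 12 semitones
    d_erb = erb_number(2 * f) - erb_number(f)   # grows with frequency
    print(f"{f:5.0f} -> {2 * f:5.0f} Hz: "
          f"{d_st:.0f} semitones, {d_erb:.2f} ERB")
```

So two contours that are ``the same'' on one scale are not the same on
the other, which is one way the two perceptual mechanisms could come
apart.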

From the discussion in AUDITORY and from the above, it appears that
the perceptual mechanisms underlying the perception of intonation in
speech and in music are quite different. In (most) music, in contrast
with speech, a tonal centre determines a limited set of correct
pitches. In this sense music is special, not speech. Furthermore,
native speakers of a language mostly produce ``correct'' intonation
patterns. In speech, native speakers over the age of about 11 can
mostly determine whether an utterance is ``correct'' or whether there
is something wrong; being aware of what exactly is wrong is something
different.

My conclusion is that the question of ``tone deafness'' in music
should be kept separate from what happens in ``non-sung'' speech. To
make things even more complicated, I fear there is a third perceptual
mechanism related to frequency discrimination, and that is our
sensitivity to formant frequencies. We can easily follow frequency
variations of 40 octaves/s, e.g. in the second formant of ``we''. Any
such rapid change in the periodicity of a signal would not be
perceived as a change in pitch, but would induce the perception of a
new ``note'', a click, or whatever. I think this perceptual mechanism,
too, should be kept separate from the previous two.
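
As a rough illustration of where a figure like 40 octaves/s comes
from: assume the second formant in ``we'' glides from about 600 Hz
(near /w/) to about 2300 Hz (near /i/) in about 50 ms. These values
are my own assumptions, chosen as typical textbook formant values, not
measurements from this post:

```python
import math

# Assumed (illustrative) F2 glide for ``we''; exact values vary
# per speaker and are not taken from the original post.
f2_start_hz = 600.0   # assumed F2 near the /w/ onset
f2_end_hz = 2300.0    # assumed F2 near the /i/ target
duration_s = 0.05     # assumed transition duration (50 ms)

octaves = math.log2(f2_end_hz / f2_start_hz)
rate = octaves / duration_s
print(f"{octaves:.2f} octaves in {duration_s * 1000:.0f} ms "
      f"= {rate:.0f} octaves/s")
```

Under these assumptions the glide comes out on the order of 40
octaves/s, far faster than anything heard as a pitch movement.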


--
Dik J. Hermes
Institute for Perception Research / IPO
P.O. Box 513
NL 5300 MB Eindhoven

*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*
!                                                         !
*      NB  After 10 October 1995 telephone number,        *
!      telefax number and E-mail address will change:     !
*                                                         *
!      Tel.:   +31 40 2773842 / +31 40 2773873            !
*      Fax.:   +31 40 2773876                             *
!      E-mail: hermes@natlab.research.philips.com         !
*                                                         *
!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!