Music is surprisingly mysterious for something so ubiquitous. For example, it's not really clear why we generally associate major keys with happy moods and minor keys with more somber feelings. Our choice of scales also seems somewhat arbitrary: within a single octave, humans can discern about 240 different musical tones, yet out of all the ways that tonal landscape could be divided, the divisions we actually use are fairly uniform, not only across Western music but in at least some other musical traditions.
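To put those numbers in perspective, here is a minimal sketch in plain Python (the 440 Hz starting pitch and the arithmetic are just conventional illustration, not anything from the papers) contrasting the familiar 12-tone equal-tempered division of the octave with steps fine enough to give roughly 240 tones per octave:

# Compare the 12 semitones of the equal-tempered scale with the ~240
# discriminable tones per octave mentioned above.
A4 = 440.0  # Hz; a conventional reference pitch

semitone_ratio = 2 ** (1 / 12)  # ratio between adjacent notes in 12-tone equal temperament
scale = [A4 * semitone_ratio ** n for n in range(13)]  # one full octave, A4 to A5
print("12-TET step size: {:.2%} per step".format(semitone_ratio - 1))
print("One octave of the chromatic scale (Hz):", [round(f, 1) for f in scale])

# If about 240 tones per octave are discriminable, each discriminable step
# is only about a quarter of a percent in frequency.
fine_ratio = 2 ** (1 / 240)
print("Step size for 240 equal divisions per octave: {:.2%}".format(fine_ratio - 1))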
A couple of papers from the lab of Dale Purves at Duke suggest that the answers to both questions are linked to the properties of human speech.
A paper in the Journal of the Acoustical Society of America compares the tonal qualities of excited and subdued speech and finds that the former contains more major intervals and the latter more minor ones, suggesting a source for the emotional qualities we hear in major and minor keys. Another paper, in PLoS One, shows that the musical intervals making up the most widely used scales are the ones that most closely resemble the harmonic structure of the vowel sounds in human speech.
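For readers less familiar with the acoustics, a rough illustration of the background idea may help (this is not the analysis the PLoS One paper actually performs, and the 150 Hz fundamental below is just an arbitrary, plausible speaking pitch): a voiced vowel, like any periodic sound, contains harmonics at integer multiples of its fundamental frequency, and the ratios between the low harmonics are the same small-integer ratios that define familiar musical intervals.

from fractions import Fraction

fundamental = 150.0  # Hz; an arbitrary, plausible speaking pitch
harmonics = [fundamental * n for n in range(1, 7)]  # the first six harmonics

# Approximate just-intonation ratios for some familiar intervals.
intervals = {
    "octave": Fraction(2, 1),
    "perfect fifth": Fraction(3, 2),
    "perfect fourth": Fraction(4, 3),
    "major third": Fraction(5, 4),
    "minor third": Fraction(6, 5),
}

# Each ratio appears directly between a pair of low harmonics.
for name, ratio in intervals.items():
    lo, hi = ratio.denominator, ratio.numerator
    print(f"{name:>14}: harmonics {lo} and {hi} -> "
          f"{harmonics[lo - 1]:.0f} Hz and {harmonics[hi - 1]:.0f} Hz")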
These close links between the tonal qualities of music and speech suggest that one reason music is such a powerful influence on humans is that it uses whatever mental machinery evolved to pay attention to the utterances of other humans (or as the Purves lab web page puts it, “These findings are consistent with the idea that humans have a bias for conspecific vocalizations.”).
You can read an article from Science Daily about this work. The two papers are:
Major and Minor Music Compared to Excited and Subdued Speech, by D.L. Bowling, K. Gill, J.D. Choi, J. Prinz, and D. Purves. Journal of the Acoustical Society of America, 127(1): 491–503, 2009.
A Biological Rationale for Musical Scales, by K. Gill and D. Purves. PLoS One, 4(12): e8144, doi:10.1371/journal.pone.0008144, published December 3, 2009.
Interesting. I wonder what this means for researchers (like Isabelle Peretz) who use studies of acquired amusia without aphasia (and vice versa) to argue for brain specialization for music and ultimately that music is an evolutionary adaptation rather than a cultural artifact. Peretz claims that encoding musical pitch along familiar scales is realized in a music-specific neural network (http://www.brams.umontreal.ca/plab/downloads/The_neuroscientist.PDF).
The idea that we like music because it reflects what we hear in speech would seem to challenge the view that the perception of musical pitch is specialized.
And what of those 4-5% of the population with congenital amusia? These “tone-deaf” individuals lack the fine pitch discrimination skills to differentiate between even the most familiar melodies, yet they can tell the difference between identically worded statements and questions distinguished by inflection only. Does this help support or refute claims that our appreciation for music is a felicitous result of our adaptations for language? Also makes me wonder if there’s evidence that amusics aren’t as good as others at hearing emotion in speech…
It seems like they're probably onto something, but it's easy to overgeneralize when talking about music. A minor chord doesn't have any more minor intervals than a major chord. Counting the wrap-around interval from the chord's top tone back up to the root's octave, a minor chord is built of three intervals: minor 3rd, major 3rd, perfect 4th. A major chord is: major 3rd, minor 3rd, perfect 4th. They still sound major and minor in inversion, e.g. perfect 4th, minor 3rd, major 3rd is still a minor chord (see the sketch below).
In other words, even when we're using specific, detailed musical terminology correctly, we are still giving only a drastically oversimplified picture of what is going on tonally, even within the limits of well-understood music theory.
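Here is a quick sketch of that point in Python, counting intervals in semitones (0 stands for the chord's lowest tone, and the pitch names in the comments are just examples):

def adjacent_intervals(chord):
    """Semitone intervals between adjacent chord tones, wrapping up to the octave."""
    tones = sorted(chord)
    steps = [b - a for a, b in zip(tones, tones[1:])]
    steps.append(12 - sum(steps))  # from the top tone back up to the root's octave
    return steps

major = [0, 4, 7]  # e.g. C E G, as semitones above the root
minor = [0, 3, 7]  # e.g. C Eb G

print("major triad:", adjacent_intervals(major))  # [4, 3, 5] = M3, m3, P4
print("minor triad:", adjacent_intervals(minor))  # [3, 4, 5] = m3, M3, P4
print("same interval sizes, different order:",
      sorted(adjacent_intervals(major)) == sorted(adjacent_intervals(minor)))

# Second inversion of the minor triad (e.g. G C Eb, measured up from G):
# the intervals rotate, but the collection of intervals (and the chord quality) is unchanged.
print("minor triad, 2nd inversion:", adjacent_intervals([0, 5, 8]))  # [5, 3, 4] = P4, m3, M3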
Lauren: Good points; thanks for the link to the Peretz paper! I hope future work will address the questions you raise. I really appreciate learning about other facets of research in this area.