Da Musically Inclined Bomb

DePauw University's First Year Seminar on Writing about Music

Monday, December 04, 2006

is that pig over there?

Learning Absolute Pitch by Children: A Cross-Sectional Study

Hello. I am here today to discuss the article "Learning Absolute Pitch by Children: A Cross-Sectional Study" by Ken'ichi Miyazaki and Yoko Ogawa.

First off, I thought I'd start by letting you know about some key words I will be using throughout my presentation. Absolute pitch, or AP, is the ability to recognize or sing a given isolated note; it is also called "perfect pitch." Early learning is the view that AP is learned through extensive training or repeated exposure to musical stimuli. The critical period is the time during early childhood when AP develops; after this critical period, training cannot produce AP. Pitch naming is producing the correct name for a given tone. Finally, music training is formalized training in conceptual musicianship.

In order to understand this study, you must be familiar with AP. AP listeners are able to name, accurately and quickly, the musical pitch of isolated tones presented without any musical context or reference tone. The proportion of AP possessors in the general population is estimated to range from 1:1,500 to 1:10,000, and the percentage of AP possessors among musicians ranges from 3.4% to 15%. No one is certain how people acquire AP, or if it can truly be acquired; however, it could be the result of early musical training. Previous studies have shown that virtually all tested AP possessors had music training by the age of six, and self-reported AP possessors had all commenced music training early, as early as ages three to five.

The purpose of this report is to investigate the learning process of AP. The researchers tested a cross-section of children aged 4 to 10 on their pitch-naming ability. Because this is a cross-sectional rather than longitudinal study, the factor of training could not be manipulated for the purpose of the experiment, and the effects of the training could not be directly evaluated.

The children's musical training took place at a school in Tokyo, Japan. The aim of this school was to help children develop the capability to express themselves through music, not to train professional musicians. The children took a two-year Primary Course with a one-hour weekly lesson covering fundamental music skills and ear training. The teaching method emphasized listening, singing, and playing piano. Children first learn the notes C4, D4, E4, F4, G4, and C3. Then the children sing with lyrics, sing in solfege, play on piano, and play on piano while singing in solfege in order to remember the pitches. After the children have mastered the white keys, they move on to the black keys. When the children have finished the Primary Course, they can continue to the Advanced Course, which extends the basic and applied music skills; here, the children get to play, compose, arrange, and improvise.

The method of this test was relatively simple. There were 104 participants (children) ranging in age from 4 to 10, all tested three months into the school year. The test used a grand piano and an electronic organ. The tester would play thirty-six chromatic pitches spanning three octaves in random order. Successive tones were always separated by at least a perfect fifth, and there were never any octaves, so the students couldn't use relative pitch. The testers took every precaution to make the children feel at ease: the students aged 4 were not tested with the electronic organ, and all students were given introductory sessions before they were tested.
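To make the tone-ordering constraint concrete, here is a minimal Python sketch of how such a test sequence might be generated. This is my own illustration, not the authors' procedure; it assumes the 36 pitches are MIDI note numbers 48-83 (three chromatic octaves) and treats the separation rule as a gap of at least seven semitones, per the description above.

```python
import random

def make_test_sequence(low=48, high=83, min_gap=7):
    """Order all 36 chromatic pitches (MIDI numbers low..high) randomly so
    that successive tones are at least min_gap semitones apart and never
    form an octave, discouraging relative-pitch strategies."""
    while True:  # restart from scratch if we hit a dead end
        remaining = list(range(low, high + 1))
        random.shuffle(remaining)
        seq = [remaining.pop()]
        while remaining:
            choices = [p for p in remaining
                       if abs(p - seq[-1]) >= min_gap      # wide leap only
                       and abs(p - seq[-1]) % 12 != 0]     # never an octave
            if not choices:
                break  # no legal next tone left: start over
            nxt = random.choice(choices)
            remaining.remove(nxt)
            seq.append(nxt)
        if not remaining:
            return seq

print(make_test_sequence())
```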

The primary focus of the results was on the piano tones because not all students were tested with the organ. There was a general tendency for accuracy to increase with age, but the accuracy for children ages 5 and 6 was still widely distributed. Also, there was a distinct difference between the children's fluency with white keys versus black keys: most ten-year-olds knew all of the white-key notes but still struggled with the black keys. Accuracy with the organ tones was lower, particularly for children seven and up. However, all response times remained quick, much to the surprise of the testers.

Many factors bear on the validity of this test. First of all, the children tested came out with varying degrees of absolute pitch; whether or not they actually had it is debatable. All children were volunteers whose parents responded to a solicitation, so it wasn't truly random selection. Children from the Advanced Course were chosen because of their progress in piano skills, whereas the students from the Primary Course were unselected. Depending on the child, there could be varying degrees of motivation to do well on the test. One could argue that comparing this data with data from a school that isn't focused on music would yield notable outcomes, but that has yet to be done. The test doesn't really address the difference in timbres within the experiment or how it affected the children's performance. Also, it could have been possible for some of the children to use relative pitch, though it would have been difficult.

I would like to end my presentation with some questions that remain after this test. What about standard behavioral and physical development in children? Should different timbres be introduced at an earlier age? Are the differences in comprehension of white and black keys due to training? Is it actually more difficult for a child to understand black keys, or is this confusion due to a misunderstanding of a musical concept? Did they consider the Eguchi method of AP training, in which children identify chords first instead of pitches, causing them to focus on whole tonal characteristics? Does AP really matter to musicians? Does it interfere with the development of relative pitch?

Thank you for listening.

Sunday, December 03, 2006

The effects of repeated exposure on liking and judgements of musical unity of intact and patchwork compositions

One of the biggest problems in composing music is how to combine different musical ideas into a coherent composition. Western music tends to use "repetition and variation of musical ideas" to address this problem: compositions use musical themes that are memorable and easily recognized, brought back through "repetition in a varied form throughout the piece." Repetition is very important in music, but variety is also essential; new ideas and themes keep listeners interested. The central idea, however, is that music must be connected. Music is often explained from a musician's point of view, often with the written score, but not much attention has been paid to the casual listener. "Can a casual listener appreciate the differences between compositions that are well-unified and those that are not?" As stated in the article, "Some studies suggest that we may overestimate the ability of listeners to listen analytically. Even musically trained participants often do not focus on motifs or themes or repetition of musical ideas in music listening tasks unless directed to do so" (408). This shows that most people, even musically trained people, just plainly listen to music.
The procedure of the experiment is as follows:
The study was done on two separate days (a Tuesday and a Thursday). The participants were seated at individual tables so that everyone's responses would be genuine. There were 74 participants in three groups of 22 to 26. Intact compositions are pieces that have not been altered in any way; patchwork compositions are pieces spliced together from many different pieces. On each day, three intact compositions and three patchwork compositions were played twice each, and two filler compositions were played once each to separate the examples. All fourteen examples were played in random order. For example:
I(1) P(6) F(4) I(7) P(2) etc.
By the end of the experiment, both intact and patchwork compositions were played four times each and the four fillers once each. The proctor of the experiment told the participants to rate the pieces on varying factors.
The participants rated the music after it was played. They rated simply whether they liked the piece, as well as its unity, changes in mood, pitch, and tempo, and how good the ending was. All the rating scales were found to be significant predictors of the unity ratings of both musically and unmusically trained participants. For hearings 2 through 14, participants were also asked to indicate whether the piece had been played earlier or was being played for the first time. The second session took place on Thursday, where the same 74 participants were asked to do the same things as in the first session. At the end, the students were debriefed, given the music used in the study, and given a musically themed candy bar. The first session lasted 65 minutes and the second lasted 55 minutes.
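As a quick illustration of the presentation scheme described above, one day's random order could be generated like this. This is my own sketch with hypothetical labels; the article's actual numbering of examples may differ.

```python
import random

def day_order(n_intact=3, n_patchwork=3, n_fillers=2):
    """One day's presentation order: each intact (I) and patchwork (P)
    piece appears twice, each filler (F) once -- fourteen items total."""
    items = ([f"I({i})" for i in range(1, n_intact + 1)] * 2 +
             [f"P({i})" for i in range(1, n_patchwork + 1)] * 2 +
             [f"F({i})" for i in range(1, n_fillers + 1)])
    random.shuffle(items)
    return items

print(" ".join(day_order()))  # e.g. I(2) P(1) F(2) P(3) I(1) ...
```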
Hypothesis
If the participants are sensitive to unity, then intact pieces should be rated higher than patchwork pieces, and patchwork pieces should be rated lower on repeated hearings. The differences should be more pronounced for musically trained participants than for untrained ones. Under the mere exposure hypothesis, ratings of both should go up on repeated hearings.
The Results:
There was an overall trend across the hearings. Goodness of ending showed a significant trend, stronger than the trend for overall liking, and increased across hearings in each case.
Liking:
There was a general increase in mean liking from session 1 to session 2, but liking didn't increase from hearing to hearing.
No significant trends were found for unity, clarity of themes, or repetition across hearings, but significant trends were found for the variables of interest: goodness of ending and liking, which supports the mere exposure hypothesis. Goodness of ending and liking increased for the patchwork pieces but not for the intact ones; patchworks had a general upward trend, while intact pieces had a general downward trend. No other trends were significant.
At first the intact pieces were rated higher, presumably because they were optimally complex while the patchworks were supracomplex. The change over time is due to both seeming less complex on repeated hearings, with the patchworks retaining higher levels of interest. Few differences were found between musicians and non-musicians.
The limitations were that participants' ability to recognize thematic and structural features often improves with repeated hearings, yet ratings were most accurate on first hearings and then declined with repetition, suggesting that the participants lost interest. Immediate repetitions may be necessary to facilitate deeper analytical listening. Unity is important, but whether a casual listener can fully appreciate unity is still an open question.

Musical Rhythm, Linguistic Rhythm, and Human Evolution by Aniruddh D. Patel

A recent debate concerns the evolutionary status of music. Some argue that humans have been shaped by evolution to be musical, while others say that there is no natural selection involved and that music is merely an alternative use of our basic cognitive skills. In order to settle this debate, we have to ask whether basic components of music are innate, specific to music, and unique to humans. There have been studies suggesting that musical pitch perception does not involve natural selection. To pursue the question further, we can use musical rhythm as an example; unfortunately, this subject has been left largely unexplored. So our question now is whether musical rhythm can be shown to be a product of evolution in humans.

This hypothesis was first explored by Darwin. Arguments began when skeptics claimed that the love of music is "a mere incidental peculiarity of the nervous system, with no teleological significance." In simpler terms, this means that we can understand music because we are interested in it; we were not designed to automatically know about it. So to prove these skeptics wrong, we need to find basic musical capacities that cannot be an alternative use of basic cognitive skills.

When we look at musical rhythm, we have to look at its similarities with the rhythm of language. Both types of rhythm have group boundaries, pitch, and duration, and grouping in musical rhythm and linguistic rhythm uses the same part of the brain. Every type of music, from every culture, uses a regular beat that any listener can recognize and clap along with. Unlike music, however, language has no recurring syllable stress; in other words, there is no recognizable pattern in the rhythm of language. The two rhythms are still related in that they both have meter, which means that some beats are stronger than others. Even though the stressed syllables in linguistic rhythm do not create a regular pulse, it can be assumed that this grouping of stressed and unstressed beats originated in language and was later used in music. So here is the argument: humans can follow language, which consists of complex and irregular beats; therefore we can assume humans are more than prepared to follow a regular beat. The proof to support this argument is that listeners usually tap ahead of the beat while listening to music. The ability to recognize and move to a beat is called beat perception and synchronization, or BPS. BPS is unique to music and did not originate in language, so separate studies are required for BPS.

When talking about BPS, we need to know several things. One is whether BPS is innate, that is, present at birth. The argument for BPS being innate is that although babies cannot tap a beat, they also cannot yet speak; since speech is considered innate, the absence of infant tapping does not show that BPS is not innate. BPS would therefore have to be studied developmentally. There are factors preventing this study: we do not know how old a child is when he can first tap a beat, and we do not know the percentage of adults who can tap a beat.

Another thing we need to know about BPS is whether it is domain-specific in the brain. There are two different ways to look at this. The first is that brain damage that affects BPS also affects other nonmusical cognitive skills. The second is that brain damage that affects certain functions does not harm others; an example of this is when rhythmic abilities are affected while pitch processing remains relatively undamaged. As you may have guessed, there are factors preventing this study as well. There are no studies on the relationship between deficits in BPS and other cognitive skills, and nothing proves that music is a byproduct of other brain functions.

The third and final question about BPS is whether it is specific to humans. The question here is whether nonhuman animals naturally produce music. If an animal can acquire or develop the ability to produce music, then the ability would not be an adaptation for music. So can animals learn a beat? Primates are taught sign language, but there is no record of anyone ever teaching an animal to tap, peck, or move to a beat. If animals could learn a beat, then natural selection for music would not be necessary for BPS. The animals we would choose to study are chimps and bonobos, because they already drum with their hands and feet voluntarily as part of their behavior and they are among the most intelligent of their kind. The question that follows is whether apes are capable of BPS. To figure this out, we need to understand the basal ganglia, deeply placed masses of gray matter in each cerebral hemisphere of the brain. Rhythms with a regular beat are associated with increased activity in the basal ganglia, which are also involved in interval timing, motor control, and sequencing. Therefore, the brain structure that keeps the beat also controls the coordination of patterned movement. So if BPS only required interval timing and motor control, apes would be capable of BPS because they have basal ganglia as well. However, BPS also requires a relationship between hearing temporal intervals and tapping them out; the reason we are able to do this is that human evolution has modified our basal ganglia to have that relationship. BPS likewise requires tight coupling between auditory input and motor output, the same coupling required for vocal learning, a trait that has also come about through evolutionary modification of our basal ganglia.

The conclusion is that the capacity for vocal learning is necessary in order to synchronize with an auditory beat. This hypothesis predicts that teaching nonhuman primates BPS would be unsuccessful; even if it were, it would still be too early to conclude that BPS is unique to humans.

Musical Rhythm, Linguistic Rhythm, and Human Evolution


By: Aniruddh D. Patel, The Neurosciences Institute


"Musical Rhythm, Linguistic Rhythm, and Human Evolution" is an article discussing the contrasting ideas that music was shaped by natural selection or that it developed from general cognitive abilities. The article follows a similar article discussing pitch in relation to evolution and cognitive skills.

The first argument, evolution, introduced by Darwin in 1871, is based on the idea that humans have changed and adapted in ways that let us understand music. The second argument, adapted cognitive skills, established by William James, says music is understood by using skills like thinking, reasoning, remembering, imagining, or learning.

Can music be related to the way we talk? Pitch movement and duration develop in our speech habits early in life. According to evidence from neuropsychology and neuroimaging, similar parts of the brain are used to group what we say and what we play. However, beats in music are determined by meter, while the beats in what we say have no particular timing.

BPS: Innateness

Innateness is defined as being born with a skill or ability. So are we born with BPS? As far as we know, infants do not synchronize their movements to a musical beat at birth. But we can't be sure they don't have BPS, because they may not be able to physically show it (i.e., they don't talk). So to address the theory of being "born with natural beat synchronizing," developmental studies must take place. Through testing we would be able to discover whether the brain is specifically prepared for BPS. If we discover it is not, we can use other studies to determine at what age children or adults gain BPS by monitoring their ability to follow rhythm by clapping, tapping, or bobbing up and down.

BPS: Domain-Specificity

Neuropsychological literature presents two cases of individuals with musical rhythmic disturbance after brain damage. In the first case, the subject's rhythmical abilities were disrupted but pitch skills were still intact. In the second case, the subject could determine simple differences between rhythms but could not reproduce or evaluate patterns. So, is brain damage disabling areas of the brain that work only with musical abilities? No tests have been conducted to examine relations between deficits in BPS and deficits in other basic cognitive skills. But if such relations could be found, it would suggest that BPS is based on abilities recruited from other brain functions.

BPS: Human-Specificity

Humans are not the only ones who can be studied to investigate BPS. As far as we know, animals do not naturally produce music; therefore, we can study their behavior to better understand BPS. If an animal could acquire the ability to produce music, then it would have used general cognitive skills. But can animals learn BPS? Despite decades of psychology and neuroscience research, not a single animal has been trained to tap, peck, or move with an auditory beat, so without proper testing we can't be sure.

However, in the apes' favor, chimps are capable of drumming with their hands and feet when they play, which means they can voluntarily produce rhythmic movements on a time scale appropriate for BPS.

Human testing shows that rhythms with a regular beat are associated with increased activity in the basal ganglia, a part of the brain involved in measuring the time intervals between beats. If one assumes that BPS requires only a common brain structure for interval timing and motor control, then one would expect chimps to be capable of BPS.

However, we know that BPS requires more. A relationship between audition and patterned movement is shown to be necessary by the fact that purely visual rhythms induce BPS in humans only poorly. Therefore, we move to vocal learning.

Vocal learning is the ability to learn to produce vocal signals based on auditory input and feedback. An evolutionary perspective shows that some species have developed vocal learning (e.g., humans, parrots, songbirds). Humans find vocal learning easy because it is needed to learn how to speak and to verbally survive in our society.

Through research, it has been discovered that vocal learning requires a tight relationship between auditory input and motor output in the bird, and neurobiological research shows that this tight relationship involves modifications to the basal ganglia in the bird's brain. Birds and mammals share many anatomical features of the basal ganglia, so does that mean human basal ganglia have also been modified by natural selection for vocal learning?

Presentation Script

Absolute pitch, also known as perfect pitch, is defined by W. D. Ward and E. M. Burns in their article "Absolute Pitch" as the ability to attach labels to isolated auditory stimuli on the basis of pitch alone, or, in everyday terms, the ability to identify a note by name with no reference note or to produce a correct pitch without a reference. There has been much debate on whether absolute pitch is genetic or can actually be taught.
This experiment, "Learning Absolute Pitch by Children: A Cross-Sectional Study," was conducted by Ken'ichi Miyazaki and Yoko Ogawa. The experiment took place in a private music school in Tokyo. Children are enrolled at the age of four and begin to play piano and gain a sense of musical knowledge through activities using fixed-Do in order to emphasize the base of C4, and are shortly afterward introduced to other pitches, with a focus on C4, D4, and E4.
In the Primary Course, students learn pitches and pitch names through singing, memorizing, and playing songs on the piano. In the Advanced Course, which covers ages 6 to 10, students extend their musical opportunities to playing, composing, arranging, and improvising music. The experiment used 104 children: 13 four-year-olds, 18 five-year-olds, 13 six-year-olds, 14 seven-year-olds, 26 eight-year-olds, 14 nine-year-olds, and 6 ten-year-olds.
The test was held approximately three months after the beginning of the year. Another element of the experiment was the use of different timbres: both a Yamaha grand piano and a Yamaha electronic organ (generating string sounds) were used. The test tones included 36 chromatic pitches over three octaves; however, the participants were not asked to identify the specific octave, such as C5, but simply the pitch name, such as A or B. The pitches were given in random order and separated by at least 7 semitones (a perfect fifth) each time in order to discourage the use of relative pitch. Relative pitch is the use of a given note to determine a second note. As stated earlier, two timbres were used; however, it was decided not to use the organ with the 4-year-old age group, as they seemed to lose interest in the activity very quickly, as most of you are probably losing interest in this presentation already.
The participants responded to the given tone using the fixed-Do solfege system they had been taught at the school. Fixed-Do means that in every key C is Do, and the other pitches are Do-sharp, Re, and so on. No feedback was given as to whether a response was correct; instead, constant support and encouragement were given to the participants to keep them motivated throughout the experiment. Each participant was tested alone, and a video camera was used in order to determine response times at a later point.
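To illustrate how responses could be scored under fixed-Do, here is a hypothetical sketch of mine (not code from the study): each pitch class maps to one solfege label regardless of octave, so octave errors do not count against the child.

```python
# Fixed-Do labels: C is always Do, regardless of key or octave.
FIXED_DO = ["Do", "Do-sharp", "Re", "Re-sharp", "Mi", "Fa",
            "Fa-sharp", "So", "So-sharp", "La", "La-sharp", "Ti"]

def solfege_name(midi_note):
    """Fixed-Do label for a MIDI note number (60 = C4)."""
    return FIXED_DO[midi_note % 12]

def is_correct(midi_note, response):
    """Only the pitch name must match; the octave is ignored."""
    return response == solfege_name(midi_note)

print(solfege_name(61))       # C#4 -> "Do-sharp"
print(is_correct(48, "Do"))   # C3 answered "Do" -> True
```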
The results are presented as a series of graphs that I will pass around. The first graphs focus on the responses to the grand piano because not all of the students were tested on the electronic organ. As can be seen in the graphs, the tendency is for the percentage correct to increase with age; however, after age seven there is little further improvement. The first graph shows that even though the general line is increasing, there are still a few students who were achieving below the overall trend; each individual is represented by a black dot. The second graph shows two lines, the first being the white-key pitches and the second the black-key pitches. As you can see, the white-key pitches were much more easily recognized than the black-key pitches. Again, there was a large increase between the ages of 4 and 6 and a plateau after the age of 7. The other graphs that I will pass around show the answers given to each pitch for each age group. At first glance they seem rather confusing, so if you have any questions, please ask. Figures 4 and 5, the last two graphs (yes, we are nearing the end), show the responses to the organ pitches. These responses are lower, which has been rationalized by the fact that the organ was not the primary instrument on which the participants studied; children who had had other training on the electric organ scored much higher.
In general this study did little to prove anything; it simply added more research toward an even larger study. It does suggest that the critical period for the possible development of absolute pitch ends around the age of seven. The study does not settle the issue of the etiology of absolute pitch (etiology is the study of the cause of something), but it can be argued that the data may be used as evidence that the learning process of absolute pitch is affected by music training in childhood.

Preserved Singing in Aphasia: A Case Study of the Efficacy of Melodic Intonation Therapy

This study demonstrates the efficacy of Melodic Intonation Therapy (MIT) in a neurologically stable amateur male musician with severe expressive aphasia using a pre- versus post-treatment design.
Aphasia is the loss or impairment of the ability to use or comprehend words resulting from brain damage. In this study the patient, referred to as KL, had difficulties with language expression and comprehension as well as verbal apraxia, but he retained the ability to repeat words by copying someone's mouth movements. The purpose of this study was to help KL regain some of his ability to produce words and phrases using MIT sessions.
The study involved three tasks. In Task 1, pairs of sentences were presented both visually on printed cards and verbally, and KL was required to identify the grammatically correct sentence within each pair; he got 16 of 18 correct. Task 2 involved KL constructing grammatically correct sentences from printed word cards presented in random order; he was unable to produce 8 of 9 sentence anagrams, showing that he understands sentence structure but cannot produce his own. In Task 3, three-word phrases were generated and randomly put into one of three groups of ten. Each phrase was printed beneath a line drawing of its last word, with each group having a similar number of phrases with one-, two-, three-, or four-syllable words. Group 1 had tunes composed for each phrase, with rhythm and pitch contour similar to that of conversation; these were the MIT phrases. Group 2 phrases were given a slightly exaggerated rhythm that approximated the rhythms of Group 1 without stepping away from the natural rhythm of speech; this was the repetition group. Both Group 1 and Group 2 phrases were recorded for practice use in between sessions. Group 3 phrases were unrehearsed; KL saw them for the first time in therapy.
For the procedure, KL's responses were recorded throughout the sessions, with a baseline performance taken prior to the start of therapy by having him say phrases from each group in response to a picture prompt, written word prompt, or spoken word prompt. He attended four weeks of therapy with a music therapist and one of the researchers. KL had biweekly rehearsal sessions of both Group 1 and Group 2 phrases under identical training conditions. The rehearsal followed the MIT method, which has six levels of phrase production graded by the difficulty of the phrase and the degree of prompting provided by the therapist. Level 6 is the goal, as it is when the participant can produce an answer using the target phrase with no prompting. KL was allowed three attempts to complete each level, and no melody was incorporated into rehearsal of Group 2. Group 3 phrases were presented at baseline and at week 5, one week after the end of therapy.
The first follow-up took place at week 5, with a second at week 9. In these sessions, phrases from each group were presented in random order. He was first given a picture prompt, then a melodic prompt for the MIT phrases, and finally a spoken or sung prompt until he was able to complete the phrase. The melodic and sung word prompts used the melodies accompanying the Group 1 phrases.
The results were analyzed with a t-test comparing the mean number of times KL reached Level 6 across the 8 sessions of therapy. This showed that he had a significant performance advantage for MIT phrases over repetition phrases. A repeated measures analysis of covariance (ANCOVA) was used to examine the proportion of words KL correctly produced at baseline versus follow-up 1 and among the phrase groups; KL's performance on the MIT and repetition phrases was significantly better than his performance on unrehearsed phrases across time. ANCOVA tests were also used to assess the long-term efficacy of MIT by examining the proportion of words KL correctly produced for the rehearsed phrases. His performance on the repetition phrases deteriorated at a faster rate than on the MIT phrases, creating a performance advantage for MIT phrases. The therapy also showed that KL was significantly more likely to reach the stage where he could answer a question with a sung target phrase than with a spoken phrase. Also, KL's MIT phrases were more commonly produced without a prompt and were more likely to be complete utterances. All this suggests that the effects of MIT are longer lasting after therapy and that MIT is very helpful in helping people with aphasia learn to comprehend and produce phrases.
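As a rough illustration of the first analysis, a paired t-test across the eight therapy sessions would look like the sketch below. This is a minimal example; the per-session counts are invented for demonstration and are not the study's data.

```python
from scipy import stats

# Hypothetical per-session counts of reaching Level 6 (NOT the study's data)
mit        = [4, 5, 5, 6, 7, 7, 8, 8]
repetition = [3, 3, 4, 4, 5, 5, 5, 6]

# Paired t-test: sessions are the repeated unit, conditions are compared
t, p = stats.ttest_rel(mit, repetition)
print(f"t = {t:.2f}, p = {p:.4f}")
```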

Musical Rhythm, Linguistic Rhythm, and Human Evolution

This article is called "Musical Rhythm, Linguistic Rhythm, and Human Evolution." It was written by Aniruddh D. Patel of The Neurosciences Institute.
The debate in this study is over the evolutionary status of music: whether evolution has shaped humans to be musical, or whether humans merely adapt existing cognitive skills. The article breaks music cognition down into four parts, as shown here.
Recently, the idea that human minds have been shaped by natural selection for music, first proposed by Darwin, has become widely accepted. There are skeptics, however, who believe music is an "enjoyable mental technology built from preexisting cognitive skills." The debate can be resolved by determining whether there are fundamental aspects of music cognition which are innate, that is, present from birth, and cannot be explained as adaptations of other cognitive abilities. The author concludes that, as of now, there is no reason to reject the idea that human minds have NOT been specifically shaped by natural selection for music.
Musical rhythm is similar to speech rhythm, since they both have rich rhythmic organization. They both use pitch movements and durational lengthening, and they both start early in life. Studies have shown that the two use similar brain substrates, which is good evidence that musical rhythm is in fact an offshoot of linguistic rhythm. Musical beats occur in the context of a meter, in which some beats are stronger than others. Interestingly enough, speech is also metrical, based on stress or prominence. This suggests that the tendency to organize rhythmic sequences may originate in language.
Beat perception and synchronization, or BPS, is a part of rhythm unique to music; it cannot be explained as a byproduct of speech rhythm. The key questions about BPS concern its innateness, its domain-specificity, and its human-specificity.
Infants don't synchronize their movements to a musical beat. This doesn't mean innateness is false, because infants also do not speak. One way to address the innateness of BPS is to look at developmental studies, in order to explore whether the brain seems prepared to acquire this ability. For now, we don't have enough information about BPS, including how early one can synchronize to a beat and who can attain this ability. More research is to come.
One way to study the domain-specificity of BPS is to see whether brain damage that disrupts it also disrupts other nonmusical cognitive abilities. The neuropsychological literature has descriptions of people with musical rhythmic disturbance after brain damage, as well as findings that rhythmic abilities can be selectively disrupted, leaving pitch processing skills intact. Again, this topic needs more research.
Animals do not naturally produce music, so if an animal could acquire this ability, it would mean that the ability is not part of an adaptation for music. In all the years of research and animal training, there has not been a single report of an animal being trained to tap, peck, or move in time with a beat. Could an animal learn BPS? If so, this would mean that natural selection for music is not necessary for BPS.
So there is the question of which animals to study. The obvious answer would be chimps or bonobos, since they are the most closely related to humans. Also, chimps and bonobos have short bouts of rhythmic "drumming" as part of display or play behavior, so we know they are capable of making rhythmic movements on an appropriate time scale. Despite this, there is still a question as to whether apes are capable of BPS, because of the brain circuits involved in beat perception and motor control. In humans, rhythms that have a regular beat are associated with the basal ganglia, a structure in the brain that is also used for motor control and sequencing. But if this structure alone were fully in charge of these things, you would expect that chimps, and other species such as rodents, would be capable of BPS. So we can conclude that it's not just one simple brain function. This is because BPS involves a special relationship between auditory temporal intervals and patterned movement, which means that somehow or another, human evolution modified the basal ganglia in a way that makes for tight coupling between auditory input and motor output.
One way this evolutionary force could have operated is in vocal learning, which means learning to speak or make sounds by hearing. This is universal in humans, since every child learns to speak by hearing, but it is common to only a few animals, such as songbirds and parrots. So humans are unique among primates in having complex vocal learning.
Neurobiological research on birds shows that vocal learning is associated with modifications to the basal ganglia, which are key in linking auditory input and motor output. So we can assume that the basal ganglia in humans have also been modified by natural selection for vocal learning.
So basically, a testable hypothesis would be that having the neural circuitry for complex vocal learning is necessary for the ability to synchronize with an auditory beat.
This hypothesis predicts that if you try to teach nonhuman primates, such as the chimp, to synchronize to a beat, it's probably not going to work. But it also says that even if it doesn't work on primates, it would still be premature to conclude that BPS is unique to humans.

The Origins of Music: Theories and their Flaws

In the last decade, the study of music evolution has significantly increased. Ian Cross analyzes two papers on this subject, and points out how they could be stronger in certain areas.

The first paper, by Justus & Hutsler, indicates that the capability to learn music is adaptive, and that inherent musical talent is the basis for the exploration and reconceptualization of music. The other paper, by McDermott & Hauser, is very broad-ranging and gives a detailed checklist of what one would need to give an evolutionary account of music. While writing their paper, McDermott & Hauser were influenced by developmental and ethological essays.

Both papers seem to misinterpret the "Neanderthal flute," a bone that is popularly believed to be the first musical instrument. McDermott & Hauser suggest that the earliest preserved instruments date back to 6000 BC, a date too late to be plausible. Justus & Hutsler say the "Neanderthal flute" was made by humans, when in fact it is believed to be the product of an animal's chewing; and if it were a musical instrument, the date at which it was made is too early a period for music to have started.

Ian Cross also notes that both papers lack specificity when it comes to defining music. McDermott & Hauser state “… a definition of music is not particularly important at this stage.” Cross concludes that their lack of specificity limits their arguments greatly.

McDermott & Hauser end their paper by claiming that music lacks referential precision, because it expresses emotion and is “commonly used to produce enjoyment.” Cross believes that music must be characterized as fully as possible. Only then can you understand how music relates to other aspects of human life, and propose theories on the evolutionary roots of human musicality.

Uses of Music in Everyday Life

Many people believe that, due to the development of recorded music and mass media, music has become a part of our everyday lives. There have been many studies examining just how we experience music from day to day. This one focuses on the five W's: who, what, when, where, and why.

To answer these questions, the researchers recruited 346 volunteers. The average age of the volunteers was 25.96 years, with ages ranging from 13 to 78. Each participant was asked to complete one approximately 25-minute questionnaire a day, prompted via text messaging, and 96.72% of the questionnaires were successfully completed. The participants were very diverse in ethnic background and occupation, as well as in their musical training and experience.

There were six parts to each questionnaire. First, the volunteers were asked if they were currently listening to music, or if they had at least heard some type of music since the previous survey. If not, they did not need to complete the rest of the questions. If they had a musical experience to report, they were then asked who they were with during the experience. Next were questions about what styles they had heard, whether they were able to choose the music, and whether they liked it. The fourth part asked where they had heard the music. The last two parts were split between those who had been able to choose what music they experienced and those who had not: the group that had a choice was asked why they chose what they did, and the group that had no choice was asked what effects the piece had on them. In both cases, they were given answers to choose from based on previous free-response answers to the questions by psychology undergraduates.

(Chart 1) The study ended after 14 days. On 38.6% of the occasions, people could hear music; on 60.8% of the occasions, they could not; and there was no response for 0.6% of the surveys. (Chart 2) Of the times that people could not hear music, the participants indicated 48.6% of the time that they had heard some music since the last survey was completed. On 48.5% of those occasions they had heard no music since the previous survey, and 2.9% of replies contained no response. Overall, this means that the participants had very high exposure to music.
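A quick back-of-the-envelope check of these figures (my arithmetic, using the percentages above) shows how they add up to the "almost 70%" overall exposure mentioned in the conclusion:

```python
hearing_now      = 0.386   # Chart 1: hearing music when texted
not_hearing      = 0.608   # Chart 1: not hearing music when texted
heard_since_last = 0.486   # Chart 2: of those, heard music since last survey

overall_exposure = hearing_now + not_hearing * heard_since_last
print(f"{overall_exposure:.1%}")   # -> 68.1%, i.e. "almost 70%"
```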

(Chart 3) Again, the main focus of this study was the five W's, and I'll start with Who? Only slightly more than a quarter of the musical experiences were heard by the participant alone. 18.4% of the time, the participant was with friends; 7%, with only a spouse or partner; 8.4%, with family members; 5.8%, with colleagues; 3.2%, with a boyfriend or girlfriend; 1.9%, with strangers; and 0.6%, with someone who was not any of the choices on the list. Clearly, more of the listening experiences were shared with others than experienced alone, which interestingly differs from the results of previous studies.

(Chart 4) The next question was what music the participants heard. There were 14 choices to pick from. The most popular answer was chart pop, heard 38% of the time, followed by R&B/soul at 8.4% and dance at 5.3%. Every other choice was picked at least a few times, but no more than 5% of the time; all of the percentages can be seen on Chart 4. The results of this part of the survey largely reflect record sales. For instance, the most heard genre, chart pop, sells very well, while classical music, at 3% of the experiences, does not sell so well.

(Chart 5) The participants were also asked when they could hear the music. The researchers made sure to account for the fact that the time of day the messages were sent (which varied day to day) would affect the answers, and used a special formula to ensure the results would not be greatly skewed. The responses were split into one-hour segments of the day, such as 3:00-3:59 AM or 7:00-7:59 PM, and the results were calculated as the number of times music could be heard compared to the number of people who responded within that hour. Experiences took place most often between 10:00 and 10:59 PM. For more general understanding, the chart splits the responses into the morning and afternoon (8:00-4:59) and the evening (5:00-11:00), including weekdays only. Since many participants were at work during the 8:00-4:59 slot, these results are also split between those who could and could not choose what they listened to. Of the experiences in the earlier half of the day, 63.9% were chosen and 36.1% were not; of those in the later half, 63.7% were chosen and 36.3% were not. (Chart 6) The results were also split into weekends and weekdays: 63.3% of weekday occurrences were chosen, compared to 66.2% on weekends, while 36.7% of weekday listening and 33.8% of weekend listening was not chosen. These results run counter to the idea that most music listening happens during leisure time rather than at the workplace, although the differences are not large.

(Chart 7) Another section of the survey asked where the volunteers were while they experienced the music. Only half of the experiences took place within the home, while almost one fifth took place in public places. Also, the two choices geared toward people deliberately choosing to listen to music, listening on purpose at home or going to a concert, made up only slightly more than one tenth of the experiences. These results clearly show that technological developments in recorded music have a great effect on people's exposure to music.

(Chart 8) The final part of the survey asked those participants who had chosen to listen to the music why they chose to do so. The most common answers were that they either enjoyed the music or it helped to pass the time, while the least common answers had to do with feelings or thought, such as bringing back memories, stimulating an emotion, or simply learning more about the music. (Chart 9) Those who had not chosen to listen to the music were in turn asked what effects the music had on them. While 31.6% of the experiences created the right atmosphere for the situation, and 28.7% of the time the listener enjoyed it, almost 15% of the music annoyed the listener.

So what do these results mean? Almost 70% of the answers showed that the volunteers were exposed to music at some point in their day. Around three quarters of the time, the participants were not alone, which means that music plays a large part in social activities. Furthermore, the data on what people are listening to is definitely consistent with what recorded music people are buying, which suggests that more exposure to a certain genre could drive higher sales of that genre. The study also showed that many people are not choosing the music that they hear, and that most of the instances take place at work or in public places, where the mass media has a very large presence. So in a nutshell, this study shows that the increase of mass media and developments in technology do, in fact, have a very large influence on how people are exposed to music every day.