The cochlea, with the help of the hair cells and the basilar membrane, separates sound into different signals for each frequency band. These signals are transmitted to a bundle of nerve fibers known as the auditory nerve, which carries them to the brain as if they traveled along separate wires.
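To picture this frequency separation, here is a rough illustrative sketch in Python, not an auditory model: the function name, the band edges and the test signal are invented for the example. It splits a sound into frequency channels, each of which can be thought of as one of those "wires".

```python
import numpy as np

def split_into_bands(signal, sample_rate, band_edges):
    """Crude analogy to the cochlea: split a sound into frequency channels.

    band_edges is a list of (low_hz, high_hz) pairs; each returned channel
    keeps only the spectral content inside its band. Illustrative only,
    not a model of the basilar membrane.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    channels = []
    for low, high in band_edges:
        mask = (freqs >= low) & (freqs < high)
        channels.append(np.fft.irfft(spectrum * mask, n=len(signal)))
    return channels

# Example: a mixture of 440 Hz and 1760 Hz tones separated into two "wires"
sr = 16000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)
low_channel, high_channel = split_into_bands(mix, sr, [(20, 1000), (1000, 8000)])
```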
The first stop in the brain is the thalamus, a structure located at the center of the organ that relays the signal to the primary auditory cortex, which identifies the frequency and intensity (roughly, the pitch and loudness) of the tone being heard. The auditory cortices (primary, secondary and tertiary) are located on both sides of the brain, in a region called the lateral sulcus, or Sylvian fissure.
But identifying the pitch and loudness of incoming sounds is not enough to recognize them as music. For that there is the secondary cortex, which analyzes information about harmony (the relationship among notes that sound at the same time), melody (the connection of the notes in their temporal succession) and rhythm (the pattern of accented and unaccented notes). All of that information then needs to be integrated. The tertiary cortex is in charge of that, and from there the signal passes to other brain regions, as we shall see.
Researchers in the neurophysiology of music have begun to understand these processes in recent years. To trace the path of music through the brain, some researchers study people with brain injuries that affect some of their musical abilities. By locating the lesion in the brain, deductions can be made about the role the affected area plays in the recognition of music. Other researchers use techniques that visualize brain activity in real time, such as positron emission tomography and functional magnetic resonance imaging, which allow the brain to be observed in action while it processes music.
Thus they have found that music activates not only the auditory cortex but also other brain regions specialized in very diverse tasks: those that control the muscles (particularly in people who play an instrument), the pleasure centers that are activated during feeding and sex, the regions associated with emotions, and the areas responsible for interpreting language.
According to Robert Zatorre, a neuroscientist at the Montreal Neurological Institute, musical activities (listening, playing, composing) put almost all of our cognitive abilities to work. Many neuroscientists are interested in the neurophysiology of music because it can reveal a great deal about how the brain works in general.
The study of language perception has influenced and, in many respects, preceded the study of musical cognition, probably because both music and speech are information transmitted through sound.
But today we know that the brain does not process music and language in the same way. Isabelle Peretz, a guitarist and psychologist at the University of Montreal, and her team have studied the disorder known as amusia, the inability to recognize musical sounds. People with amusia cannot learn simple melodies or detect errors in a familiar song, yet their linguistic skills remain intact: for example, they correctly distinguish the intonation of a statement from that of a question. Peretz thinks that amusia is due to a disorder of the primary auditory cortex, where notes and their loudness are recognized, which is the first step the brain carries out when analyzing music.
From Ideas to Music
As if that were not enough to distinguish music from language, researchers have discovered that language is processed preferentially in the auditory cortex of the left hemisphere, the one more given to analysis, while music is processed preferentially (though not exclusively) in the right auditory cortex. In musicians the left cortex intervenes more than in non-musicians, no doubt because musicians listen to music more analytically.
All in all, the analogies between music and language continue to guide research. In the 1950s the linguist Noam Chomsky argued that the human brain comes equipped with a kind of grammar program, not for any specific language but a universal grammar. Thus all the languages of the world, however different they may seem, would share a common structure at some level. Some composers, linguists and musicologists have extended Chomsky's ideas to music.
The linguist Ray Jackendoff and the composer Fred Lerdahl proposed in 1983 a theory of the universal grammar of music, according to which a composition is built from a limited number of notes combined according to a set of rules (a musical grammar). The rules give the notes a structure organized into layers of musical meaning. By listening to the sequence of notes, the listener's brain recognizes those layers in the same way that, in language, it recognizes verbs, nouns, adjectives and so on.
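To make the idea of a "musical grammar" concrete, here is a toy sketch in Python: a handful of made-up rewrite rules (not the actual Lerdahl-Jackendoff theory, which is far richer) that expand a phrase into motifs and, finally, into notes, so the output carries a hidden hierarchical structure just as a sentence does.

```python
import random

# Hypothetical toy rules: non-terminals expand into sub-phrases,
# terminals are note names. Purely illustrative.
RULES = {
    "phrase":  [["opening", "closing"]],
    "opening": [["motif", "motif"], ["motif"]],
    "closing": [["motif", "cadence"]],
    "motif":   [["C4", "E4", "G4"], ["D4", "F4", "A4"]],
    "cadence": [["G4", "C4"]],
}

def expand(symbol):
    """Recursively expand a symbol until only note names remain."""
    if symbol not in RULES:          # terminal: an actual note
        return [symbol]
    expansion = random.choice(RULES[symbol])
    notes = []
    for part in expansion:
        notes.extend(expand(part))
    return notes

print(expand("phrase"))  # e.g. ['C4', 'E4', 'G4', 'D4', 'F4', 'A4', 'G4', 'C4']
```

The listener, on this view, does the reverse: from the flat sequence of notes, the brain recovers the layered structure the rules produced.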
The American ethnomusicologist Alan Lomax reached a Chomskian conclusion, also in the 1950s, after analyzing the songs of many cultures. According to Lomax, just as speech can build an infinite number of sentences from a finite number of sounds, an endless number of songs can be generated from only 37 rhythmic, harmonic and melodic elements.
More recently, in the 1990s, Jukka Louhivuori and Petri Toiviainen, of the University of Jyväskylä in Finland, also influenced by Chomsky's ideas, designed models that generate melodies and converted them into computer programs that "compose" musical phrases. Louhivuori and Toiviainen tested how well these programs imitate human composers by having many people listen to and evaluate the melodies.
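As a flavor of what a melody-generating program can look like, here is a minimal sketch, assuming a simple first-order Markov chain over invented note transitions; it is not the Louhivuori-Toiviainen model, only an illustration of the general idea of composing by rule.

```python
import random

# Hypothetical transition table: from each note, the notes that may follow.
TRANSITIONS = {
    "C4": ["D4", "E4", "G4"],
    "D4": ["C4", "E4", "F4"],
    "E4": ["D4", "F4", "G4"],
    "F4": ["E4", "G4"],
    "G4": ["E4", "C4", "A4"],
    "A4": ["G4", "F4"],
}

def compose(start="C4", length=8, seed=None):
    """Generate a short melody by repeatedly sampling the next note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(compose(seed=1))  # e.g. ['C4', 'E4', 'G4', 'A4', 'F4', 'G4', 'C4', 'D4']
```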
The study involved volunteers who were divided into several groups: those with absolute pitch, those without absolute pitch but with musical training, and those with neither absolute pitch nor musical training. While people with absolute pitch identified randomly generated tones with 100% accuracy, the rest managed an accuracy of only 8%.
The researchers obtained functional magnetic resonance imaging scans of all the participants and found that the auditory cortex of those with absolute pitch was significantly larger; they observed no differences between the other two groups (those with and without musical training). They also found that some participants had not begun to learn music until adolescence, which contradicts the widespread idea of a critical period, according to which only those who begin their musical training before the age of seven can develop this ability.
And, conversely: is it possible to have absolute pitch without knowing anything about music? The problem is that, without a grounding in music theory, it is much harder to test this ability, but the authors point out that the capacity to identify frequencies could be tested in some other way, which would open a new line of research.