Psycholinguistics/Language and Music

Introduction
"Music is the universal language of mankind." ~Henry Wadsworth Longfellow. At first glance, there are some clear connections between language and music, and when we dig deeper from a psycholinguistic perspective, the two turn out to be even more closely related than we might have initially thought. From their basic components, uses and generativity to the brain systems that support them and the research being conducted on both, there is a long-standing tie between the psychology of language and that of music. This page serves as an overview of language and music, examining why the two are so often linked and where their differences become evident. It also discusses past and current research into the possible benefits of using music to support language and memory, looks at what can go wrong in neuropsychological processing, and ends with an opportunity to test your own knowledge. Whether we consider ourselves aspiring musicians or simply love to listen, music is a part of nearly everyone's life, and it has become one of the most fascinating topics in modern psycholinguistics.

Breaking It Down: Comparing the Basics of Language and Music
Similarities and differences between music and language can be seen even when each is broken down into its simplest components. From basic sounds and meanings to a complete sentence, story or song, music and language are closely related.

Building on the Basics: Generativity
The term generativity comes up often in discussions of language composition. Generativity (related to the concept of creativity) is the capacity to produce something new from existing elements. Using knowledge of the basic units and components that make up language and speech, or music, we can create a vast number and variety of combinations from a small set of building blocks.

Language
Language is made up of phonemes (its basic sounds) and morphemes (its smallest units of meaning), which combine into words; syntax and semantics then give those words order, grammatical structure and meaning. With these elements, the generativity of language is infinite: an endless number of different sentences can be created simply by combining words in new ways. Some of these sentences are common and make immediate sense, while others seem absurd but are still "sentences" as far as the language is concerned. This is where knowledge of grammar comes into play. Grammatical rules apply across situations, allowing us to judge sentences as correct or incorrect, and to interpret the meanings of sentences we have never heard before, a necessary skill for conversation. One of the best-known examples of generativity in language comes from the linguist Noam Chomsky, who put forward "colourless green ideas sleep furiously" as a grammatically correct sentence with no understandable meaning. It is a sentence we have almost certainly never heard before, and we recognize it as semantically anomalous, yet we can still identify it as a legitimate sentence because we have learned the rules of grammar and language. Using our knowledge of how language works, we can create an infinite number of sentences.
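The idea that a small set of rules and words can license many sentences, including grammatical but meaningless ones, can be sketched with a toy phrase-structure grammar. This is a minimal illustration, not a real model of English; the grammar and vocabulary are invented for the example:

```python
import itertools

# Toy grammar: Sentence -> Adjective Adjective Noun Verb Adverb.
# The vocabulary below is purely illustrative.
adjectives = ["colourless", "green", "quiet"]
nouns = ["ideas", "dreams"]
verbs = ["sleep", "sing"]
adverbs = ["furiously", "softly"]

def generate_sentences():
    """Yield every sentence this toy grammar licenses."""
    for adj1, adj2, noun, verb, adv in itertools.product(
        adjectives, adjectives, nouns, verbs, adverbs
    ):
        yield f"{adj1} {adj2} {noun} {verb} {adv}"

sentences = list(generate_sentences())
print(len(sentences))  # 3 * 3 * 2 * 2 * 2 = 72 grammatical sentences
print("colourless green ideas sleep furiously" in sentences)  # True
```

Even this tiny grammar produces 72 well-formed sentences, most of which (like Chomsky's example) are semantically odd yet grammatically acceptable; adding recursion to the rules would make the set infinite.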

Music
Music is similar to language in its hierarchical structure. Notes combine into chords or sequences, which form the melody or harmony lines of a song. Songs then carry affect, which adds emotion and meaning to the music being played. Using these notes and musical "rules", the generativity of music is likewise infinite: endless melodies can be created from different combinations of notes, and listeners can perceive them all as music. As in language, some combinations are heard as correct while others sound wrong, and this perception of "right" or "wrong" note combinations is often based on culture, musical upbringing and influence. Music and musicality have been shown to be products of our cultural and social interactions. Though the ability to appreciate and understand music is universal, what music IS and DOES differs between cultures, and the boundaries of "music" may overlap only minimally between two different groups. Even so, each culture creates a diverse collection of melodies, and everyone perceives them as musical or "un-musical" according to their own cultural understanding.

Overall, language seems more universal, whereas music is more culturally based. Both show evidence of hierarchical syntactic structure, whether in the phrasing of sentences or the shaping of melody lines, and both have vast generative power: we can create an infinite number of sentences in language, or melodies in music, and we will always encounter new sentences and sound sequences that we have never heard before.
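A back-of-envelope count shows how quickly the number of possible combinations explodes in both systems. The figures below (a 12-note chromatic scale, an eight-note phrase, a 20,000-word vocabulary, a ten-word sentence) are illustrative assumptions, not claims from the text:

```python
# Combinatorial counts for music and language (illustrative numbers only).
pitches = 12           # notes in the Western chromatic scale
melody_length = 8      # an eight-note phrase
melodies = pitches ** melody_length
print(melodies)        # 429981696 distinct eight-note pitch sequences

vocabulary = 20_000    # a modest adult vocabulary
sentence_length = 10   # a ten-word sentence
word_strings = vocabulary ** sentence_length  # upper bound, ignoring grammar
print(f"{word_strings:.2e}")                  # on the order of 10**43
```

These are raw upper bounds (grammar and musical convention rule most combinations out), but they make the point: even short sequences drawn from small alphabets yield more possibilities than anyone could exhaust, and allowing sequences of any length makes the set unbounded.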

Language
The ability to speak human languages and to process and understand complex sentences has long been considered the feature that sets humans apart intellectually from all other species. What evolutionary changes have made this possible in the human brain?

Several specific brain areas have been identified as crucial for language processing. Broca's area (first identified by Paul Broca in 1861), located in the left inferior frontal cortex, was the first "language centre" of the brain to be singled out. Broca observed that damage to this area impaired an individual's ability to produce spoken or written language, even though the person's understanding of language remained intact and no other physical deficit could account for the impairment.

Shortly after, a second language centre was identified. Carl Wernicke discovered an area in the posterior left temporal lobe, now called Wernicke's area. Damage here produced fluent but incoherent or nonsensical speech, implying a deficit in understanding language rather than in producing it.

Viewed most simply, the language system of the left hemisphere forms a neural loop, with Broca's area at the frontal end controlling language output and Wernicke's area at the posterior end processing language input. A large bundle of nerve fibres, the arcuate fasciculus, completes the "loop" by joining the two areas. This configuration is not exclusive to spoken communication: the same arrangement and functional distribution is seen in people who use sign language.

One other area of the brain that is crucial for language was discovered much more recently. Introduced by Norman Geschwind, the inferior parietal lobule (also known as Geschwind's territory) is connected to both Broca's and Wernicke's areas by nerve bundles, and is a key addition to our understanding of language in the brain. Information can therefore travel between the classic language areas not only via the arcuate fasciculus but also through the inferior parietal lobule. Its location is important: it lies at the junction of the auditory, visual and somatosensory cortices, and is connected to all three. The neurons in this region can therefore process auditory, visual and functional aspects of words, a necessity for understanding language. Though much simpler versions of the inferior parietal lobule exist in other species, it appears to have been one of the last evolutionary additions to the human brain, which may help explain why our language abilities surpass those of our closest relatives. Though the majority of language processing occurs in the left hemisphere for most people, the right hemisphere plays an important role as well. Without it, it is almost impossible to distinguish between the literal meaning of language (what words are being used) and its figurative meaning (how the words are being used). People with damage to the right hemisphere cannot distinguish between denotation and connotation; they miss parts of language such as emotional tone or sarcasm.

Music
There are many ways of engaging with music, and different modes of music processing draw on different areas of the brain.

If you simply hear music, you rely on the auditory cortex, where different cells respond to different frequencies of sound. The core auditory regions handle the analysis of pitch and volume, while the surrounding regions analyze timbre, rhythm and melody.

If you imagine, or think about, a song in your head, the areas used differ slightly. The auditory cortex is again involved, but only in a few isolated areas and to a lesser degree than during listening. Imagining a song also requires you first to recall the lyrics and melody from long-term memory and then to hold the song in working memory, engaging the inferior frontal gyrus (memory recall) and the dorsolateral frontal cortex (working memory).

Playing music is one of the most widespread activities in terms of the brain areas it recruits. The auditory cortex is needed to hear the music, with feedback processes signalling whether we have achieved the correct pitch and melody. The visual cortex is active as we read (or even simply imagine) a piece of sheet music. The parietal lobe handles complex tasks such as estimating the positioning of our fingers to play a certain note, the motor cortex refines the fine motor skills required, and the sensory cortex is activated with each touch of the instrument. The frontal lobe plans and coordinates our movements overall, and the cerebellum works with the motor cortex to produce the smooth, flowing movements that playing an instrument often demands. The premotor area is also involved in the timing and sequencing of movements, though its exact role in music processing has not yet been firmly established.

Have you ever had goosebumps while listening to a piece of music? This emotional reaction recruits still other areas of the brain. The chills you feel when a piece moves you arise from structures deep in the brain, such as the ventral tegmental area, which underlie the pleasure of "reward", like receiving food when we are very hungry. Hearing a song you really like also inhibits the amygdala, which is responsible for negative emotions such as fear, and the reaction varies with how we feel about the song. Clearly, producing and processing music calls on a wide range of skills and abilities, and therefore on many brain areas and structures; when they all work together, we can develop a full appreciation of every aspect of music.
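The frequency analysis performed by the auditory cortex starts from a simple physical regularity: in twelve-tone equal temperament, each semitone step multiplies a note's frequency by 2^(1/12). A short sketch using the standard equal-temperament formula (a textbook convention, not something taken from this page; MIDI note 69 is A4 at 440 Hz):

```python
def note_to_frequency(midi_note: int, a4_hz: float = 440.0) -> float:
    """Frequency of a note in twelve-tone equal temperament.

    MIDI note 69 is A4; each semitone multiplies frequency by 2**(1/12).
    """
    return a4_hz * 2 ** ((midi_note - 69) / 12)

print(round(note_to_frequency(69), 1))  # 440.0  (A4)
print(round(note_to_frequency(60), 1))  # 261.6  (middle C)
print(round(note_to_frequency(81), 1))  # 880.0  (A5, one octave above A4)
```

Note how an octave (12 semitones) exactly doubles the frequency; pitch perception tracks these ratios logarithmically, which is why equal steps on the keyboard sound like equal musical intervals.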

Similar Systems
Music and language are processed by very complex brain systems, and it turns out that the two recruit many of the same areas. One particularly interesting similarity appears in musicians. Language processing occurs largely in the left hemisphere in the majority of people, and it has been found that musicians also use these left-hemisphere areas for music processing, whereas non-musicians typically rely on the right hemisphere. This has led to the suggestion that musicians process music more analytically than non-musicians, since they engage more of the left hemisphere, in regions similar to those used for language. Many of the brain areas used in processing language and music are almost identical. With the help of positron emission tomography (PET), which images metabolic activity in the brain, it is now possible to visualize the areas that become active during language or music processing. Regions showing the greatest overlap include the motor cortex (the primary motor cortex and surrounding areas), Broca's area, the primary and secondary auditory cortices, the cerebellum, the basal ganglia and thalamus, and the temporal pole. Moreover, although language is generally thought to rely primarily on the left hemisphere and music on the right, several structures in both hemispheres were active for both. When music was played, the front of the brain and the right hemisphere were engaged most strongly, though most visible areas showed some activity; when language was processed, the front of the brain and the left hemisphere dominated, again with activation across many regions. And when language and music were processed at the same time, brain activity was concentrated in the frontal structures of both hemispheres.

Problems: What Happens When Things Go Wrong?
As in many areas of neuroscience, a key question is what happens when imperfections or abnormalities occur in the brain. Brain damage can result from injury (acquired damage that alters brain function) or from congenital abnormalities (present from birth). Humans are believed to be born musically inclined, and damage to the relevant brain areas disrupts the ability to process music properly. One common consequence of such damage, as it pertains to music processing, is amusia, sometimes called "musical deafness" because it impairs the ability to detect pitch properly (to recognize out-of-tune notes, for example) and to recognize familiar songs. On the production side, amusia can affect the ability to sing, whistle or hum, to write music (musical agraphia), or to play an instrument (musical apraxia). Melodies become difficult to discriminate, and songs that were well known before the brain damage occurred may become unfamiliar. Individuals with congenital amusia may also have difficulty discriminating and identifying changes in intonation (the rising tone at the end of a question, for example), which affects their language processing abilities as well.

Music and Memory: What Have We Learned?
The idea that music may help improve memory has been researched extensively over the years. With new information about the brain and new neuroimaging techniques, research can be extended further each year, targeting more specific brain areas and observing different groups of people. For example, we can now observe the role of music in the brain from infancy through early childhood. Research has shown that babies who actively engage in music education or musical participation can show increased brain development, and many of these musical abilities, though they differ across cultures, appear to be essentially innate and universal.

Musical training early in life may also help improve one's memory. MRI scans, which allow researchers to view specific areas of the brain, have shown that on average the left temporal regions are larger in musicians than in non-musicians. This area of the brain supports verbal memory tasks, so musicians should perform more successfully on tests of verbal memory. Chan, Ho and Cheung collected evidence showing that individuals who had been musically trained before age 12 did indeed perform better on tests of verbal memory than individuals who had not received this early training. Environmental factors certainly play a role as well, but the finding supports the idea that music can be highly beneficial to memory and brain development, especially when training occurs early in life.

Try it yourself!
As we have seen, music and language are very closely related. They use similar brain areas in processing, can be paired together to assist memory formation, and even share similar structure and generativity. Clearly, music is a valuable addition to the study of linguistics and psychology, and it enhances our understanding of how these topics have developed. Now, using everything we have learned about language and music, test how much you know about these topics below!

Interactive Learning Exercises
Apply your knowledge of psycholinguistics, language and music here