Jennica Lounsbury

I am a 4th year Psychology student at Dalhousie University.

January 14, 2011
It is extraordinary that there are areas of the brain known as language areas, and even more extraordinary that there are those who seek to map those connections at an even deeper, more microscopic level. The idea of mapping the cellular connections associated with something as seemingly abstract as language seems daunting, and yet the technology that exists today, especially TMS, introduces the possibility of such a thing. What makes language so difficult to study is that it is so much more than words; it is the meaning of those words, the application of that meaning, the physical act of producing them outside of the brain, and, most cryptic of all, producing them within the brain as thoughts. The fact that Wernicke's area sits at a crossroads of three of the lobes implies that this region is, above all else, a junction of the brain regions involved in a variety of sensations, perceptions and motor functions. Language, as represented in its physical localization, can be seen as an expression of many other processes taking place. To map such connections on a cellular or molecular level would be to scientifically understand one of the higher processes that makes us "human".

January 17, 2011
The relationship between lesions or disorders of particular brain areas and the functional deficits with which they are associated is very interesting. Such deficits are behavioural phenomena in their own right, while at the same time being an important source of evidence for neuroanatomical research and functional localization. Aphasia is often brought up in neuroscience textbooks and the literature in the context of localization: how it informs our understanding of the various aspects of language and their physical representation in the brain. In neuropsychology and psychology, it is both that and also a loss of language capability. There are cases such as K.H., the architect who, after undergoing surgery to remove a tumor from Broca's area, initially lost his ability to speak and to comprehend both the spoken and written word (though he later regained some of what had been lost), and one wonders how anyone could function with such an extreme language deficit. Aphasia must feel incredibly disorienting; to have had full use of the faculties of language and then to lose them must be a very frustrating experience. As language is so essential to the experience of human life, it must make the aphasic person feel disconnected and confused. I am not suggesting that one dwell upon the horrors of aphasia when studying it, but rather that the behavioural effects emphasize not only the localization of language function in the brain but also the importance of language in the human world. Aphasia is one of many examples showing that the study of language is so interesting because it is the study of mind, brain and behaviour, often all at once.

January 30, 2011
Learning about speech perception, for me, always brings to mind how amazing people who speak multiple languages are. Several years ago I met my boyfriend's dad's girlfriend, who grew up in Slovakia; as a result of her upbringing, she knew Slovak, English, Russian and French. I once heard her on the phone with her father, speaking Slovak, and when I asked her a question she effortlessly switched to English. I was amazed more by her ability to understand both languages than by her ability to speak them. It was incredible how easily she could understand one, and then the other, without even a second's pause. I suppose translators are the masters of such a skill, but it made me curious about the localization of language in people who speak so many languages. So at the time I read a paper (which I just reread, in trying to remember the name of the author) by Franco Fabbro called The Bilingual Brain: Cerebral Representation of Languages (2001), which describes how the greater one's understanding of a language, the more circuits there are devoted to the processing of that language. This, at the time, was the first I had ever learned of plasticity and language, and as I aim to learn more languages (at least at a minimal level), it was a reassuring counterpoint to the developmental critical periods which I had thought were the be-all and end-all of language acquisition and processing. Speech perception is an incredibly interesting aspect of the understanding of language, and it involves so much more than just the auditory processing system.

February 3, 2011
I watched a movie called The King's Speech this week, a true story whose central character is the man who would eventually become King George VI of England during the WWII years. He suffered from a fairly severe stutter, and I found that his speech therapy made me think of the units-of-speech topic. In order to help him attain a natural flow of speech, his therapist encouraged him to start by focusing on the smallest units of speech and to build up from there. Well, he didn't use those words exactly, but that is essentially what he was doing. As such, I wonder whether part of the problem for people with this type of speech impediment is a misuse or non-use of phonemes and morphemes within words. I was also very curious about the processing error in the brain associated with this disorder, but I had a difficult time finding information on the subject. It was also the case that he had previously been left-handed but had "had it corrected," to which the speech therapist responds, "that is common in cases like these." Perhaps the stutter occurred because of the forced reorganization of lateralized speech function resulting from his switch in handedness during a critical period of development. I suppose that raises the question of which comes first, handedness or lateralization; I would always have said lateralization, but I suppose it could be the other way around.

February 11, 2011
Learning about sentence processing brought to mind the neural mechanisms behind such complex thought, and the plasticity that would be required when implementing these skills in bilingualism. I was working on my chapter on lateralization all week, and so I was thinking about it with regard to sentence processing, and whether it is an aspect of the localized overlap that researchers believe occurs in bilinguals. I am especially curious about the neural processing of cross-language homonyms, words that have been appropriated by other languages and have acquired more than one meaning as a result: would the word souvenir in English and the word souvenir in French activate the same neural circuitry? Their meanings are similar, but with subtle yet significant differences. The brain's ability to distinguish meaning between such words in bilinguals would seem to me to be a very physical representation of this notion of overlap. I would assume the same is true when reading as when listening, although I also wonder whether accent lends some auditory cue about the language being used, and as a result a more specific narrowing-down to the correct meaning of the word. The fact that bilinguals are often able to switch with ease between languages seems to me to be a product of overlap between the lexicons of each language, but sentence processing is more complex than that, and it appears that, although there seems to be much room for error (as in the case of these homonym-type situations), such errors are not the norm. I suspect that there are not two separate lexicons but one large one which uses contextual cues to determine which meaning is correct, and that "overlap" might not be the best way of describing it. Rather than seeing it as one language placed atop the other in an overlapping fashion, I think it is more reasonable to see bilingualism as simply a larger network of connections in which certain circuits are activated for the processing of each language; a toy sketch of that idea follows below.
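To make the one-big-lexicon idea concrete, here is a toy sketch in Python. It is entirely my own illustration, not a model from the course or the literature: a single store of form-meaning entries in which a contextual language cue selects among the candidate meanings of a shared form like souvenir. The words, glosses, and function names are all made up for the example.

```python
# Toy model of a single shared bilingual lexicon: each written/spoken form
# maps to several candidate entries, and a contextual cue (the language
# currently in use) selects among them. Purely illustrative.

shared_lexicon = {
    "souvenir": [
        {"language": "English", "meaning": "a keepsake; an object kept as a reminder"},
        {"language": "French", "meaning": "a memory; also 'se souvenir', to remember"},
    ],
    "chat": [
        {"language": "English", "meaning": "an informal conversation"},
        {"language": "French", "meaning": "a cat"},
    ],
}

def interpret(word, language_context):
    """Return the meaning of `word` given the language cue from context."""
    for entry in shared_lexicon.get(word, []):
        if entry["language"] == language_context:
            return entry["meaning"]
    return None  # form unknown, or not used in this language

print(interpret("souvenir", "English"))  # a keepsake; an object kept as a reminder
print(interpret("souvenir", "French"))   # a memory; also 'se souvenir', to remember
```

The point of the sketch is only that one structure can serve both languages: nothing is duplicated or "overlapped", and ambiguity is resolved by context at lookup time rather than by keeping two separate dictionaries.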

March 6, 2011
One of the aspects of auditory processing that has always eluded my understanding is the human ability, and even stranger, the human affinity, for music. Music is such a complex object of understanding, and subjective meaning seems so inherent in it, that one has to imagine both auditory processing and emotional understanding are engaged when we listen to music. After writing my textbook chapter on hemispheric lateralization of language, I have to wonder whether the corpus callosum, the bundle of fibers joining the hemispheres, plays an important role in our listening to music. The fact that music combines what are thought to be "left" and "right" hemisphere functions seems to imply that the two are working together in this ability. Given that, I have to wonder how split-brain patients understand music. Music perception is a sort of understanding of frequency patterns, and one would assume that this would not be disrupted by commissurotomy; nor would the associated emotions. But it might be that the two would become detached in some way, interpreted as separate aspects where the "normal" listener does not pull them apart unless forced to. Such a listener might well have a difficult time reporting certain aspects of listening to music, whether features of the sound or the emotions within it. Perhaps the ability to understand music and the affinity for music would thus be separated in split-brain patients. It would be very interesting to do both emotional and auditory testing on such individuals while they listen to music.

March 12, 2011
I was listening to a radio program called Radiolab last week, and they briefly discussed the experiment described by V. S. Ramachandran showing how the neural activation surrounding a finger wiggle arises slightly before the conscious decision to wiggle the finger, and well before the finger wiggle itself. This was put forth in the context of free will, proposing that it does not exist, a very interesting notion. People are told, "do not wiggle the finger," and yet they are unable to comply. I was reminded of this idea again during the lecture on gestures. I understand that gesturing is a more complex motor movement than a finger wiggle, because it is associated with language output, but theoretically the same could be true of it as of the finger wiggle. Could it be possible, then, that the neural activation associated with a gesture occurs before the intention to gesture is consciously formed? Based on the philosophizing done on the subject during the radio program, perhaps that would imply that there is no free will in gesturing; that it is an illusion, just like the one regarding the finger wiggle. The implications of such an idea for free will in language production are strange and frightening, and the thought that one might gesture in order to convey an idea that is in some way unintentional leaves me with that old science-fiction worry: do we control our brain, or does our brain control us? Perhaps this is also the source of the gesture-speech mismatch sometimes shown by children.

March 17, 2011
The idea of protowords really piqued my interest. I have noticed that some people carry certain protowords with them into adulthood: childish words they used to denote something or other that never quite became the correct word. This made me think about a girl with Down syndrome whom I peer-tutored in high school, who had all kinds of protowords and had never been instructed on the correct terms. Perhaps, because of the developmental nature of the disorder, she was in some way verbally frozen in the state before the proper use of words and word meanings develops. At the time I just thought she was being creative, and I guess I still do. She had her own private language that only those close to her could understand. Maybe it was an unconscious defense mechanism for dealing with her isolated position in public school. Or maybe it was just protowords, her language skills having never fully developed.

I am also curious as to whether there are protowords that are universal: certain protowords consistent across cultures, or even perhaps within families through genetic characteristics of the articulators, or something of that nature. I'm not sure whether that's even possible, but it seems like an interesting idea to explore. Protowords would have to be at least somewhat consistent within cultures, because there are only so many sounds to be made, and children would be exposed to the same sort of stimuli, if not exactly the same stimuli. If that were the case, we could publish a protoword dictionary and finally understand what all the babies are talking about... I'm mostly kidding. But it is possible that there could be genetic consistencies; I read an article by Willem J. M. Levelt entitled The Genetic Perspective in Psycholinguistics or Where Do Spoken Words Come From?, which said that protowords usually have a uniform place of articulation. Given cultural linguistic consistencies in sound and place of articulation, and the genetic contributions to the proportions of the articulators themselves, perhaps protowords are passed down through the generations.

March 25, 2011
The aphasia lecture was amazing. The man with receptive aphasia, whose feedback loop for monitoring his own speech had broken down, made me think a lot about what kinds of therapies can be offered to people suffering from this disorder. When I was looking into stuttering earlier in the semester, I came across a paper by Stuart et al. from 2003 called Self-Contained In-the-Ear Device to Deliver Altered Auditory Feedback: Applications for Stuttering. I was thinking that a treatment using a delayed playback of their own voice might help people with receptive aphasia to learn, at least minimally, to control that feedback loop; perhaps with training, some form of plasticity might occur in the areas surrounding Wernicke's area, such as in the connections to both the auditory cortex and Broca's area, allowing for compensation following the lesion. I'm not sure whether that's possible, but it certainly seems that one of the strangest aspects of receptive aphasia is the notion that the speaker, the person with the aphasia, is nearly unaware that anything is amiss. I think that addressing this unawareness might be a good place to start in treatment, after which the issue of the malfunctioning feedback loop might, at the very least, be aided by vocal delay. Perhaps this would allow them to hear their own speech as if someone else were speaking, and they could train themselves, over time, to use the correct words.
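For concreteness, the delayed-playback idea can be mimicked in software with nothing more than a ring buffer sitting between a microphone and a pair of headphones. The sketch below is purely my own illustration, not the in-the-ear device from Stuart et al. or anything shown in the lecture; it assumes Python with the numpy and sounddevice packages, and the 100 ms delay value is an arbitrary choice.

```python
# A minimal delayed-auditory-feedback (DAF) loop: microphone input is
# written into a ring buffer and played back DELAY_MS later.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100          # samples per second
DELAY_MS = 100               # illustrative delay value
delay_samples = int(SAMPLE_RATE * DELAY_MS / 1000)

ring = np.zeros((delay_samples, 1), dtype="float32")  # last DELAY_MS of audio
pos = 0                      # current read/write position in the ring buffer

def callback(indata, outdata, frames, time, status):
    """For each incoming sample, play the sample recorded DELAY_MS ago."""
    global pos
    for i in range(frames):
        j = (pos + i) % delay_samples
        outdata[i] = ring[j]   # sample from DELAY_MS in the past
        ring[j] = indata[i]    # overwrite with the current sample
    pos = (pos + frames) % delay_samples

# Full-duplex stream: one input channel (mic), one output channel (speakers).
with sd.Stream(samplerate=SAMPLE_RATE, channels=1, dtype="float32",
               callback=callback):
    input("Your voice now plays back 100 ms late; press Enter to stop.")
```

Headphones are essential when trying something like this, since a speaker-to-microphone loop would feed the delayed output straight back into the input.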

April 3, 2011
While getting ready for my debate on the topic of cochlear implants, I started thinking about the evolution of technology in science and what it means. I don't mean in the ethical sense, but rather in terms of the vast array of things that exist to aid in all sorts of areas of language. Most relevant to me personally is voice recognition technology; I work for a man who is quadriplegic, and we have been discussing the idea of him getting a computer. Before looking into it, I imagined that the technology required for voice recognition must be quite simple, but after reading Rudnicky's article Survey of Current Speech Technology, I am now aware of how truly complex it is. What is even more interesting to me, however, is how learning about the technology associated with speech makes you consider all the tiny components that make up speech and gesture, and how they can be manipulated to an end. Take that Google April Fool's joke: part of why it works so well as an April Fool's joke is that it seems so likely that such a thing will one day exist! The implications of such technology for what it means to be human are huge. These technological advancements, things like using verbal shortcuts to turn on a computer, will ultimately become more and more a part of our behaviours. Even the text-messaging phenomenon eliminates the use of speech in many ways. I might sound like I've watched too much of the Terminator, but it truly seems as though we are becoming machines.

April 7, 2011
Well, this is my final blog post of the semester. I'm not going to write about anything that went on in the last week of debates, but rather about something I was thinking about recently. I took a class last year called Neurobiology of Learning, in which we learned about the plasticity that occurs following damage to parts of the brain, and how the areas designated to localized functions can expand and diminish in order to compensate for lost functions. I began thinking about children who are born hearing impaired, and I wondered whether they, as a result, have more precise and developed motor function than those who developed with hearing. Hearing children not only do not learn the motor-oriented ASL, but in general would not need to be so reliant on gesture. I did some research and found a very recent paper on the subject: Jana M. Iverson and Barbara A. Braddock's 2011 study, Gesture and Motor Skill in Relation to Language in Children With Language Impairment, which explains that gesture is well developed in these children, but that they actually perform more poorly on motor tasks than their hearing peers. The suggestion is that gesture assumes a compensatory role in order to accompany speech. Perhaps it is because of the focus on gesture and language in the development of hearing-impaired children that their other motor functions are not honed, whereas in hearing children, because language is not such a focus, motor skills receive much more attention. It seems to me that this is another argument in favor of the cochlear implant: hearing-impaired children should be given every opportunity that other children have, including the ease of learning speech and the resulting ease of developing fine and gross motor skills.

All of that said, these blogs have been a wonderful way to explore thoughts and ideas about language and speech in a new way that I had not previously considered. Perhaps, as a result of all these reflections on the subject of psycholinguistics, my own language and speech abilities have improved.