User:Em736458

My name is Emma Horner, and I am a 3rd-year Neuroscience major at Dalhousie University.


Blog 1: Language and Thought
The lecture by Aaron Newman on Wednesday, December 12th outlined the idea of language and thought and attempted to describe the relationship between them. What I found most interesting were the diagrams presented under “the influence of language and thought”. The diagram came from a study by Carmichael, Hogan, and Walter (1932), in which participants were asked to view a diagram of 2 circles connected by a line. They were then given 1 of 2 verbal captions, either “eyeglasses” or “dumb-bells”. What was really interesting was that the participants who were given the verbal captions and then asked to reproduce the original diagram altered it to fit the caption. I found this fascinating because the human brain unconsciously holds a schema for the verbal captions and is able to distort the memory of the original diagram. This study really got me excited about the dynamic between language and thought, and I look forward to unraveling this idea more. When doing research on this topic I found interesting suggestions that tests like this should be performed on both adults and children, because there is little evidence about the cognitive construction that is taking place; to me this was also really interesting.

Blog 2: Lateralization in the Brain
Brain lateralization was presented in Aaron Newman’s Psycholinguistics class on Monday, January 17, 2011. I think it is so interesting that some of the brain's functions are lateralized. Although this topic is not new to me, every time it is brought up in one of my classes I find it exciting. The ability of the right side of the brain to decipher emotion and prosody, while the left is more focused on language and math, is fascinating. It's so interesting that although the corpus callosum enables communication between the hemispheres, it also somehow allows inhibition between the hemispheres to take place. For example, if you were watching a ballet your right hemisphere would be firing like crazy trying to feel the emotion of the performance, while (at least for me) the left hemisphere, which would be focused on the physical calculations of the dance, would be almost silent. Conversely, if you were writing the MCAT or the LSAT you would hope that your left hemisphere would be dominantly firing and the right would be quieter. Lateralization also brings up the fascinating studies on “split brain” patients, who have no problems with their vision, but since their corpus callosum is severed their hemispheres can act as if they are blind. For example, a word shown in the right visual field, which is represented in the left hemisphere, can be spoken aloud by a split-brain patient. On the other hand, if you were to show a word in the left visual field (right hemisphere), the patient would not be able to say the word aloud but would be able to draw it. The dynamics between the hemispheres of the human brain are riveting to me, and this class was a nice summary of these topics.


Blog 3: Speech Perception
For my blog this week I thought I would do some extra research on a topic presented in class on January 28th, 2011. Oralism is the word used for the techniques and approaches to teaching a hearing-impaired individual to better communicate with the world around them. I found this topic absolutely fascinating. Aaron Newman described the idea of allowing a deaf child to hold the throat of the speech therapist and feel the vibrations as the therapist speaks. Doing this gives the hearing-impaired individual a mental representation of the word as well as some insight into how it may be pronounced. From doing research I also found that oralism includes lip reading and exercises that involve putting your hand in front of your mouth and feeling the air that is exhaled during speech. The ability to form words from feeling their vibration is so interesting to me; I can't imagine the commitment and energy it would take to mentally convert one into the other. I have a new respect for the hearing impaired and the struggle they may have to communicate. I also found while researching that oralism isn't always the best approach and can have large limitations, but for children with a small residual ability to hear, or those who lose their auditory abilities later in life, oralism can be quite beneficial.

Blog 4: American Sign Language
After another interesting class by Aaron Newman I am left extremely fascinated by American Sign Language. It appears I have developed a special interest in language in hearing-impaired individuals. Dr. Newman presented the topic of ASL inflection in class, and since then I have researched this amazing language a little more. American Sign Language is often cited as the third most used language in the United States! Being previously naïve and uneducated about the language, I thought that ASL consisted only of hand signs; I had no idea of the movements and orientations that are involved. Dr. Newman gave an example of how people who use ASL talk about people or things not in close enough proximity to point to. I think it's so interesting that they place the person or thing in an area of space and refer back to that area as they talk about it. I can't imagine the mental capacity and discrimination needed to decipher the locations of the head, hands, and body. I would like to know more about the spatial abilities of those who speak fluent ASL, to see if their capacity is different from those who speak a verbal language. This led me to the idea of left and right hemisphere usage during ASL. I did find that ASL production and comprehension develop in the same way as in other languages (in Broca's area and Wernicke's area, respectively), but I am still wondering about the role of the right hemisphere, as it is dominant for visuospatial abilities, which I think would be incredibly important to ASL users. Another intriguing idea: if the right hemisphere is more dominantly involved in deciphering facial expressions, then people reading ASL would most likely have a very active right hemisphere to determine the facial adverbials. So would that mean that the brains of ASL speakers are overall more bilateral for language and speech?
I only attempted to make a dent in the topic of lateralization in ASL speakers, and the most interesting thing I found was a difference in the white-matter connections in the brain between ASL speakers and English speakers, and nothing else significant. I will continue to research it. Also, when researching this topic I learned how people who use ASL write language; the term used for this is glossing, and it looks like English words in all capital letters. I thought this was so interesting because those who gloss would be defined as bilingual (with the ability to speak ASL and also to communicate with English glossing). A fascinating example I found on the internet showed the glossed translation of “what you see is what you get”, WYSIWYG. Overall I am utterly captivated by American Sign Language and I am inspired to learn it; I have always wanted to speak another language and I think this would be the most fun and interesting! Thank you for another interesting class! Site used for the example of glossing: http://faculty.valenciacc.edu/arasmussen/asl_1/asl_1_readings/ASL_Grammar/glossing_quick_dirty_2.htm

Blog 5: Watson
It has taken IBM 4 years and a highly educated team to develop a computer that has the ability to answer questions in a natural, “human” way. In order to function, the computer program, named Watson, needs to take into account all aspects of language. IBM has made sure that the program responds in a speedy and usually accurate way. The precise ability of this computing system to mimic the comprehension of human language is extremely fascinating. Interestingly, Watson will be participating in Jeopardy! this Monday (February 14th) against the top human champions. Jeopardy! questions are not black-and-white questions; rather they use puns, irony, riddles, and many other non-traditionally styled questions. How does it do this? Watson rapidly searches documents that are stored in its memory; it is not connected to the internet. It uses speedy evaluations of the words in the question and associates them to form a response. Watson is also equipped with systems that are able to judge the confidence in the formed answer and decide whether or not to respond by buzzing in. Watson's skill basically relies on the semantics of the words presented. For years computers have been able to outsmart humans on computer terms, but the difference now is that Watson is playing on human turf. That means that Watson will compete using natural human language and wordplay. To me this is absolutely riveting. I cannot even fathom how the world of technology will change with this addition. The opportunities are endless with a program that can understand humans. In years to come, could Watson, or similar programs, change the field of medical diagnostics, providing an online system that understands your needs and wants? Or even be used in police work, to respond to call-ins and communicate and mediate with people in extreme stress? The Watson program does make me question how it would respond to philosophical questions, and whether it will ever have this capability. What is the meaning of life?
Would Watson be able to structure a response from its data files to accurately respond? I wonder in how many years there will be a computer with the ability to communicate thoughts and ideas in a spontaneous way like humans do. I know that these ideas are far off, but Watson really got me thinking. We are again on the edge of a major discovery that will change the way we as humans do things, and give us more insight into the dynamic processes that form our language capabilities.
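The confidence-gated buzzing described above can be sketched in a few lines. This is a hypothetical toy, not IBM's actual DeepQA pipeline: the candidate answers, their scores, and the buzz threshold are all made-up illustrative values standing in for the output of Watson's evidence-scoring systems.

```python
# Toy sketch of confidence-gated answering (illustrative only, not IBM's code).

def choose_answer(candidates, buzz_threshold=0.5):
    """Pick the highest-confidence candidate; buzz only if confident enough.

    candidates: dict mapping candidate answer -> confidence in [0, 1].
    Returns (answer, confidence) when buzzing, or None to stay silent.
    """
    if not candidates:
        return None
    best = max(candidates, key=candidates.get)  # top-scoring candidate
    if candidates[best] >= buzz_threshold:
        return best, candidates[best]
    return None  # not confident enough to risk a wrong answer

# Example: suppose evidence scoring produced three weighted candidates.
scores = {"Toronto": 0.14, "Chicago": 0.67, "New York": 0.19}
print(choose_answer(scores))  # -> ('Chicago', 0.67)
```

The key design point mirrored here is that answering is a two-part decision: forming a best guess, and separately judging whether the guess is good enough to act on.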

Blog 6: Aphasia
An interesting topic presented briefly in the past week was aphasia. Although aphasia is a pretty common topic in neuropsychology and cognitive psychology, treatments for aphasia are not. I thought this week I would do some research to find out what is going on in the field of aphasia treatments.

I found a really interesting article on treatment for non-fluent aphasia. Non-fluent aphasia is when the production and expression of speech are impaired; strokes or brain damage to Broca’s area usually result in this. Interestingly, it has been found that after damage to Broca’s area (usually located on the left) there is over-activation in the corresponding location in the right hemisphere. It has been suggested that this activation in the right hemisphere can interrupt or interfere with the individual’s ability to recover from the aphasia. In 2004, a study was completed using transcranial magnetic stimulation (TMS) on the right hemisphere as a treatment for non-fluent aphasia (Martin et al.). They used slow-frequency stimulation over 10 treatments and found a significant increase in the patients’ ability to name pictures viewed. I think this is really interesting because it provides some insight into brain plasticity. I think it's amazing that the human brain can recruit the right Broca’s area after damage to the left, and even more interesting that the right impairs the damaged left's ability to recover. The TMS suppresses the interfering over-activation of the right hemisphere and in turn reactivates the left, promoting neural plasticity and further compensation. This led me to question whether this same effect would happen in left-handed patients with right-hemisphere-dominant language capabilities. Can the lateralization be reversed? And if so, can this sort of treatment be extended to apraxia patients? Or even to other forms of brain injury? Would TMS stimulation on the correct brain area help treat any lateralized brain damage? The brain's ability to wire and re-wire continues to fascinate me. I think that the TMS experiments are only scratching the surface of treatment for aphasia and, at an even larger level, the plasticity of the human brain. Em736458 01:50, 28 February 2011 (UTC)

Martin, P.I., Naeser, M.A., Theoret, H., Tormos, J.M., Nicholas, M., Kurland, J., Fregni, F., Seekins, H., Doron, K., and Pascual-Leone, A. (2004). Transcranial magnetic stimulation as a complementary treatment for aphasia. Seminars in Speech and Language, 25(2), 181-191.

Blog: Aphasia and Society
Every year 10,000 new cases of aphasia are diagnosed. The typical aphasic is an individual in their mid-forties or early fifties who has survived some kind of brain damage, whether from a stroke or a tumor. Before this presentation I had never really thought about aphasia recovery centers or programs. I also had no idea that so many individuals were diagnosed each year. Why is that? Why is it that most people know very little about aphasic disorders? The presenter made an interesting comment on how many people know about other debilitating disorders but not aphasia; for example, why is there no poster child for aphasia in the media like there is for, say, Parkinson’s or Alzheimer’s? It’s honestly saddening that aphasic individuals are not getting the full support that they could be, because the general public doesn’t understand their disorder. In a society where language and communication are so heavily weighted, you would think a disorder that interrupts the ability to communicate would be getting more funding. I think public education and understanding of aphasia need to increase. I can’t imagine how life would be without the ability to clearly say what you mean, essentially to be trapped in your own mind. I have studied aphasic disorders for a couple of years now, but I had never seen firsthand how individuals actually communicate, and how hard it can be to understand and comprehend what they are thinking and feeling. Our society takes communication by means of spoken language for granted, and even I did before this presentation. Really fantastic presentation on aphasia; I wish more people knew.

Blog: Oppositional Defiant Disorder
There were some interesting debates this past week in class, including a very well communicated debate on Ebonics usage and the controversy behind it. What I found most interesting was a comment about the dropout rates of students in Oakland, California. In an attempt to rebut another student's point that students in that community may be more likely to drop out if Ebonics is forced out (they may feel robbed of their culture), a certain student made a rash statement that, at least for me, definitely forced her team to lose the debate. She commented that the reason the students in that community may drop out is because they have oppositional defiant disorder. The comment came completely from left field and came across as seriously racist. I would have hoped that by 3rd year in a psychology class there would be a certain level of education and respect for topics of such a nature. ODD affects on average about 6% of the United States population, Oakland's population is about 27.3% African American according to the 2010 census, and the prevalence of ODD is equal among all races. So as you can see, the comment was not thought through and came off as offensive. It is interesting that such an assumption can roll off the tongue of an obviously educated student, and I am aware that in no way did the student mean to say something so offensive, but it got me thinking about stereotypes and assumptions. I assume that if this same group of students had been talking about a strictly Caucasian community, the comment would not even have been thought of. The whole debate encouraged me to be more aware of what I am saying and what I am communicating to those who are listening, because pretty quickly you can turn a statement into something offensive.

Chapter Assignment: The Neural Components of Speech Production
Introduction

The ability to verbally communicate with the world is a precious and precise ability. Humans are the only species on earth that produces such a dynamic form of language to communicate. One important feature of verbal communication in humans is speech production. Speech production is the conversion of individual thoughts into structured sentences for the purpose of communication. The ability to produce speech comes from the integration of physiological, neurological, and cognitive efforts. Humans use these efforts to translate their emotions and thoughts into strings of meaningfully ordered words that are relevant to the topic at hand. It is interesting to observe the dynamics of the human brain when producing speech and to note that there is not necessarily one area that is activated. When a human utters a word, the lexical, emotional, motoric, semantic, phonological, and syntactic neural areas are all activated. Speech production is a fascinating human ability and an interesting area of research.

The Physiological Production of Sound

The machinery involved in the physiological production of sound can be divided into two active parts: the source and the modulators. The exhaled air pushed out of the lungs by the diaphragm is the source of sound in humans. As the air leaves the lungs and moves up the trachea it enters the larynx. The larynx houses the vocal folds, more commonly called the vocal cords, which are the most important organs in speech production. As the vocal folds vibrate, the air channeled around them is converted into sound, and those oscillations determine the pitch of the sound being produced. A human can produce frequencies of anywhere between roughly 200 and 7000 Hz. That sound then travels through the pharynx and both the oral and nasal cavities, where it is expelled. In order to articulate the sound, the lips, tongue, and soft palate quickly modify the shape of the vocal tract, allowing speech to be modulated and made. It is interesting to note that the main difference between the chimpanzee and human vocal tracts is the location of the larynx. Human evolution has lowered the larynx, which allows us to produce a greater variety of formants.

Phonetics is the study of speech sounds, including their production and perception. With regard to the tongue, there are a traditional and a modern view of how to conceptualize its articulation. In the traditional view there is discrimination between high, low, mid, front, and back vowels. An example of a low vowel would be the “a” in daughter, where the tongue takes a low, flat position in the mouth. This view, however, does not take into account the dynamic movements and changes that the tongue makes in articulation, so a modern view was suggested. The modern view takes into account the changing position of the tongue in terms of its height versus how forward or backward it is. The vertical and horizontal abilities of our tongue within our vocal tract are what provide humans with the precision needed to produce speech.
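The source-and-modulator division above is essentially the classic source-filter idea, and it can be sketched numerically. The snippet below is a deliberately minimal toy, not a speech synthesizer: an impulse train stands in for the vibrating vocal folds, and a single two-pole resonator stands in for one formant of the vocal tract. All parameter values (120 Hz pitch, a 700 Hz resonance) are illustrative assumptions.

```python
import math

# Minimal source-filter sketch: a glottal pulse train (the "source")
# shaped by one two-pole resonator standing in for the vocal tract
# (the "modulator"). Parameter values are illustrative only.

def pulse_train(f0, sr, n):
    """Impulse train at fundamental frequency f0 Hz (the vocal-fold source)."""
    period = int(sr / f0)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def resonator(signal, freq, bandwidth, sr):
    """Two-pole resonance at `freq` Hz, mimicking one formant of the tract."""
    r = math.exp(-math.pi * bandwidth / sr)       # pole radius from bandwidth
    theta = 2 * math.pi * freq / sr               # pole angle from frequency
    a1, a2 = 2 * r * math.cos(theta), -r * r      # difference-equation coeffs
    out, y1, y2 = [], 0.0, 0.0
    for x in signal:
        y = x + a1 * y1 + a2 * y2                 # y[n] = x[n] + a1*y[n-1] + a2*y[n-2]
        out.append(y)
        y1, y2 = y, y1
    return out

sr = 8000
source = pulse_train(120, sr, 400)       # ~120 Hz voiced source
vowel = resonator(source, 700, 100, sr)  # shaped by one formant near 700 Hz
```

Changing the resonator frequency while keeping the same source is the toy analogue of moving the tongue: the pitch (set by the source) stays fixed while the vowel quality (set by the modulator) changes.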

Anatomical Areas Associated with Speech

The localization of speech production in the human brain varies greatly. In order to visualize and better understand the neural components of speech production, studies mainly use lesions or neuroimaging. This chapter will outline some important lesion studies as well as give some insight from modern imaging techniques like fMRI and PET. The areas most frequently identified as playing major roles in speech production are Broca’s area and Wernicke’s area. Broca’s area lies in the left inferior frontal gyrus, just above the lateral fissure. Its role is believed to be in the production of speech: Broca’s area stores the motor programs needed to produce words. Wernicke’s area is located in the left superior temporal gyrus, closer to the end of the lateral fissure. Wernicke’s area has been suggested to be important in the comprehension of words, as it holds the meanings, or semantics, of words. Damage to these areas results in Broca’s aphasia and Wernicke’s aphasia, which will be discussed in the aphasias section. Although these dominant areas are usually located in the left hemisphere, the right hemisphere also has important roles in higher-order speech tasks. The right hemisphere has been shown to be involved in speech perception, grammar, and sentence-level semantics (Shuster, 2009). Also, interesting studies have shown that patients with right-hemisphere strokes or brain damage have a disrupted ability, or an inability, to maintain or initiate normal speech prosody. Just as there are lateralized functions in the brain, there are also important bilaterally activated regions. These regions include the motor cortex, thalamus, dorsolateral caudate nucleus, and the cerebellum.

The Wernicke-Geschwind model is a three-section model that evolved from lesion studies. It outlines the proposed mechanisms that control comprehension, speech, and reading in the brain. The Wernicke-Geschwind model initiates the production of speech with cognition, which is the mental representation, or plan, of what the individual wants to express. It then moves to Wernicke’s area, which outlines the meanings of the words needed; that information is then sent along the arcuate fasciculus to Broca’s area. Broca’s area is where the motor plans of speech are assembled. Broca’s area then sends these instructions to the adjacent areas of the motor cortex, which communicate with the brainstem to activate the facial motor neurons and, further, the more than 100 facial muscles needed to produce speech (Geschwind, 1972). Studies were done to support this suggested role of Broca’s area. It was found that stimulating Broca’s area in awake patients disrupted their ability to make the voluntary facial movements needed in speech production. Interestingly, stimulation of this area also disrupted the patients’ ability to discriminate phonemes and affected their ability to produce gestures associated with speech (Ojemann, 2003). This inability to produce gestures arises because the areas used for gestural language and vocal language are mapped in adjacent regions, and it has even been proposed that they depend on the same brain structures. Using fMRI, it was found that native signers (those who learned sign language early in life) have the same left-lateralized areas activated during signing that a person communicating audibly would have (Newman, 2002). Other major brain areas have only recently been suggested to aid speech production. Having so many distinct brain areas activated in speech production has provided evidence that, rather than focusing on localization, speech production is more about network connections.
Studies have shown that the articulo-phonological brain network is much more dynamic than once proposed. Here is a small summary of the important secondary brain regions involved in speech production, as studied using fMRI:

Motor & Pre-Motor Areas: quite obviously the motor areas are involved in speech production, as they are the centers of control for all the muscles and cranial nerves involved. Results from an fMRI study on speaking show that these areas are bilaterally activated.

Thalamus & Basal Ganglia: there are distinct parallel circuits between the basal ganglia, thalamus, and cerebellum. Initiation and control of voluntary movement have been seen to activate the basal ganglia. It has been suggested that the thalamus is involved in the preparation of the movements of speech.

Cerebellum: studies have shown that the cerebellum is one of the most important regions for carrying out speech production. The cerebellum also shows activation with vertical movements of the tongue and lip modulations.

Insula: it has been proposed that the posterior insula contributes to the conscious awareness of one's own speaking, while the left anterior insula, comparatively, has been shown to be involved in the formulation of an articulatory plan.

Newer studies have shown the involvement of many other parts of the brain in speech production. A meta-analysis found that basic speech production activates the posterior, middle, and superior temporal gyri, whereas more complex speaking tasks activate the left inferior frontal gyrus, cerebellum, and left caudate nucleus (Soros et al., 2006). This means that as the cognitive effort devoted to speech production varies, the neural activation map changes. To conclude, many areas of the brain are activated in speech production, and more research needs to be done to specify the role of each of these areas. The next part of this chapter takes the neural areas mentioned and structures them into a map cross-listed with a classic cognitive model of speech production.

Interesting Findings from PET Imaging Studies

An analysis of PET scans revealed that when speaking there is bilateral activation of the facial neurons, both motor and sensory. Interestingly, when speaking verbs the areas of activation are the frontal lobe, posterior temporal cortex, anterior cingulate cortex, and the cerebellum (Petersen, 1988). This suggests that when speaking an action word aloud the brain is not only considering it verbally but also activating the motor regions involved in performing that action.

The Weaver Model Linked with Neural Regions in the Brain

The Weaver model was designed to show, at a conceptual level, the processes and the outputs from each cognitive level within the brain. This section will attempt to link the classic Weaver model with newer research to propose the locations and regions where these processes and outputs take place neurally. This linkage was summarized nicely in an article by Gregory Hickok in 2010.

1. The first process in the Weaver model is conceptualization. At a neural level it is hard to definitively outline the area that this process activates; studies show that conceptualization is widely distributed in the human brain. The output from this process is the lexical concept.
2. The second process is lexical selection. The output of this level is the lemma, the abstract mental representation of a word. The lemma is found by using abstract conceptualizations to find the word you aspire to use, taking into account only the meaning you are attempting to convey and not the sound. This lexical interface has been shown to have a weak left-hemisphere bias, with activation in the posterior middle temporal gyrus and the posterior inferior temporal sulcus.
3. Morphological encoding is the third process in the Weaver model. Its output is the morphemes needed. The specific region of the brain responsible for morphemes has not been well defined.
4. Phonological encoding, more specifically syllabification, is the fourth process involved, and its output is the phonological word. The main brain area involved in this process is Wernicke's area (in the superior temporal gyrus of the left hemisphere).
5. The fifth process is phonetic coding (the syllabary). The output from this process is phonetic gestural scores. It has been determined that the area involved in the phonological network is the middle-posterior superior temporal sulcus. It has been proposed that a dorsal pathway maps phonological representations onto articulatory motor maps, while a ventral stream takes the phonological representations into lexical-conceptual representations.
6. The final process in the Weaver model of speech production is articulation. The output of this process is sound waves. The main areas outlined for the articulatory network are the posterior inferior frontal gyrus and the anterior insula, both left-lateralized.
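The six stages above form a fixed pipeline, and it can help to see each process paired with its output and proposed region in one place. The snippet below is just an illustrative restating of the list as a data structure; the region labels are abbreviations of the ones given in the text (MTG/ITS/STG/STS/IFG for the temporal and frontal structures named above), not additional claims.

```python
# The six Weaver stages from the text, restated as an ordered pipeline.
# Each tuple: (process, output, proposed neural region per the text).

WEAVER_STAGES = [
    ("conceptualization",     "lexical concept",          "widely distributed"),
    ("lexical selection",     "lemma",                    "posterior MTG / ITS (weak left bias)"),
    ("morphological encoding","morphemes",                "not well defined"),
    ("phonological encoding", "phonological word",        "Wernicke's area (left STG)"),
    ("phonetic coding",       "phonetic gestural scores", "middle-posterior STS"),
    ("articulation",          "sound waves",              "posterior IFG + anterior insula (left)"),
]

def trace():
    """Print each stage with its output and proposed region, in order."""
    for i, (process, output, region) in enumerate(WEAVER_STAGES, 1):
        print(f"{i}. {process:24s} -> {output:26s} [{region}]")

trace()
```

Reading the table top to bottom follows a word from abstract intention to sound waves, which is exactly the ordering the numbered list above describes.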

Disorders of Speech Production

Speech-production aphasia is a defect or loss of the ability to express or produce speech. There are many different symptoms that can accompany this type of aphasia. The main symptoms present in speech disorders are: poor articulation, anomia (a word-finding deficit), paraphasia (unintended words or phrases), loss of grammar or syntax, inability to repeat, deficits in verbal fluency, and aprosodia (loss of tone in the voice). Aphasic syndromes affecting speech production can be classified into two categories: fluent and non-fluent.

Fluent aphasia means that the individual's speech is still fluent and they do not make articulatory mistakes. They do, however, have poor comprehension, a poor ability to repeat, and frequent anomias or paraphasias. Aphasias of this nature include:
1. Wernicke's Aphasia: also called sensory aphasia. This is the inability to arrange speech sounds or to comprehend speech. It develops due to damage to Wernicke's area.
2. Transcortical Aphasia: also called isolation syndrome. This is the inability to speak spontaneously or to comprehend words, although speech is still defined as fluent and the ability to repeat words is intact.
3. Conduction Aphasia: this is an interesting aphasia in that individuals with it are able to speak fluently, understand, and name, but are not able to repeat words. It develops due to damage to, or disconnection of, the pathway between the word image and the motor system that produces the word.
4. Anomic Aphasia: this type of aphasia, as the name suggests, is the inability to name objects, with all other speech properties intact. Interestingly, people with this disorder are only unable to name nouns; if the word they are trying to express is also a verb, they can usually mentally find and dictate it. It develops due to damage to the temporal cortex.

Non-fluent aphasia means that the individual's comprehension is intact but they have trouble initiating, producing, and articulating speech. It is interesting to note that most individuals with these types of aphasia do know what they want to express and can structure it in their minds correctly; they are just challenged by their inability to convert those thoughts into words. Aphasias of this nature include:
1. Severe Broca's Aphasia: the complete inability to actively produce speech. Aphasias of this nature also show problems with repetition.
2. Mild Broca's Aphasia: a less severe form of Broca's aphasia. This syndrome shows noticeable articulation problems, anomia, agrammatism, and dysprosody.
3. Transcortical Motor Aphasia: this includes an intact ability to repeat and name, but difficulties with spontaneous speech production.
4. Global Aphasia: the most severe type of aphasia; its symptoms include speechlessness, poor comprehension, and laborious articulation.
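The fluent/non-fluent taxonomy above effectively classifies syndromes by three bedside features: fluency, comprehension, and repetition. The lookup below restates that scheme as a simple data structure. It is a deliberate simplification of the syndromes described in this chapter (anomic aphasia, for instance, is distinguished by naming rather than by these three features alone), and it is in no way a clinical tool.

```python
# Toy classifier restating the chapter's aphasia taxonomy as a lookup on
# three features. A simplification for illustration, not a diagnostic tool.

SYNDROMES = {
    # (fluent, comprehends, repeats): syndrome
    (True,  False, False): "Wernicke's aphasia",
    (True,  False, True):  "Transcortical (sensory) aphasia",
    (True,  True,  False): "Conduction aphasia",
    (True,  True,  True):  "Anomic aphasia (naming impaired)",
    (False, True,  False): "Broca's aphasia",
    (False, True,  True):  "Transcortical motor aphasia",
    (False, False, False): "Global aphasia",
}

def classify(fluent, comprehends, repeats):
    """Map the three bedside features onto a syndrome from the chapter."""
    return SYNDROMES.get((fluent, comprehends, repeats), "unclassified")

# Fluent speech, good comprehension, but cannot repeat:
print(classify(fluent=True, comprehends=True, repeats=False))  # -> Conduction aphasia
```

Laying the syndromes out this way also makes the chapter's point visible: conduction aphasia is exactly the fluent syndrome whose only deficit among these three features is repetition, while global aphasia is the case where all three fail.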

Conclusion

The human brain's ability to produce speech is a well-researched area of psycholinguistics. Although there is no definitive map that outlines the regions clearly, it appears that science is moving in the right direction through the use of fMRI and lesion studies. The neural precision needed to produce even basic speech activates many regions of the brain. Broca's area and Wernicke's area are the most mentioned, but newer studies have placed emphasis on other regions that aid in speech production. Without the combined efforts of physiological, neural, and cognitive processes, humans would not be able to communicate in such effective ways. It is also important to understand the mechanisms of action by recording how damage or lesions affect the brain. At the completion of this chapter the reader should have a better understanding of the neural components of speech production.

References

Geschwind, N. Language and the brain. Scientific American 226:76-83, 1972.

Hickok, G. The functional anatomy of speech processing: From auditory cortex to speech recognition and speech production. DOI: 10.1007/978-3-540-68132-8_8

Newman, A.J., Bavelier, D., Corina, D., Jezzard, P., Neville, H.J. A critical period for the right hemisphere recruitment in American Sign Language processing. Nature Neuroscience 5:76-80, 2002.

Ojemann, G.A. The neurobiology of language and verbal memory: Observations from awake neurosurgery. International Journal of Psychophysiology 48:141-146, 2003.

Petersen, S.E., Fox, P.T., Posner, M.I., Mintun, M., Raichle, M.E. Positron emission tomographic studies of the processing of single words. Journal of Cognitive Neuroscience. 1:153-170, 1988.

Shuster, L.I. The effect of sublexical and lexical frequency on speech production: An fMRI investigation. Brain and Language 111(1):66-72, 2009.

Soros, P., Sokoloff, L.G., Bose, A., McIntosh, A.R., Graham, S.J., Stuss, D.T. Clustered functional MRI of overt speech production. Neuroimage 32:376-387, 2006.