User:Suzanne Walsh11

'''Suzanne Walsh, B.Sc. Neuroscience'''

Dalhousie Premedical Society Executive

Dalhousie University, Halifax, Nova Scotia, Canada.

Hello, my name is Suzanne. I am continuing as a student at Dalhousie University until I write my MCAT and GMAT to apply to a combined program for my MD and MHA (Master of Health Administration). Since I am taking only electives this year, I chose to take yet another neuroscience course, Psycholinguistics. With little knowledge in the field, and a previously held negative association with the neuroscience of language due to its complexity, I figured that with an otherwise easy course load I may as well tackle it now, while an expert in the field is available to teach me.

Outside of class you can find me with the Dal Premedical Society; I am on the executive committee for the society, and we try to inform and help other students with the medical school application and interview process. I also volunteer two days a week: at the Abbie Lane Hospital in Halifax on the acute mental health care floors, and at the East Coast Forensic Hospital in Dartmouth with inpatients.

I am always trying to learn more about mental health, and the neural aspects of all the types of disorders that we come into contact with in our everyday lives, in one way or another. I am committed to further educating myself, and anyone else I can, about mental health and brain health.

Check out the textbook chapter I wrote for the psycholinguistics course on The Neural Basis of Speech Perception.

- Suzanne

Blogs
Hello all!

This is my wikiblog, based on my class on Psycholinguistics. Each week I will post an informal blurb on the topics discussed in class that week, along with my opinions and/or questions arising from them. More information on these topics can be found on the Wikiversity Psycholinguistics page made by our class, which will be finished by April 2011.

January 17th, 2011
This week’s classes on psycholinguistics went through an introduction to psycholinguistics, highlighting all the major theories, a discussion of language and thought, as well as language and the brain. As the language specialization areas topic was not something I could really argue with, I found myself more inquisitive during the class that discussed Whorf's hypothesis in relation to its validity. Linguistic relativity is the notion that the language we use influences the way we think about the world. This notion has two approaches: that language variations have natural limits, and the opposing view that says language is learned, with no hard wiring that determines it. The classic problem put forth by Whorf took these simple definitions and ran away with them. His notion basically says that language determines our capabilities of thought. When I first heard this in class, before reading the book and re-reading the studies Dr. Newman was talking about, I thought, seriously? My thinking at the time was: if someone uses three colour terms, compared to my many colour terms, but they are also Homo sapiens sapiens and have overall the same brain as me, why should they see a colour any differently than me? They too (and if not, their neighbour) should have two eyes, and retinal cones with as many as four colour pigments, to see the visible colour spectrum just as I do. Thankfully, once I read the textbook I realized that in my frantic attempt at taking notes I had missed the generalizing point the studies were making. They addressed just what I was thinking about, and concluded in the end that Whorf probably shouldn't have used perception to explain his idea. Before they made this conclusion, though, my questions, which had been diminishing, picked up again as the textbook went into odor terminology. So... here they explained and showed through studies that you can’t remember odors if you don’t know what they are called.
They used an example with piano tones: you can differentiate pitches, but you can’t remember them if you can't name them; and another example about children and how they develop the ability to describe olfactory preferences as language develops. My contrast to the first example is that just because someone may not remember the pitch of a key when you play it to them later, without a name for it, it doesn’t mean they can’t tell you they like the sound of a perfect C to F progression over a progression with many sharps and flats at the time you play it. Perhaps names and terms influence and facilitate memory consolidation, but that does not mean they determine your thought capabilities in an otherwise developed language system. Likewise, just because a child can’t tell you that they prefer the smell of baking cookies to their little brother's diaper does not mean that they can’t show you they know the difference, and it certainly does not denote that the ability is a function of language; in my limited opinion. With other forms of communication the child could show the distinction between the two, like with a grimace; it’s quite obvious they can’t spontaneously describe odor experiences if they can’t talk. They likely just have not paired the smell to the name yet. As far as I know we have not yet developed any mind-reading techniques, and so, for someone who does not communicate using language the same as you and I, or does not use language at all, there is really no way we can tell or begin to speculate as to their ability to "think" about a concept. If they have no language ability, or no terms for that concept, in my opinion their thought processes would likely be reflected in their emotions and their body language.
Evidence of the ability to memorize does not demonstrate thought ability, with thoughts and thinking being defined as the ability to think critically about an object or concept, with the use of descriptions, comparisons, and/or subjective opinions.

January 24th, 2011
Language and the Brain. There has been question as to whether the brain areas involved in various aspects of language function autonomously, or work more like a network with interdependent cognitive functions. In my opinion, the idea of autonomous processing seems to go against every other modality in the human brain. Already, we know that the occipital lobe is activated when reading, so the visual system is obviously involved. If comprehension were to occur only in Wernicke's area, how would that even happen, logically? Neurons are little electrical wires; would they just hit a non-conductive boundary and remain in Wernicke's area until the boundary is removed? Since the brain is so conductive, I think that's highly unlikely. To me it seems very unlikely that no signals could be sent elsewhere until the message had been decoded perfectly; and if other messages are being sent, or leaked out, what research shows that those signals are just ignored? Sure, I'm no language expert, but the theory appears to go against logic. These interdependent processes have certain cognitive limits, and developing them into a language has certain critical periods. While it remains true that it is significantly harder to acquire a new language as an adult, and that deaf children acquire American Sign Language (ASL), or any sign language, better than adults do... what about the hearing adult? Does ASL truly act like a language when it comes to bilingualism in the hearing adult? If I decided to significantly improve my sign language abilities right now, could I eventually acquire the same talent and linguistic abilities as someone who has known it their whole life, or do those same critical periods apply? And even if I could learn to sign just as well as someone else, would my brain still show different activation patterns, and would researchers then consider me not truly bilingual in that sense?
I'm not sure on that one, or any of those questions, but they are interesting to think about (well, not for me, because I become irritated that I don't know the answer). I feel that if I could learn ASL and sign it just as well as someone else, regardless of when I acquired the ability, it should be considered bilingualism. The topic brings about too many what-ifs in my head to continue on it, but it's food for thought nonetheless! My big question that came up in this week's class was sex differences in language lateralization. Now, I think it was mentioned in class that this was a myth; it was on the same slide as the myths, anyway. If there is no evidence to support it, why are there so many differences between men and women in their linguistic abilities? Many guys that I know could certainly not out-talk me. Although, is that language ability, or is it processing speed? Perhaps I just process things faster and can thus respond faster. Where I become stumped is the fact that my mom is a relatively quiet person, and my dad is the one I talk with for an hour on the phone; he certainly can out-talk me and use a much wider vocabulary! If men are not more lateralized in language, then why are many women innately better at language than men, or so it seems? Rather than pointing out all the exceptions that would follow this rule, it seems more likely that social factors contribute. Women do appear to more often have stronger language abilities, but just because men don't talk as much as we do doesn't mean they lack the ability; maybe just practice. Perhaps it is more a matter of nurture than nature. Maybe women are naturally more social, and so we develop stronger skills from when we are young. And of course there is the age-old question: which came first, the chicken or the egg?
I think when it comes down to it, though, it's the fact that these are all interdependent cognitive processes, and so when you are a child, being strong in one venue that is tightly linked with another could increase your skills in that linked venue. As you can see, I always seem to develop this sort of circular questioning when I'm asked to think about these sorts of topics in depth... let's just say I don't get into these talks with my friends.

January 31st, 2011
This week was my first class covering all the logistics surrounding speech production and perception. To be quite frank, the concepts are still somewhat of a blur in my head and will require some extra time before the definitions are ready on demand. Regardless, I find it quite amazing that speech is broken down into so many components; someone had to be very self-aware to be able to decipher between various phones and phonemes and the application of reliable phonotactic rules. What I find extremely interesting is how all languages seem to develop with similar components, and how there is debate over whether ASL even has versions of consonants and vowels, using movements and holds. As simple as sign language appears, it obviously is much more of a language than surface-value examination makes it seem. When all of these aspects of language are surfaced, it becomes obvious that language is a very complex ability for our brain. It is fascinating how even the smallest of injuries can completely take away one of these components. Typically I like to think critically about the material and come up with reasons for or against each topic; however, when it comes to a body of information that refers to factual knowledge, there is less for me to argue with. The theories of speech perception allow me to have more of my own opinion, though, even with my limited knowledge in the area; hopefully as the class progresses I'll be able to return to my blogs and see how my thought processes have changed. I'm thinking in terms of the Motor theory, the Cohort theory, and the TRACE model. Since I have more knowledge in functional neuroanatomy, I'm more inclined to respect the Dual-Stream model of processing, with the influence of some of the TRACE model, perhaps. Without a doubt, I'm sure the Motor and Cohort theories could play a role also, but either of them alone is a very shallow argument in my opinion.
My reason for including some of the TRACE model is the inhibition/excitation of various words or phonemes; this reminds me of the visual system when processing images and border perception. When light is shone on certain cones, they inhibit others. A dual-stream model is also present in visual processing. Logically, for me, these two theories make sense. The other two are slightly more abstract, and I can't think of a way that they would work anatomically on my own, so I'm therefore less likely to agree with them. Perhaps with more information on What, Where, and How I'll be able to concede their influence.
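To convince myself that the lateral-inhibition idea could actually work, I put together a toy sketch in Python. This is my own simplification, not the real TRACE model: each word candidate gets some bottom-up support from the input, every active candidate inhibits its rivals, and a small acoustic advantage snowballs into a clear winner.

```python
# Toy sketch (my own simplification, not the real TRACE model) of lateral
# inhibition among word candidates: candidates matching the input receive
# bottom-up excitation, and active candidates suppress their competitors,
# so one winner gradually crowds out the rest.

def recognize(candidates, input_support, steps=30, inhibition=0.1):
    """candidates: list of words; input_support: word -> bottom-up evidence."""
    activation = {w: 0.0 for w in candidates}
    for _ in range(steps):
        total = sum(max(a, 0.0) for a in activation.values())
        new = {}
        for w in candidates:
            excite = input_support.get(w, 0.0)
            # each word is inhibited by the summed activation of its rivals
            inhibit = inhibition * (total - max(activation[w], 0.0))
            new[w] = min(1.0, max(0.0, activation[w] + excite - inhibit))
        activation = new
    return activation

# "cat" gets slightly more acoustic support than its rivals...
acts = recognize(["cat", "cap", "cab"], {"cat": 0.12, "cap": 0.10, "cab": 0.10})
# ...and over the iterations, inhibition amplifies that small edge.
winner = max(acts, key=acts.get)  # -> "cat"
```

Run it and "cat" saturates while its rivals decay, which is exactly the winner-take-all behaviour the inhibition/excitation story describes.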

February 7th, 2011
Since we had a short week, I will discuss morphology. My reflection resulted from the "Rulelike Processes in Language" section in the class reading (referenced below). The section discusses regular inflection, adding the affix -d to the stem of the verb, contrasted with irregular verb forms that are relatively unpredictable. Now, the models that are proposed are all rather different, but I don't really feel convinced by any of them. I like the notion, under the associationist theories' connectionist model, that the regular and irregular patterns are computed using a single mechanism. The idea of stem sounds and past-tense sounds being remembered by recording and superimposing associations sounds more on the right path. When I was learning to speak in different tenses, I can confidently tell you right now I did not sit down and memorize anything (definitely no attention span for that), and my parents were probably not the type to correct me repeatedly to get it into my head as a form of memorizing lists. Creating patterns based on the sounds, and the links between them in relation to verb tenses, causes me to wonder about the influence of the auditory cortex in this area. Doing a quick search on PubMed, I found research on its influence in analyzing speech signals and so forth; it would be interesting to see some select lesion studies and their influence on language development. The associationist theories do not, however, account for inflectional mappings. Why do they have to be considered in this group? They are non-language mappings; it just reflects the brain's innate ability to learn new things: it is plastic. Perhaps these are memory-based acquisitions, or maybe the patterns become expanded and altered to accommodate the new input-output mappings. Or perhaps the unsystematic and semantically unrelated homophones with different past-tense forms are remembered using the same sound processing the connectionist model suggests, based on the context of their use.
I guess, all in all, I feel the models are all a little simplistic. To be honest, I think it will be really difficult to learn exactly how the human brain creates networks and links the lexicon entries. All we can do is speculate; none of us know all the information, even if we think we do. Even in my meager attempt to rationalize my opinions on the matter, I can see it must be quite difficult.
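To see for myself how a single mechanism could handle both regulars and irregulars, here is a toy analogy sketch in Python. It is my own illustration, not the connectionist model from the reading: a past tense is produced by copying the transformation of the most similar remembered stem/past pair, so "talk" patterns after "walk" while a nonsense stem like "spling" patterns after "sing".

```python
# Toy single-mechanism sketch (my own illustration, not the model from the
# reading): produce a past tense by analogy to stored stem/past pairs, so
# regular and irregular forms fall out of the same process.

def shared_ending(a, b):
    """Crude phonological similarity: length of the shared word ending."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def past_tense(stem, memory):
    """Copy the most similar remembered stem's transformation onto `stem`."""
    analog = max(memory, key=lambda known: shared_ending(stem, known))
    overlap = shared_ending(stem, analog)
    # swap the stem's shared ending for whatever the analog's past form
    # does with that ending ("walk" -> "walked" turns "alk" into "alked")
    return stem[: len(stem) - overlap] + memory[analog][len(analog) - overlap :]

# a tiny "memory" of superimposed stem/past associations
memory = {"walk": "walked", "jump": "jumped", "sing": "sang"}

regular = past_tense("talk", memory)      # patterns after "walk" -> "talked"
irregular = past_tense("spling", memory)  # patterns after "sing" -> "splang"
```

One stored set of sound associations yields a regular -ed form and an irregular vowel change, with no separate rule module; that is roughly the intuition I find appealing.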

February 14th, 2011
This past week we discussed words and word recognition, as well as sentence processing. I wasn't too argumentative in these classes; the information seemed straightforward. However, while continuing my readings on sentence processing, I began to ask questions about the section concerning working memory. Perhaps we will discuss this further in class today or Wednesday; nonetheless, I'm going to talk about it because it came just after the syntactic and grammatical processing sections. So here is my argument. The book explains how working memory is like the set of tools or resources that you have to perform operations, and is limited by tasks that demand more resources, particularly ones with higher complexity. The product of these demanding and complex tasks is errors in processing sentences, or delayed processing times. So, that's all fine and dandy... the book goes on to explain its relation to individual differences in processing syntactically complex sentences and those that include ambiguity. Put simply, the book states that if you have a smaller amount of resources in your working memory bank, you will process sentences more slowly; you will take longer to understand something, and understand it less accurately at that. Conversely, the person who has a higher working memory capacity will understand something faster. So now I will take on my devil's advocate role. What about the student who has attention deficit hyperactivity disorder (ADHD)? Their working memory is said to be slower, or hindered, compared to the non-ADHD student. Yet, in my experience, these students understand what you are saying before you are even finished telling them. They pick up what you are saying and then start cutting you off so you don't have to finish your sentence. How is it, then, that their impaired working memory, which is supposed to hinder their comprehension skills, can still allow them to process language as fast as or faster than the average person, and do so flawlessly?
Perhaps they may jump to the wrong conclusion, but that is not because there was ambiguity in the word choices; it is more because people with ADHD are impulsive and may not listen to the whole story. The authors explain how you cannot use traditional working memory tests to predict sentence comprehension, but you can use alternative working memory span tests. Are these models actually testing working memory, though? Or are there other factors at work? When an IQ test is administered, the end result will give you a percentile for working memory span, and another for processing speed. It seems to me that the processing speed score would equate more accurately, or at least be a contributing factor, to the speed of comprehension. For example, a person with an overly high processing speed and a limited working memory may perform just as well as their comrade. Therefore, I have to say that this model is limited. Maybe it does account for this difference, but it certainly isn't clearly stated in the book. I'll have to point out that there is obviously more to the story when it comes to working memory and its relation to sentence comprehension times, in my opinion.

February 27th, 2011
Reading week is over, and now it is time to get back to reality. This week's blog entry is based on the week of classes prior to the break. In those classes we discussed sentence processing and discourse. The topic that sticks out in my mind when referring back to these classes was the discussion of mental models in the discourse section. In class, Dr. Newman showed us some examples of these by reading a bit of text about a man and his wife going to a restaurant. Afterwards, he asked us to respond to some questions based on what we had heard/read. He showed us that our mental model of the surface representation is really not that good; we cannot accurately recall the individual sentences and propositions very well. It is more the text-based mental model that we remember, a macrostructure of what the story or text means as a whole. With the questions, it also became apparent that the situational model is always present. In the story, there was a description of what the couple was eating. From that, he asked what type of restaurant they had eaten at. One of the other students said "Italian," but in fact the text didn't say that at all; it just described an Italian dish. Later, Dr. Newman explained how, a week or two down the road, most of us probably would not remember this paragraph; we might recall there was a couple eating, but that may be all; we might not remember that it was his third wife he was with. However, now that it has been a week, I actually find that I remember it better than I did right after I heard it and he was asking questions. Which makes me ask some questions of my own. There has to be more to the story than just those mental models. Normally I would not remember almost anything about that story today, but because he brought my attention to it afterwards and made me start thinking about it again, now I do. How much of a role does attention play in things?
Of course, I have a striking interest in the discussion of attention, so perhaps I will naturally revert back to it. It does seem logical to me that having increased attention on something will cause better recall of the event or situation later on. Where does that boundary occur, though, and is attention all that is involved? Probably not. After the reading of the story I wouldn't say I was that much more attentive, although I was actively thinking about the situation. So to what degree does using and manipulating linguistic stimuli account for a better mental representation of that material? It seems to me that things like the inferences we discussed, and that are in the textbook, would probably constitute manipulating the material. An inference means linking two concepts, so that can't be done on its own; you have to do it actively yourself. Perhaps that is why you remember it better: using the information more causes you to remember it better, even if it is just for a few split seconds.

March 7th, 2011
This week we had a lecture discussing language and music. This is an interesting topic for me, as speech and music are both on my favourite-things list. It was very interesting to learn that some of the same brain areas that process language also process music. I'm not sure why I assumed they would be different, but I suppose I just never thought of music as being similar to language. I suppose it is sort of a language of its own; the auditory cortex still needs to process the sounds, so there obviously has to be some overlap. In terms of musical abilities being innately determined, I'm not so sure I believe that statement. Prenatally, I think the brain has better things to do (like develop) than to interpret and differentiate between new sound signals, not to mention the obvious impedance of the sound when transferring through the mother. Somehow I feel that the most important thing to focus energy on would be developing the essential brain areas before developing new specialized musical centers. But maybe that's just me. Once babies are born, it does not surprise me that they are just as sensitive to non-native musical sounds until the age of 5 to 7. Those little humans are sponges! They can learn anything until their brains get moulded by their surroundings. That being said, music or language, it's learning in general. If you talk a whole lot, you'll probably become better at it than someone who doesn't. And if you replace the word "talk" in that sentence with sing, play music, play basketball, swim... I think the sentence would still be logical, especially if you start those things at a very young age. Okay, so music is a little bit different if they are detecting differences in the intervals of musical styles that are unknown to them... but music is still very systematic. Who's to say that they are not just picking up on the patterns sooner?
Are these studies giving the babies a one-time shot to listen and detect changes, or do they get to listen to them for a little bit? In terms of cross-cultural features of children's songs, why should that be surprising? We all talk to babies similarly and treat them similarly, so why would we make complex lyrical songs for kids when we are trying to teach them? Repetition, rhyming, alliteration... they are learning tools, and they make it easier for the child to listen and sing along without getting confused by too many new words and sounds. Personally, I think we should use these features in trying to teach teens too; maybe a little informative jingle would catch their attention better these days!

March 14th, 2011
Our discussions in class have moved on to the production aspects of language. With that, we discussed gestures and writing. Unfortunately, due to an annoying illness I caught, I was unable to attend Friday's lecture, but I did get to listen to the podcast. I think it was last year's, though it likely contains much of the same information. In the discussion on gestures, we had a group discussion on why we use gestures when we speak. An important aspect of that is to communicate information, but gestures also provide emphasis, and an extra layer of information in what we are saying, conveyed through our body language. There are also some other subtle functions of gestures that we may not have thought of. Turn-taking: how do we know when the other person is done talking and that it's our turn to jump in? Gestures can provide those cues. Then there is the view that they facilitate lexical access for the speaker, as Robert Krauss argues. Our conversation then steered into: well, if we are gesturing for the listener, why is it that we still gesture when we are talking on the phone, when our listeners cannot see us? This question makes me half want to say, well, that provides evidence for Krauss's argument; it must be to assist ourselves. But then the other side of me says, well, telephones have only been developed recently, and we were talking long before that. Since gesturing was one of the primitive forms of language, to me it makes sense that when we started speaking, we retained some of those gestures with our speech to help explain what we were saying when language wasn't as concrete. Over the years, they have just become smaller and more ambiguous, as they are not taught or used regularly. That being said, gestures while talking on the phone may simply be a habit from talking to people in person. While you are thinking about what to write, or if you are having a conversation over the computer, you will notice small gestures that you make then, too.
Or at least I make small head movements, or facial expressions that reflect what I'm thinking about. I'm sure they do not facilitate my lexical access, but they would create no ambiguity for a person who was watching me if I were speaking. Our next class discussion was on writing. We discussed the writing process and the Flower and Hayes model. This model incorporates all the influences that have an impact on your writing. This includes things like your long-term and working memory, your task environment, and aspects of the writing itself like planning, translating, and reviewing. Dr. Newman brought up how, when he is typing fast, he makes spelling mistakes more related to the phoneme than to letters placed close together on the keyboard. I too have noticed that this is basically the only type of mistake I make when typing; for instance, typing fone instead of phone. Why is that? Why is it that I always make these mistakes when I'm typing, but I hardly ever make this type of mistake when I'm writing by hand? Is it because I'm much faster at typing than I am at handwriting? Perhaps because we type so fast and it becomes such an involuntary thing, some correcting center that evaluates our writing gets lost along the way? Once I start to slow down my typing, and I begin to think about it and about making every correct key placement, I don't seem to make these mistakes as much; instead, the mistakes I make are slips of hitting the wrong key. The next part of the lecture that made me do some thinking was when we discussed some of the neural basis of writing. The inferior parietal areas, like the angular gyrus, are said to be involved in writing complex written forms. Although current neuroimaging information shows no difference between letters and pseudoletters, clinical evidence showed that certain patients with these deficits could, for example, write out pseudoletters but not real letters.
This made me think about the information I learned while writing the chapter on the neural basis of speech perception. The inferior parietal areas, including the angular and supramarginal gyri, are a potential location for phonological encoding. If that were the case, perhaps that would be the location of the problem when people are typing fast. There may be another brain area that reminds you, 'okay, I'm typing out this language,' and if you are typing too fast, perhaps that region just gets bombarded and the phonological encoding phase goes straight to the motor control areas. Sure, it may be a little simplistic (I know our brains don't say 'okay, I'm typing out this language'), but it's the type of connection that floats around in my bubbly head. I hope my rant wasn't too difficult to comprehend!

April 4th, 2011
This week we have begun the presentations of our debates. We choose a topic, and there is a team that is For the topic and one that is Against. Unfortunately, I missed Monday's and Wednesday's debates due to the unfortunate passing of a friend; however, I did get to attend Friday's debate. The topic was based on the type of language instruction that should be given to children who have undergone cochlear implant surgery. Each side presented its case to the best of its ability, and it was interesting to watch. I feel that when debating topics such as these, each position must be largely left-wing or right-wing, with no middle ground; my opinion on the matter would cover the middle ground. I cast my vote based on how each team presented the material to convince me, and how I would vote if I had no prior knowledge in psychology to generate my own opinion, although in reality I would not have chosen either. One side argued for only oral communication training for the child, while the against team argued that total communication training in multiple languages (i.e., American Sign Language, etc.) is the better way to go. Each side had empirical evidence to support its claims as to why we should choose its side. However (and I feel this occurs in many clinical settings as a whole), too much emphasis is placed on the studies. What about the individualistic nature of assessment and learning? There are too many ways that people can differ when it comes to learning to group 181 students who share one common trait and then say that, because a large percentage did well, it was effective for all of them; this is referring to one study mentioned that examined oral communication instruction. Many of the students probably did very well based on the instruction, their home support system and environment, and their own innate abilities. What about the ones who did not? Well, those outliers likely got omitted from the statistics, although I didn't check the data.
But that does happen very often in research, so everyone just gets grouped under that umbrella because it works for most people. This argument can be translated onto the other position as well. Some children obviously do do well with oral communication instruction, as is apparent from that one study alone. Some people's environments are better suited for the child to get neither oral communication instruction nor total communication instruction, but rather to focus solely on American Sign Language, because their parents both speak it. On the other hand, maybe the child just doesn't want to learn all these different methods of communicating; maybe they would just like to master one and be able to confidently rely on it. The bottom line is that it is too simplistic to group all children, especially with varying dates of implantation in their development, into one neat category of 'following implantation you will study this...'. Children require individualistic attention and assistance. A research paper does not have a brain center specifically designed for reasoning.

April 11th, 2011
The topic of my blog today is whether or not people who speak minority languages should learn the dominant language as early as possible. On Wednesday we watched a debate on the topic, and each side presented its opinion. But regardless of what their studies found, it has to be really difficult to sit in a class attempting to learn the material when you do not speak a word of the language. Although in the future we could create schools that are taught in some of these languages, the fact of the matter is that currently we don’t. Plus, there is no way, even in the future, for us to have a school for every single minority language in every province, let alone every town. It is just not feasible. I grew up in Bedford, just outside of Halifax, Nova Scotia. It is a preppy little town, and very multicultural. Growing up, I had friends who were from Iran, Nigeria, Taiwan, Hungary, South America, and England, to name a few. Of course, some of these people already spoke English, but those who didn’t still went to the same school as I did; they initially went to a special class each day to help them learn English. But some also went to an after-school program once or twice a week to help them retain and improve their native languages. Every single one of these friends now speaks perfect English, without an accent, and most can speak their native language very well also, to my knowledge. What is the benefit of moving to a new country, only to retreat from the local people and segregate yourself into your own culture? That would not help the stigmatization that many minority cultures already experience. Aside from the intuitive nature of the matter, if the children are very young they should be able to acquire both languages without a problem. This is a well-known fact, and was quite apparent among my peers. Therefore, I see no reason to change the current system that is in place.
For families or cultures that feel they are losing their heritage, it should be their responsibility to retain what they can and develop an after-school program for their children, or something similar. Canada is embraced for its ethnic diversity, especially apparent in our schools. Let’s keep it that way.