User: Jennfleur

I am a 4th year psychology major at Dalhousie University. I am participating in a learning project with my Psycholinguistics class. We will be creating a textbook on this subject (Psycholinguistics). I am responsible for producing the chapter on the development of speech production.

Debates Week 2 - April 4th - April 8th, 2011
This was the final week of classes. As the debates went on, either the students became better at arguing the topics or the topics became more difficult, because I had more and more trouble choosing a side. The last debate ("The language we speak constrains and shapes the way we think.") was especially difficult because there is evidence for both sides and it is such an abstract concept (in fact, I had difficulty following at times!). We really can't separate language and thought and effectively study this topic, as we need language to explain our thoughts effectively. You can't explain what you think without using language or some other kind of representation (ie: gesture, and even then it's difficult or there is lots of room for miscommunication). I think one of the purposes of language is to communicate our thoughts. Maybe we can think all we want without language, but if we want to communicate our thoughts we need some kind of system to get our ideas across.

Wednesday's debate had me thinking the most, I think because it was similar to my own topic (AGAINST "The resolution of the Oakland, CA School Board, stating that Ebonics (African American Vernacular English) should be the language of instruction in classrooms where this is the dominant language of the majority of children, should be implemented in schools where appropriate"), so I had some background knowledge on the topic. As well, we live in a multicultural country where this issue is very relevant. Wednesday's topic was: "It is best for children of linguistic minority groups to be educated in the society's dominant language at early stages of schooling (ages 4-12), because this will ensure their success in later stages of school. Ethnic languages can be introduced later when they will not interfere." One argument the against side made was an example of a case study from a northern European country that introduced the minority language as the primary language of education in the early years and saw success with its students in learning the dominant language later on. However, as the other team pointed out, this is a single case study. I'd also like to point out that we don't know how prevalent that minority language is throughout the country. Perhaps that language is similar to French in Canada. French is not used everywhere in Canada, but there are multiple communities where a knowledge of this language is helpful or necessary. I think Canada's official languages should be taught in all schools (or students should be given the option, knowing the benefits of bilingualism). However, if a language is less prevalent throughout Canada, I don't think it should be emphasized in the schools. First of all, if we did apply this model to Canada, how would we be able to determine which language should be taught as the minority language? There are many cities within Canada that are extremely multicultural. How can we accommodate all these languages?
Surely it's not possible to have public schools teaching multiple languages. I believe that is what private schools and language classes are for. I have Chinese friends who have attended Chinese language lessons to maintain their language and culture. Also, what would be the repercussions for Canada of teaching minority languages? One team noted that there are more and more immigrants each year. If the majority became educated in a minority language, is it possible that Canada would adopt a new dominant language? I think these are important questions that need to be considered before adopting a plan such as the one implemented in Europe.

I'm not sure if the for side made this argument, but in class we also discussed the critical period hypothesis and the research showing that the earlier the acquisition (before puberty), the better. In fact, learners' proficiency scores decreased with age of acquisition. I think this research is relevant to the debate, as it clearly shows that the likelihood of proficient acquisition (of the dominant language) would decrease after age 12. Why begin instruction at that age when it is possible to start younger? Again, I emphasize, there are other ways for children to maintain their minority language without sacrificing the many opportunities that come from full proficiency in the dominant language (higher education, communication outside the community, bilingual benefits, etc.).

Debates, March 28 - April 1, 2011
After a recent conversation with friends about bilingualism, I've decided to revisit this topic. In class, we looked at a study by Kim et al. (1997) where early (before puberty) bilinguals had overlapping areas of activation in Broca's area for tasks requiring them to think in English or their native language. This study raised many questions for me. If there are such overlapping areas, how does a bilingual distinguish between languages? I believe that most bilinguals, even young children, are able to distinguish which language they are speaking. For example, a friend mentioned a 2- or 3-year-old child who was learning French and English simultaneously and was able to point to the color red and say "Mama rouge" and "Daddy red". Furthermore, there are cases of bilingual aphasia where a patient loses only one language. If language is represented in the same area, wouldn't they lose both languages?

I am also curious about particular cases of bilingualism and second language learning. What about individuals who can comprehend a language but cannot speak it? For example, a child growing up in a Spanish-speaking family in the US. Would it be easier for them to learn the language, and would they show brain activation similar to that of early bilinguals? I suppose there is less research done on special cases, but I find them interesting. My mother experienced a similar situation. She spoke French as a first language until age 5, when she started school in English. She then stopped speaking French and had to re-learn it as an adult. When I hear her now, she doesn't sound like a native speaker of French, as her pronunciation is sometimes off and her syntactic structure is sometimes incorrect. Thus, I don't believe she would have the same brain activation as early bilinguals (although perhaps in Wernicke's area, as this is seen across all bilinguals regardless of age of acquisition). I think my mother's situation emphasizes the influence of social context on language acquisition. Other children would not have understood her French, and thus she would have been motivated to speak English.

What about bilingualism in a spoken language and a signed language such as ASL? I was watching Sue Thomas: F.B.Eye this weekend. Sue has a hearing impairment, but is fluent in both spoken English and ASL. I noticed that while she was having an intense conversation in ASL with another character who had a hearing impairment, she was mouthing something - as if there were words to go along with what she was signing. Yet, from my knowledge of ASL, it has a different structure, so she couldn't be mouthing matching English words and sentences. Is this the influence of spoken language on her ASL, or is mouthing part of ASL?

Bilingualism and Aphasia, March 21-25th, 2011
I have so many questions about aphasia. I have not encountered anyone with this condition before, and the videos made it easier to understand. In Broca's aphasia, one's expressive language is affected, but comprehension is good. The video we watched was of a 30-40 year old man. He was able to use gestures, associations between words, and writing to help him communicate and pronounce words. I wondered how a female patient with a similar condition might differ. I know there has been much debate about whether there are sex differences in language lateralization. My original thought was that women might have a better rate of recovery because they usually have greater expressive language. However, a study by Rogalski, Rademaker, and Weintraub (2007) found that verbal fluency and rate of decline were worse for women than men! It should be noted that these patients suffered from primary progressive aphasia (PPA) or Alzheimer's, which may be different from patients recovering from a stroke. PPA patients continue to deteriorate, whereas stroke patients show improvements. I think this is an interesting area for further research.

I was also interested in global aphasia. Would it be possible for these patients to learn sign language? This would reduce their need to rely on the intonation of their only word/sound/syllable (ie: "no-no", or sometimes no words at all) to express their desires and thoughts. In one case study (Moody, 1982), it was found that Total Communication was effective in teaching a man with global aphasia. I think this is an interesting direction, but I was unable to find much recent research on the topic. I'm not entirely sure how it would work, either. The textbook (Jay, 2003) mentions that signers with left-hemisphere damage have difficulties much like those with verbal aphasia, suggesting sign language is represented in the same area. If this is true, learning Total Communication (or sign language) after suffering a left-hemisphere stroke would require you to train the right hemisphere to acquire the sign language. Is this actually possible? If so, why can't adults who have suffered a stroke simply reorganize verbal language to the right side of the brain?

I think it's also interesting to see how cognitive functioning is dissociated from language. The patients we saw in class had normal cognitive functioning despite their language problems. This allows them to use their intelligence to find other methods of communication (gestures, writing, etc.). In one case, a patient with a blind wife took his wife's hand and allowed her to feel the item (ie: fridge) he was trying to communicate information about. I think this shows their continued ability to be creative and productive in their forms of communication despite difficulties with language.

Language Development March 14-18th, 2011
We discussed language development this week. It was interesting learning about infants' perception of speech and the development of syntax. In Kuhl and Meltzoff's (1996) study, adults were able to distinguish between native and non-native babblers at 6 months. If adults can do it, maybe the infants can too? In order for an infant to discriminate, I think the babble would need to have either a consistent syllable structure similar to their native language and/or the prosodic features of their native language. Early babble doesn't necessarily follow these patterns, and thus it would be harder to discriminate. I think the infant performing the discrimination must be old enough to recognize that the phonemic sounds in the babble are similar to the ones in their native language. Perhaps once infants are able to produce some phoneme-like babble in their native language, they may be able to distinguish between native and non-native babble as well. Infants in the variegated babbling stage are able to produce multiple syllables that follow the phoneme patterns of their native language, so they may be able to perform this discrimination. I also had a few questions based on Kuhl and Meltzoff's (1996) original study. If an infant is learning two languages and thus babbling in two languages, is an adult still able to make the discrimination? I also wondered whether adults are able to match babble to speech if they are given samples of speech in various languages and patterns of babble in various languages (or are they only able to tell that it's a non-native language)? We looked at Genie's case, where she was only able to learn to string together a few vocabulary words and wasn't able to grasp the concept of syntax. We also looked at pidgins and creoles, where regular syntactic structure doesn't show up until the second generation of a pidgin (the creole).
I think these cases may be telling us something about the order of the development of language - semantics are learned before syntax. Perhaps an extended vocabulary is necessary before one can attempt to understand a grammatical structure. When you think of a child learning to speak, you see a similar pattern - the child will start using words without any structure; sometimes one word represents an entire sentence (a holophrastic sentence). It is not until the child is able to produce multiple words that rudimentary grammar develops.

Speech Production, Gestures and Writing, March 7th - March 11th, 2011
This week I am going to focus on the writing section from Friday's lecture. This lecture was of particular interest to me as I am currently tutoring a boy, C, with dyslexia and dysgraphia. We discussed the Japanese written forms Kanji and Kana. Kanji is a logographic system, which uses symbols that are representative of concepts, whereas Kana represents a limited set of syllables. Immediately, I thought of C and his difficulties. If Japanese were his native language, would he find one of these written forms easier to read or write than the other? I think those who suffer from learning disabilities would struggle with Kana compared to Kanji. Their problems usually stem from deficits in phonological awareness, and that would be much more evident when using a system based on syllables. Furthermore, in C's case, he memorizes words rather than sounding them out, and he is particularly good at it! Thus, memorizing concepts may be easier for him. It's a shame English doesn't have a similar parallel written system to help students. Maybe Japanese readers have lower rates of learning disabilities that are rooted in phonological problems (or such problems are simply less pervasive), since they would be able to avoid the phonological portion of reading by focusing on Kanji.

We also looked at the Flower and Hayes model, which integrates the task environment, working memory, and long-term memory of the topic to be written about. We didn't discuss how this model breaks down (as would be the case for C). In C's case, his memory is great and he has knowledge of topics to write about, but his planning and organizing skills are very poor. Thus, I would conclude that part of the problem for those who struggle with writing may be an integration problem. You must organize the meaning you are trying to convey while maintaining the proper structure and following appropriate procedures for the audience - somewhere along the line, something gets left out. Thus poor working memory may be the point where the integration breaks down. But could there be other factors? I would suggest that physical difficulties may also prevent this system from working. For example, C struggles with handwriting; he loses track of his train of thought because he is putting energy into physically creating the letters. Would this necessarily be a working memory deficit? I know that if I put too much attention into creating perfectly shaped letters, I lose my train of thought. Perhaps there is another breakdown in the model? It would be interesting to see how writing models incorporate various writing difficulties. Maybe some kind of motor mechanism should be included in the model?

Sentence Processing and Music and Language, March 2nd - 4th, 2011
This week we began to learn about speech production. As this is related to my chapter on the development of speech production, I generated many questions from this lecture, but I am going to focus on finding words and naming studies. One study involved an image of a circle; participants had to either name it ("circle") or read it (the letter "O"). They were faster when reading it - most likely due to the direct route of the dual-route model (learned in an earlier lecture), where participants would be activating the phonology directly from the orthography, whereas for naming they were activating a semantic network. How might those who are illiterate do on this experiment? I think their lack of knowledge of the alphabet might make them faster at activating the naming route. Similarly, how would young children who are pre-literate perform? Although they would have some understanding of both the alphabet and shapes (due to learning at preschool or daycare), I think they would likely perform similarly to the illiterate group. At that age, children are taught shapes more frequently, and thus I think they would be faster to name the shape. At this age, they do not yet understand that the orthographic shape O maps onto the phonological sound /o/.

We discussed other studies showing that a conflicting word presented with an image (ie: a picture of a lemon with the word "apple") causes interference when the word is semantically related, but when it is phonologically related (ie: a picture of a tank with the word "bank") it causes facilitation. What about morphology as a facilitator? In a study by Feldman (2002), morphological (vowed-vow), orthographic (vowel-vow) and semantic (pledge-vow) primes were compared during a lexical decision task, a naming task, and a go/no-go naming task. Participants were fastest at the naming task when given a morphological prime (followed by semantic and orthographic primes, respectively). Interestingly, compared to the other primes, when the morphological prime duration increased, the facilitation effect increased. Thus, it would seem that morphologically related primes are an even greater facilitator than semantic or orthographic primes.

I wondered further whether the fact that there seems to be a different method in the way we access phonology vs. semantics for naming tasks (ie: we have to access the semantics of the image we are trying to name and then associate it with the phonology) might account for those who have deficits in naming familiar faces. I have many friends and family members who are awful at remembering names yet can remember someone's face, where they met them, how they felt about them, etc. Could they have a particular weakness in the association between phonology and semantics? Or perhaps their ability to remember the specific details of their encounter with the person is interfering with their ability to retrieve the name? I wonder if these people also struggle with naming things, or if it is specific to faces. In a previous class, we learned about the fusiform gyrus, where faces are processed, so perhaps face naming is separate from other naming tasks.

Sentence Processing and Discourse, February 14-18th, 2011
During the lecture on discourse, we learned about story grammar, where each story has a setting and episodes (initiating event, internal response, establishment of a goal, trying to achieve the goal, outcome, and reaction). We learned that stories don't need to have all these features, but do require an initiating event, goals, and outcomes (and a reaction). I wondered whether this affects the quality of the story. If you have more detail in the story, does it make it better? Not necessarily - too much detail can make the story long and boring, and you can lose focus on where the story is heading. Some stories can also seem pointless - the speaker just tells a connected thought for no apparent reason but to say that something happened. Is the ability to tell stories and to pick the right information to fill the story grammar an innate characteristic, or do we learn to tell good stories, getting better the more we practice filling in the story grammar? From previous courses, I've learned that children follow a developmental pattern where they continually get better at telling stories. At first, their stories may be fairly simple, but they eventually develop more details. Therefore, it may not be innate. On the other hand, we see that some people are very good at telling stories - giving just the right amount of detail.

During the discourse lecture, we also talked about remembering the gist of a story rather than its exact wording. I found this interesting because I know that children with learning disabilities may have trouble getting the gist of stories. Sometimes they may focus on something that is more relevant to them. For example, a story may mention briefly that the boy brushed his teeth with a red toothbrush. Since the child him/herself has a red toothbrush, that part of the story may be remembered more easily. I'm curious why this happens. A child who doesn't have a learning disability may also have a red toothbrush, but can still remember the gist of the story. Does it have to do with filtering relevant information, or maybe with the child's working memory?

I don't understand how grammar started or why we follow these rules. It's not something you think about when you speak, because it's automatic. In class, we looked at the example "Phil put the books" and how in English the verb "put" requires a phrase specifying a location. But in another language a similar verb might not need that specification. Who decided what needed to be specified? I know that when I learned a second language, this was confusing: if English requires an additional preposition, I would try to add it when speaking French, yet that would be improper according to French's rules. Wouldn't it be easier if every language had the same rules? It would certainly help in learning another language - you would only have to worry about pronunciation and vocabulary, not structure, organization, and new rules.

I was also curious about children who have reading disabilities and how they parse garden path sentences. Would they be able to parse them properly? I've seen that children with learning disabilities forget commas and miswrite or misspeak sentences, so I am guessing they may struggle on the perceptual side too. Would their impairment also affect how they parse those sentences from an auditory sample?

Word & Word Recognition + Speech Perception, February 7-11, 2011
In class, we talked about how the preceding context activates related words more quickly. For example, a sentence about bugs will activate "ant" more quickly than "spy" or "sew." I began to wonder about individual differences. What can we tell from individual variation in the retrieval of words? For example, with a more ambiguous sentence such as "There were many bugs in the room," would "spy" or "ant" be activated more quickly? Would it depend on your experience - if you worked as a spy, or were a spy-gadget fanatic, would "spy" activate more quickly than "ant"? I also wondered about reaction times in bilinguals. Do they have a semantic network for each language, or are the networks connected? If they were given the above sentence, would it take longer for them to activate a related word because of the connections between their languages - would they have to go through words in both languages before they reached the correct label?

The idea that language abilities are independent of cognition is intriguing. In many of our classes we've touched briefly on this subject and noted that while many individuals with cognitive disabilities struggle with language, those diagnosed with Williams syndrome seem to have normal verbal skills. Therefore, it would seem that in this case, cognition is separate from language abilities. However, when we consider other disorders such as autism or Down syndrome, language is impaired. Is this language impairment separate from the intellectual disability? Those with Down syndrome also seem to display intact pragmatic and communicative abilities. If they have a language impairment, how do we account for their intact pragmatic skills? Could this tell us something about the way language is organized? Maybe language isn't one big package of pragmatics, semantics, syntax, and morphology; maybe these are somehow separate entities represented in the brain?

I also wanted to comment on the syntax lecture. We looked at how language has rules for organizing words into sentences. But we can still understand the telegraphic speech of a two-year-old, can figure out the meaning of words that have typos, and can understand someone who misses a verb tense or misplaces a word in a sentence. Is grammar truly necessary, then - do we need to follow these implicit rules? (Side note: I am still having trouble wrapping my head around the fact that we implicitly follow these rules yet have difficulty expressing them. Why is that?? We don't have trouble expressing the rules for games, driving a car, etc.) Although grammar is not always essential for comprehension, I think it does play an important role: it conveys more precise information about the meaning we are trying to express. For example, "the dog chases the cat" is semantically different from "the cat chases the dog," but we only see that difference in the order of the words in the sentence. The individual meaning of each word on its own is the same in each sentence, but the meaning of the whole sentence is different because of syntax. Similarly, if we changed the original sentence to "the dog chased the cat," the change in verb inflection from "-s" to "-ed" gives us information about the timing of the utterance.

Morphology, January 31, 2011
When we were discussing ASL morphemes, I began to think that this language's "morphemes" made intuitive sense. It was easy to follow "I give to you" because as we speak we tend to make that gesture anyway. I wonder how that would translate into another sign language? For example, French and English speakers have different gestures to represent the same meaning. I wonder whether this difference would also be present between French Sign Language and American Sign Language?

When looking at regular rule morphemes (ie: "-ed"), how do we understand that even though there are various pronunciations of this morpheme, they all have the same meaning? I wonder if it has to do with our categorization (as mentioned in the lectures last week). The sounds are not different enough that we would categorize them separately. Or maybe it's due to our reading awareness: we know it is spelled "-ed" in all cases, therefore we categorize it similarly. But what about pre-literate or illiterate speakers? Another suggestion might be that they use context to understand the grammatical class of the word and thus recognize its morpheme as belonging to that class. Therefore, even though "walked" has the /t/ sound, in a sentence such as "He walked to the store yesterday," one can understand the meaning of the morpheme based on context. On the other hand, my experience with children has shown me that when they spell, they may not understand the rule this way. If asked to write that sentence, a child may write "walkt" because of its sound, rather than from an understanding that "-ed" is a rule that should be applied.

I was also interested in the cross-linguistic data. The Athabaskan data showed that one word can be a combination of morphemes expressing different meanings. I thought it was interesting that one word represented a whole English sentence. As I am currently writing the chapter on development, I was curious about how Athabaskan children might learn to produce language. For example, I know that in English, depending on the child's personality, some children may use language more socially to engage others, with phrases such as "bye-bye" and "come here", whereas other children might use language for labeling and understanding their surroundings (ie: "ball", "doll", "juice", etc.). I wonder if we would see a pattern where children learning Athabaskan use language more socially, since one word contains a sentence's worth of meaning. Also, would we see these children skip morphemes they haven't learned yet, for example, missing the "s" in "niloston"?

Speech Perception 2,3 and Reading Lectures - January 24th - 28th, 2011
I enjoyed learning about the McGurk effect. In everyday life, we don't usually see a dissociation between the visual perception of a word (produced by someone else) and the sound we hear. I thought it was interesting that our visual perception had such a large influence on the sound heard. Some students thought it sounded like "gaga" and others thought it sounded like "dada." Could this possibly show to what extent their visual perception is affecting the sound? I believe the man in the video was visually producing a "g" sound (correct me if I'm wrong). Therefore it may be possible that those who "think" they hear "gaga" are focusing more on the visual than the auditory. Those who think they hear "dada" may be using a greater combination of the auditory "baba" and the visual "gaga." I wonder how this effect would look with infants, who may be less familiar with the mouth formations of words. Maybe it would help to use a preferential looking paradigm: present the infant with visuals of "fafa" and "baba" and the auditory "baba". Would the infant look longer at the image that clashes with the auditory sound?

We also did a phonemic restoration example in class, where we treated a gap differently than white noise. For example, when there was a gap, we heard "water pitcher," and when there was white noise, we heard "legislature." We are using our top-down knowledge of words we already know. I wonder how this effect would work in another language? If someone who does not speak French is presented with a French word containing either a gap or white noise, would they hear the two versions similarly or differently? I think they might perceive the sounds as similar, because they wouldn't have top-down knowledge of that language's sound combinations and words. Perhaps they would perceive the sounds more accurately, based on their bottom-up processing.

Language and the Brain + Speech Perception - January 17th - 21st, 2011
Although it was only briefly mentioned in class, I was intrigued by the representation of language in the brains of bilinguals. As a bilingual, I was curious about how French and English might be represented in my brain. The textbook (Jay, 2003) suggested that overall there is greater activation for maternal languages. Furthermore, foreign languages learned in adulthood are represented in a different location of the brain, whereas foreign languages learned at a young age show corresponding activation of Broca's area for both languages. The textbook noted, however, that Wernicke's area showed no distinction in activation whether the foreign language was acquired early on or as an adult. This was surprising to me, as it seemed logical that a child would have overlapping activation for both languages because a child's brain is more malleable and thus able to easily incorporate both. Why might there be no differences in the activation of Wernicke's area? It is known that the role of Wernicke's area is often focused on comprehension. I think that early and late learners of a foreign language might reach similar comprehension of the foreign language, and this similarity would correspond with their similar activation patterns in Wernicke's area.

As I was attempting to make sounds from the IPA chart, I realized how hard it is to produce (articulate) and distinguish between some sounds. This may be because they are not part of our maternal language, or because they are usually found in combination with other sounds. For example, I found it very hard to produce retroflex sounds and to distinguish between G and g. Why is this? Every human has the same physical apparatus, yet it seems that learning new sounds is very difficult. Maybe practice and exposure can improve production. For example, children can easily learn a new language and achieve fluent speech. On the other hand, my grandmother learned English at the age of 18 and still has a strong accent. Even children may struggle with sounds that are similar, such as "θ" and "f". This may provide further evidence for the critical period hypothesis. I also wondered why some phonemes do not occur together. Is it because of the place of articulation? Maybe the transitions between certain places of articulation are easier than others. Moreover, why does spelling not match the sounds? For example, ph = /f/. This makes written language much more difficult to master. I wonder whether other languages have such differences between sound and symbol, or whether some languages might have greater coherence between the two?

Language and Thought Lecture - January 12th, 2011
In class, we discussed whether language is necessary for thought. Can we think if we don't have language? It immediately reminded me of the movie "L'Enfant sauvage" (1970). It's based on the true story of Victor, who was found around the age of 12, alone in the woods, with no language. He was eventually taught how to live among others and that words have meaning. However, he produced little speech, and the speech he did have was focused on the present (what was in front of him and what he desired). I wonder what his thought patterns were like. Was he able to think the same kinds of abstract thoughts that we learn to think as we age, or were his thought patterns more focused on current events (like his speech pattern)? Then I wondered more generally: do our speech patterns reflect how we think or what we think about? How would this manifest itself in ASL? I have little experience with this language, but I wonder whether those who sign as their primary means of communication think in the same ASL structure? I learned in a previous class that deaf students often have difficulty with writing achievement in school. Maybe this is because they think in an ASL pattern and have to convert it to a written language that mostly follows an oral language structure?

I was also intrigued by the ideas of linguistic relativism and linguistic determinism. I think it needs to be emphasized that just because there are differences in categorization doesn't mean there is a difference in perception. For example, we all perceive a tree the same way regardless of whether it's called "tree" or "arbre" (French). Perhaps what's happening in Heider's (1972) experiment is that the people of New Guinea are not experienced in sorting colors. Our culture requires that we learn colors from an early age and distinguish between them. Maybe the New Guinea culture doesn't emphasize the importance of such distinctions, and thus they would not see the necessity of having separate labels for colors. This would impact their success on such tasks.

January 15th, 2011
I just had another thought as I've been writing an essay for grad school today. I often have thoughts and ideas but difficulty conveying what I am thinking in words. Other times, we can see something but have difficulty naming it. I think this suggests that we can think about things without having language.