
Psycholinguistics

THE AUTHOR

Bengisu Gonul, a third-year Psychology and French student at Dalhousie University.

January 10-14, 2011
INTRODUCTION TO PSYCHOLINGUISTICS

At the beginning of our class lectures this week, we were introduced to language and asked to describe what language means to each of us. We all agreed that language is a way to communicate with people. It is productive and generative; new words enter our vocabulary every day. Language is symbolic and has an extensive set of rules that its speakers agree upon. Last but not least, it is a significant part of our culture. These were some of the examples we came up with, and I am certain there will be many more to consider as we explore the use of language and its depth. We also touched on many other facts about language, including its complexity (levels of structure such as sounds and words, described by the phonology and morphology of a language), as well as its history and the major theories that make it so unique to humankind.

Out of all the information we acquired from those lectures, what interested me most was the different components of language and their parallels in sign language. I learned that sign languages are just as complex as spoken languages, and that there is no universal sign language: sign languages differ from one country to another.

Coming from a background of three languages, my native Turkish followed by English and French, I can say that almost every word in each vocabulary represents the same underlying concept, just expressed in its own language. Language is tied to ideas and concepts formed in the mind; it is conceptual. If we say 'horse' in English, an image of a horse comes to mind. The same happens in French with 'cheval' and in Turkish with 'at'.

So my question is: if a universal sign language were created for everyone in the world to use as an alternative, would it benefit humankind, deaf or hearing, by making it easy to communicate with people of every nation?

We human beings are capable of thought, and language shapes how we organize our thoughts (how we think). If language did not exist, would we not be making signs with our bodies to communicate? Why, then, could we not create a whole new alternative sign language, inventing signs for each existing idea and concept and incorporating distinctive bits of each nation's language and culture? A universal sign language might be limiting as a way of expressing thought, but does the language we speak right now not have its own limitations anyway, where some ideas in one language can never be accurately articulated, like explaining what a Keith's beer tastes like?

January 17-21, 2011
LANGUAGE AND THE BRAIN

This week’s lectures got me interested in learning more about lateralization of the brain. The human brain is separated into two hemispheres, left and right, by the longitudinal fissure. Despite their similarities, their functions differ significantly when it comes to language. It has been found that core aspects of language such as grammar and vocabulary are lateralized to the left hemisphere, whereas prosody, the organization of discourse, emotion, and metaphor (sense of humour, irony, sarcasm, etc.) are lateralized to the right hemisphere. Given the functions of both hemispheres, damage to the left hemisphere would generally be more severe for language, so it evidently plays the dominant role. Sex and handedness are two factors associated with where language functions are located in the brain: lateralization tends to differ between right- and left-handers, and between males and females.

My question: is hand dominance always directly related to brain dominance? The majority of people in the world are right-handed, and since most of them lateralize language to the left hemisphere, they may have an advantage in acquiring and exercising language skills. But if the majority is right-handed, what about the minority who are left-handed, or equally skilled with both hands ('ambidextrous')? Left-handed people tend to have a more even distribution of language across both hemispheres, and I assume the same is true of ambidextrous people. In that case, what I wonder most is whether it is better to be right-brained or left-brained. I started researching whether hand preference arises by nature or nurture, and found that left- and right-handedness appears to be determined in the womb, which can be monitored by observing which hand the fetus holds close to its mouth (Roja, 2010). If we settle on our dominant hand naturally while still in our mother's womb, could this mean that hemispheric dominance is then determined by that hand preference? What about ambidexterity? Can we say it is nurture rather than nature, since a dominant hand is eventually chosen for each person even if they can use both hands equally throughout development? I have been left-handed ever since I was a baby, I have never tried to train my right hand, and I find myself talented at learning languages. The common claim is that if you are born right-handed you will rely mostly on your left hemisphere, which advantages you in language production and comprehension, and that if you are born left-handed you will use your right hemisphere more while still using your left hemisphere to some degree.

Are there any ways of determining hemispheric dominance in a person? I really want to know how I learned and maintained three languages fluently, whereas most of my right-handed friends have a hard time with their native language, English, let alone a second or third one.

January 24-28, 2011
PHONOLOGICAL AWARENESS

I would like to start this blog by saying how interested I am becoming in this class each week as we explore different subjects. I was especially glad to be in Friday’s lecture (not that I have missed any classes) when we discussed phonological awareness: the awareness of, and ability to manipulate, the sound structure of a language. When Dr. Newman wrote the example “ghoti” on the board and asked us how to pronounce it, the pronunciation in my head sounded different from that of the classmate who volunteered to read it aloud. I then began to think consciously about phonological awareness in Turkish, and how it should be much easier than in English, since Turkish requires only the first and last of the four steps of acquiring phonological awareness. Because Turkish is written exactly as it is pronounced and consonant sounds never change, the onset step is not needed. The vowels also sound the same in every word in which they appear, so there are no differences in rime and no need for that step either. For example, if I were to say the word 'cat' and the name 'Kate' in Turkish, the vowel ‘a’ would sound the same in both; yet the ‘c’ and the ‘k’ never sound the same, no matter which vowel stands beside them, because each has its own unchanging sound throughout the entire Turkish language.
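
The 'ghoti' joke works because English grapheme-to-phoneme mappings depend on context: 'gh' as in 'enough', 'o' as in 'women', 'ti' as in 'nation'. A toy sketch of that borrowing (the mapping table and greedy matcher are my own illustration, not anything from the lecture):

```python
# Toy illustration of why "ghoti" can be read as "fish":
# each grapheme borrows a pronunciation it has in some other English word.
borrowed_sounds = {
    "gh": ("f", "as in 'enough'"),
    "o":  ("i", "as in 'women'"),
    "ti": ("sh", "as in 'nation'"),
}

def respell(word, graphemes):
    """Greedily match graphemes left to right and return the borrowed sounds."""
    sounds, i = [], 0
    while i < len(word):
        # try the longest grapheme first (e.g. "gh" before "g")
        for g in sorted(graphemes, key=len, reverse=True):
            if word.startswith(g, i):
                sounds.append(graphemes[g][0])
                i += len(g)
                break
        else:
            sounds.append(word[i])  # no mapping: keep the letter as-is
            i += 1
    return "".join(sounds)

print(respell("ghoti", borrowed_sounds))  # -> fish
```

Turkish would need no such table: each letter maps to one unchanging sound, which is exactly why phonological awareness should come more easily there.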

I hypothesized, before seeing the upcoming slides, that Turkish would be a much easier language to learn and read, and the following slides confirmed it. We also discussed deaf readers and how they may have no experience with phonology at all. Based on our class discussions and research, I will put forward my belief and findings: it is easier to learn to read Turkish than English, since there are fewer steps to go through, the letters in my language each have a single, unchanging sound, and they do not change the sounds of the letters beside them.

January 31-February 4, 2011
MORPHOLOGY

This week, due to the weather conditions, we only had one class, on Monday, and we talked about morphology. As we learned in lecture, a morpheme is the smallest unit of linguistic meaning or function (Newman, 2011), and morphology is the study of how morphemes combine into words. There are several morphological typologies representing features that change from one language to another: analytic (isolating), agglutinative, synthetic, polysynthetic, and infixing. In isolating languages, there is a low morpheme-to-word ratio, meaning that morphemes roughly equal words (a 1:1 ratio). In agglutinative languages, words contain several morphemes that can be cleanly segmented from one another, and each morpheme carries only one grammatical meaning. Synthetic languages, by contrast, can fuse several grammatical meanings into a single morpheme. Polysynthetic languages, opposed to analytic ones, have a very high morpheme-to-word ratio, so that a single word can amount to a whole sentence. Lastly, in infixing languages an affix is inserted inside the base. Further research shows that analytic languages include Chinese and (moderately) English, whereas Russian is an example of a synthetic language. As I was searching for agglutinative languages, I realized that my native language, Turkish, is one. For polysynthetic languages, the Amerindian languages are an example. I can definitely see why Turkish is considered agglutinative, since its words are formed by productive affixation of both derivational and inflectional suffixes to root words (e.g., cat-cats follows an inflectional rule, whereas derivation forms new words, like catfish).
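
Turkish agglutination can be made concrete with the stock textbook word 'evlerimizden' ('from our houses'), where each suffix contributes exactly one grammatical meaning. A minimal sketch (the three-suffix inventory is deliberately tiny and purely illustrative):

```python
# Segmenting one agglutinative Turkish word into its morphemes.
# ev-ler-imiz-den = house-PLURAL-our-FROM  ("from our houses")
suffixes = {
    "ler": "plural",
    "imiz": "1pl possessive ('our')",
    "den": "ablative ('from')",
}

def segment(word, root):
    """Strip the root, then peel known suffixes off left to right."""
    assert word.startswith(root)
    morphemes = [(root, "root")]
    rest = word[len(root):]
    while rest:
        for s, gloss in suffixes.items():
            if rest.startswith(s):
                morphemes.append((s, gloss))
                rest = rest[len(s):]
                break
        else:
            raise ValueError(f"unknown suffix in {rest!r}")
    return morphemes

for m, gloss in segment("evlerimizden", "ev"):
    print(f"{m:5s} {gloss}")
```

One word, four morphemes: a 4:1 morpheme-to-word ratio, compared with the roughly 1:1 ratio of an isolating language.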

Considering all these possible typologies, I realized how different each language's simplest structures and content can be, and I am not sure I can tell you exactly why that is. These typologies, however, could complicate learning a new language for someone unfamiliar with a type not used in their own language. Since I know English, Turkish, and French, I am familiar with an agglutinative language, a synthetic one (French), and an analytic one (English is considered moderately analytic). It is already frustrating at times to keep the morphological rules of these three languages apart. My question, then, is whether, if I wanted to learn another language, it would be easier to learn one belonging to one of these familiar typologies, since an unfamiliar morphological typology would probably make things even more complicated. I have always wanted to learn either Chinese or Japanese. Chinese is an analytic language, whereas Japanese is agglutinative, so neither structure is new relative to the languages I already know. Therefore my question becomes: which language would be easier for me to learn, Chinese or Japanese, considering that Turkish is my native language and I have been speaking English for six years?

February 07-11, 2011
LEXICAL ACCESS

This week in class we were introduced to lexical access, which could also be called the 'mental dictionary'. Tapping semantic knowledge refers to the study of meaning in language and of changes in meaning; it concerns the meanings of individual words, sentences, and longer texts. There are eight different methods for tapping semantic knowledge. Word association is one: people say aloud whatever immediately comes to mind, which reveals the strength of associations between words. There is also the lexical decision task, easily implemented on a computer: one button if the string is a real word, another if it is not, with the reaction time for real words being the measure of interest. A priming example: if an image of a donkey is presented before the word 'ass', you are more likely to jump straight to the 'donkey' meaning rather than 'bottom' or 'jerk'. Eye tracking is another method: people's eye movements are recorded while they read a paragraph, showing which words they fixate on most. In self-paced reading, the participant presses the space bar to advance to the next word, and the time spent reading each word before the press is measured. Semantic judgements, neuroimaging, and naming are further methods. These are all universal techniques, used across the world's languages to illustrate the relationship between words and meanings. Since they all require fairly advanced language skills, what methods of assessing semantic knowledge are possible for infants, who have not yet developed language? Through research, I found that gesture can be both a source of semantic knowledge and an expression of that knowledge. Gesture provides a window onto evolving semantic representations and can therefore be a way of assessing what a child knows at a time when oral language skills are limited and perhaps an unreliable indicator of what the child knows.
Given the debate over whether gestures are learned rather than innate, how early can a child learn gestures compared to spoken language? I ask because oral language skills are limited at that age, not because they are an unreliable indicator of what the child knows. I think infants have semantic knowledge close to age one, but since they cannot yet fully produce speech, they use gestures, and that is not necessarily because they do not know the semantic meanings of things.
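
The lexical decision task described above boils down to recording a yes/no button press and a reaction time per trial. A minimal sketch of the bookkeeping side (the trial data and millisecond values are invented for illustration):

```python
import statistics

# One record per trial of a lexical decision task: the string shown,
# whether it is a real word, the button pressed, and the reaction time.
trials = [
    {"stimulus": "horse", "is_word": True,  "response": True,  "rt_ms": 512},
    {"stimulus": "blick", "is_word": False, "response": False, "rt_ms": 640},
    {"stimulus": "table", "is_word": True,  "response": True,  "rt_ms": 498},
    {"stimulus": "dlock", "is_word": False, "response": True,  "rt_ms": 701},  # an error
]

def mean_rt(trials, is_word):
    """Mean reaction time over correct responses to (non)word trials."""
    rts = [t["rt_ms"] for t in trials
           if t["is_word"] == is_word and t["response"] == t["is_word"]]
    return statistics.mean(rts)

print(f"real words: {mean_rt(trials, True):.0f} ms")
print(f"non-words:  {mean_rt(trials, False):.0f} ms")
```

Real words are typically recognized faster than non-words are rejected, which is why the real-word reaction time is the measure of interest.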

February 14-18, 2011
SYNTACTIC COMPLEXITY OR WORKING MEMORY

In this week’s lectures, we learned that working memory plays a central role in all forms of complex thinking, and syntactic complexity is a great example of one such form. We examine how human cognitive capacity accommodates, or fails to accommodate, the complex demands that occur in language, and we also examine people's performance on particular regions of sentences. For example, when a person reads a sentence, he or she spends more time on the main verb and on the last word of a subordinate clause than on other parts of the sentence (Newman, 2011). Readers also take longer to read object relatives than subject relatives (2011). This is because verbs are grammatically related to a large number of other constituents, such as subjects and direct objects, so the reader has to spend more time evaluating them.

A research study by Kemper et al. (1992) examined the relationship between working memory and reading comprehension. The study comprised two experiments with both college students, 18 to 26 years of age, and older adults, 60 to 92 years of age. There were tests of working memory, a standard timed reading comprehension test, and a reading test designed to explore how syntactic complexity affects comprehension. In Experiment 1, the older group showed declines in working memory and reading comprehension. In Experiment 2, by contrast, the older group declined in reading rate but not in comprehension. The results therefore suggested that working memory limitations affect older adults’ ability to process complex syntactic constructions, lowering comprehension in the timed test (Experiment 1) and reducing reading rates in the untimed test (Experiment 2) (1992). This was great evidence for the effects of working memory on reading comprehension across age groups, obtained by contrasting two sources of syntactic complexity. The study also shows that syntactic complexity can arise from the processing of left-branching (LB) sentences: LB sentences are difficult to understand because an embedded clause forms part of the main-clause subject and precedes the main-clause predicate (1992). Given this, we might expect an adult's reading comprehension to be enhanced by eliminating LB sentences and lowering syntactic complexity. The study, however, found that even when these obstacles are removed, other components of reading comprehension remain, such as those involved in analyzing discourse structure and retaining anaphoric and propositional information, which are also affected by aging (1992).

After reading about this study, I started wondering how different the results would be if it compared native readers with ESL (English as a Second Language) readers. I did some research and found that for language learners, cognitively complex tasks affect the complexity and fluency of their reading and speech (Iglesias, 2008). As the complexity of the task increases, learners prioritise either complexity or fluency in order to read and communicate effectively (2008). If there is a significant difference in language comprehension between native speakers and learners, should we expect, from a cognitive point of view, that the reading task would be harder and take much longer for learners than it did for the native speakers in the study above? And what about age differences among ESL learners? Kemper's study had limitations, and it suggested other manipulations designed to facilitate semantic and pragmatic analyses. What about language learners? What could facilitate their performance on these tasks? I can see much better now why my mother is still struggling to fully understand, speak, and read English. At 50 years old, with no previous experience of the language, she has been taking classes, working with multiple tutors, and volunteering in English-speaking environments for five years, hoping to improve. She is doing great for herself, but compared to younger learners who also began five years ago, her progress is nowhere near their accomplishments.

February 28-March 4, 2011
SPEECH PRODUCTION

In this week’s lecture we began to talk about speech production and how we get from thought to speech. Before going into speech production proper, we were introduced to two research traditions: speech errors and the lexical bias effect. We were told that examining speech errors is very easy and requires no technology to see when, why, and in what ways people make errors in speech. One study found that people were less likely to make an error when it would result in a non-word than when it would result in a real word, so these errors operate at a lexical level as opposed to a purely phonological level. This is the 'lexical bias effect': phonological speech errors are biased toward producing true words rather than strings of changed phonemes (non-words). In another study, researchers performed random phonological exchanges: they took a set of English words and exchanged phonemes between them, and found that about a third of the results were other real English words, which is a high number. In various recorded studies, the proportion of people's actual speech errors that turned out to be real English words was higher still (40 to 45 percent), so there is a lexical bias effect in speech production. I immediately wondered whether there is a lexical bias effect in second-language production, so I did some research. In a study by Costa et al. (2006), I found that the lexical bias effect (LBE) reveals interactivity between the phonological and lexical levels of representation. These effects suggest that there is feedback in second-language production and that it extends across the two languages of a bilingual. Costa et al. tested for the LBE in second-language production by asking highly proficient Catalan–Spanish bilinguals to perform a SLIP task in their L2 (Spanish).
The second experiment tested for the LBE across the two languages of a bilingual performing the SLIP task in Spanish. Spanish–Catalan bilinguals made more errors when the resulting error was a word in Catalan than when it was a non-word. My question is: what would happen if the person were tested with words from only one language? Would the cross-language LBE be reduced? For example, would the production of Catalan-word errors become comparable to that of non-word errors if the experiment contained only Spanish words? I know it would be impossible to conduct such an experiment with the present combination of languages, but I still wonder about the difference between the lexical bias effect within one language and the lexical bias effect across the languages of a bilingual.
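
The random phoneme-exchange study mentioned earlier can be mimicked crudely with a spoonerism generator: swap the initial consonant clusters of word pairs and check the results against a word list. A rough sketch (the eight-word lexicon is an invented sample, so any proportion it yields is meaningless on its own):

```python
def swap_onsets(w1, w2):
    """Exchange the initial consonant clusters (onsets) of two words."""
    def split(w):
        i = 0
        while i < len(w) and w[i] not in "aeiou":
            i += 1
        return w[:i], w[i:]
    o1, r1 = split(w1)
    o2, r2 = split(w2)
    return o2 + r1, o1 + r2

# A tiny stand-in lexicon; the real studies used large English word lists.
lexicon = {"barn", "door", "darn", "bore", "big", "feet", "fig", "beet"}

for a, b in [("barn", "door"), ("big", "feet")]:
    x, y = swap_onsets(a, b)
    real = [w for w in (x, y) if w in lexicon]
    print(f"{a}/{b} -> {x}/{y}  real words: {real}")
```

With a full English lexicon, roughly a third of random exchanges come out as real words, which is what makes the 40 to 45 percent rate among actual speech errors evidence of a lexical bias.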

Later in the lecture, we learned more about how we access semantics before phonology when generating words: we initially access the meaning, and phonology comes afterwards. Another study shown in class looked at facilitation and inhibition, which respectively speed up or slow down naming times in phonological priming. In the experiment, presenting a distractor word before an object is named produces semantic interference, whereas presenting a picture first produces semantic facilitation. The idea is that by seeing a picture you access semantics, activation spreads to related concepts, and this facilitates retrieving the object's name; when you see a word instead, you are biased by its phonology, you want to say that word, and it interferes with naming the object (Lecture, Newman, 2011). There is also phonological facilitation whether the distractor word comes simultaneously with, before, or after the object to be named. In the early stage of semantic activation, it is important to present the prime early enough that the concept activates related concepts and speeds you up; if semantically related words arrive too close together, the result is inhibition rather than activation. We also considered syntactic and phonological information and asked which we access first, or whether we access both at the same time. This relates to motor responses in the brain, which are lateralized: when a person responds with one side of the body, the opposite hemisphere is active. In one study, for example, whichever hand participants responded with, the other side of the brain was working.
The study also found that you always access syntactic information first, once you have made a decision, and only then phonological information, never the other way around. I thought this lecture was crucial for understanding speech production, showing how the process unfolds in the brain as we think and produce speech. I have always been curious about people who know many languages, not merely bilinguals or trilinguals but multilinguals. I would love to examine a multilingual person’s brain activity as they perform these tasks in the various languages they know, to see whether activity differs from one language to another.

March 7-11, 2011
TIP OF THE TONGUE PHENOMENON

In this week’s lecture, we were introduced to a phenomenon called ‘tip of the tongue’. It occasionally happens to everyone: difficulty retrieving a word you know. I see it in roughly equal amounts among my friends from Canada (monolinguals) and my friends from other countries (bilinguals) when they speak. I was not 100% sure, however, that the rates would really be equal between monolinguals and bilinguals, so I decided to research it. I found a study that gave me answers but also raised more questions. The experiment was conducted with twenty-two American Sign Language (ASL)–English bilinguals, twenty-two English monolinguals, and eleven Spanish–English bilinguals, who each named fifty-two pictures in English. The study found that bilinguals reported more tip-of-the-tongue (TOT) failures than monolinguals. Three accounts of this disadvantage are that bilinguals experience between-language interference at the semantic level, that they experience it at the phonological level, or that they simply use each language less frequently than monolinguals. The results showed that the increased TOT rates associated with bilingualism cannot be attributed exclusively to competition between phonological forms: bilinguals with no possibility of phonological competition between their languages also experienced more TOTs than monolinguals.

My question about the tip-of-the-tongue phenomenon is whether it is memory related or language related. I get confused when I think about my own experiences with speech errors: I make more errors and face the TOT phenomenon more often in my native language than in the two other languages I speak. I have spoken my native language since birth and only recently became fluent in English and French, taking an extensive number of courses and, of course, moving to Canada, a country both Anglophone and Francophone. The study argues that since there was no phonological overlap between the languages tested, reduced frequency of use is the more comprehensive explanation of TOT rates in bilinguals. I agree with this, since I now speak my native language the least, having no opportunities to use it outside my home, while I have been learning and speaking English for years and am almost as confident in it as native speakers. So I attribute my two different TOT rates to the frequency with which I use these languages. The study also suggests that all speakers who divide their use between two languages, whether signed or spoken, experience increased TOT rates, which fits my situation, since I speak three languages on a day-to-day basis. How is it possible, then, to make such errors in your native language rather than the learned ones? Does it really come down to memory and frequency of use, or to semantic and phonological differences between the languages? I think more research could be done on this underexplored phenomenon.

March 14-18, 2011
DEVELOPMENT OF SPEECH PERCEPTION

I am very happy about Wednesday’s lecture, in which we started talking about the development of speech perception. It is the topic I picked for my chapter, because I am really interested in how children learn to produce language, how the process happens, and how long it takes. I also edited the development of speech production chapter in order to understand children’s abilities thoroughly, not only in perceiving speech but also in producing it later in development. One of the main reasons I want to know so much about this part of development is that I am a Turkish girl living in an Anglophone city and currently dating a Lebanese guy who speaks Arabic. Hypothetically speaking, if I ever decided to have a child with him, how feasible would it be for our newborn to learn so many languages separately without getting confused and overwhelmed? I have always worried about this, and I cannot say for sure whether children can discriminate between several different languages that could all be regarded as native. On the other hand, if he or she encountered only one language in the womb and right after birth, my question becomes: up to what age, and in how many languages, can a child learn to speak without an accent and with fluent grammar, almost as if those too were native languages? We also learned this week that infants can distinguish speech sounds generally, but that as they get older (around 12 months) their perception of foreign-language consonant contrasts declines while their perception of native-language contrasts sharpens. How would this finding apply when a child is exposed to several native languages at once?

I have done a lot of research on infant speech perception for my chapter and found many interesting facts about infants’ ability to respond to speech-like sounds and discriminate between them. I also found three procedures used to test speech perception in young infants: high-amplitude sucking, head turn, and visual habituation, each suited to different age groups. Sucking is an ideal technique since it is one of the few activities over which infants have good motor control, so it is easily conditioned. Visual habituation is more sensitive than the other techniques for revealing infants’ discrimination capabilities, and it can also test many aspects of visual perception. Lastly, the head-turn technique tests habituation simply by recording how an infant reacts to new stimuli. We learned in lecture that these techniques all prove effective with healthy infants. It makes me wonder, though, whether older infants, or infants with deficits in language acquisition, would require different techniques to determine their speech perception and production abilities. I would like to find out whether any developmentalists have invented techniques specifically for infants with deficits in language acquisition.

March 21-25, 2011
BILINGUALISM AND APHASIA

This is the last week of Dr. Newman’s lectures, and we touched on incredibly interesting topics; I hardly know where to begin with all of my thoughts about the new things I learned this week. We initially talked about bilingualism and the ability to learn more than one language early or later in life. We also asked whether there is a disadvantage to being bilingual, since two different languages in one home might confuse children if neither is used consistently, but there is very little evidence for that. Evidence suggests that at early ages, kids exposed to more than one language may be delayed in uttering their first words, but certainly by grade 5 there is no disadvantage to having learned multiple languages. We also talked about languages being learned in different ways, associated with different brain activity for each language. I found a 1997 study by Kim et al., who used fMRI to examine cortical activation in a range of bilinguals proficient in a number of languages. They divided the participant pool into two groups: early bilinguals (L2 acquisition before the age of five) and late bilinguals (after the age of twelve). The results suggested anatomical variation between the groups: early bilinguals showed overlapping activation in both Broca's and Wernicke's areas during a silent sentence-generation task, whereas late bilinguals showed common activation in Wernicke's area but not in Broca's area (Dave, 2007). These results suggest that the "critical period" cannot be ignored, and that a language acquired after a certain age is represented differently in the brain. My question in class concerned older learners: when they learn another language, does their brain function change in a way that differs from younger bilinguals?

In the next lecture, we learned about aphasia and its negative influence on people's lives. As we watched videos of people with expressive (non-fluent), receptive (fluent), and global (both expressive and receptive difficulty) aphasia, I could not stop thinking about what it would be like if these people were bilingual. Since most people in the world know more than one language, bilingual aphasia should be an important line of research for psycholinguistics. I have learned that aphasia has a devastating impact on people’s lives, affecting people from as young as 18 up to 78 years old. It brings problems such as being unable to get a message across, difficulty naming things and therefore substituting one word for another, trouble with numbers, reading, and writing, and overall frustration. Considering all of these difficulties, I cannot imagine how challenging it would be to treat bilingual aphasics as opposed to monolingual ones. I question whether our brain really maintains separate stores for each language we know, and if it does, how we are able to switch between languages and use them appropriately and without interference. My other question is whether one language can be affected by aphasia while another is spared. And if both languages are affected, which one should therapists focus on in treatment: the L1, the L2, or both?

MARCH 28 - APRIL 1, 2011
In the three years I have attended Dalhousie, I can say that this is the first time I have ever been asked to participate in a debate in a class. I am very thankful to Dr. Newman for creating a whole new course format this semester. By doing so he has encouraged us to integrate information from different parts of the course and reflect deeply on the content we learned each week, not only in our chapters but in our debates as well. The topics Dr. Newman picked for the debates are current issues that we can relate to what we have been learning in class this semester, which of course makes it more interesting to participate and therefore learn more.

Our debates started this week and there were some really interesting topics that I did not know about before. The first team that presented argued for and against the use of Ebonics as the primary language in the Oakland schools. I never knew that Ebonics is a recognized language, so that was a surprise for me to begin with. Being the first team, we also had to be more lenient with them, since they did not have the chance to view other presentations beforehand in order to improve their own. Overall, they did a good job of finding information. The side for the use of Ebonics argued that it should be used as the primary language in Oakland schools since it is the primary language there, and that if schools diminish the language by telling children it is wrong to speak it, this will lead to a loss of cultural identity. They believed that fewer children will drop out of school if they are not judged for what and how they speak. The competing side, on the other hand, was very intense and had very good points supporting its argument: that speaking only Ebonics may be very problematic for children in the long term and may create low self-esteem and therefore low success. I found the topic interesting because of how similar it is to the debate topic that we will be presenting next week.

Our topic is also about the use of minority languages in Canada, and our position is against having only the majority language (in this case English) taught to children aged 4 to 12 in schools. We were able to prepare our debate more effectively after viewing the other debates, especially the one about Ebonics, which really helped us become more prepared and state our position clearly. So far I am enjoying the idea of having these presentations, which makes them much more fun than typical oral presentations.

APRIL 4-8, 2011
I find it hard to believe that we are ending our studies this semester and writing our last blog posts. I have stated many times that I am an international student from Turkey and bilingual; however, I now consider myself trilingual. Therefore, I had great reasons to take this class and I am really glad I did. Just like other students, I found the evaluation format surprising at the beginning, and I could not stop wondering if it was really going to be a better way to learn the material compared to other psychology classes I have taken at Dalhousie University. Well, it surely was, Dr. Newman; I can assure you that I have enjoyed and learned so much in your class that my friends are now interested in taking it as an elective.

Before this class, I took Psycholinguistics in French and then decided to take it in English so I could compare and understand the differences, in many aspects, between the two official languages of Canada. I was lucky to do this because I got to learn not only about those two languages, but much more. Each lecture covered an interesting topic, and we had the chance to relate to each topic in our weekly blog posts and in our debates near the end of the semester. The blog posts we wrote each week were very helpful for keeping students on task, and they also gave us a chance to become creative thinkers and writers by doing research on all the questions that went unanswered during class. The Wikiversity chapter, on the other hand, was one of the most interesting projects I have done throughout my university years. I have to be honest: when it comes to being creative on the computer, I am the worst person to ask. I remember having a hard time trying to figure out its format, and I almost cried at some point, but that is all in the past now. I can say that I was really lucky to pick one of my favourite subjects, child development and speech perception, to write about, and therefore it was fun to do research on. There are a couple of issues, however, that should be considered to make the project easier for students (e.g., pictures were really hard to find on Wikimedia Commons for my topic, whereas for broader topics they were much easier to find). Lastly, I found the debates very helpful for teaching students how to do effective research and build an argument to support their side of the debate. My subject was again one of my interests (that minority children should continue learning their primary languages from ages 4 to 12, and that these languages should not be seen as interference), and I found many cultural aspects affecting it. It is usually challenging to do group presentations, but this time it was much more productive and we presented a really strong argument.

Overall, I really enjoyed participating in this class and I definitely support the idea of having a different format for this course. It really helped develop our writing abilities and our research skills. I want to thank not only Dr. Newman but also Sarah Dhooge for being really understanding about all the complications I had with my assignments. I really appreciate your help and your positive feedback. This class motivated me to become a better linguist in my personal life, and I will retain everything I learned for my further studies.