User:Joel Sardinha

Jan 5 - 7, 2011
The start of a new year and a new semester. This annual event brings a lot to look forward to, such as familiar peers in new classes and new evaluation strategies like those implemented in this psycholinguistics class. The amazing yet intimidating world of Wikiversity opens a new dimension of online learning and interaction to enhance our learning of psycholinguistics. Although I admit that the 50% weight attached to the chapter requirement in this course is terrifying, I am staying positive and hope to learn a great deal about the topic I have chosen to write about. I picked the topic of multilingualism because it greatly fascinates me and also applies to me personally: I am bilingual and have been exposed to multiple other languages throughout my life. The human capacity to retain and fluently use multiple languages without confusing lexicons and grammatical structures is quite impressive. Being bilingual, there are instances where certain words apply perfectly in one language but make no sense in another, and expressing a thought becomes quite frustrating. Through creating this Wikiversity chapter I hope all multilingual enthusiasts will learn something, and I also hope to learn a bit about myself.

Jan 10 - 14, 2011
I agree with the concept that language is an innate part of human nature and that we need language to survive and communicate with the people around us. However, I do not necessarily agree with the notion that we are unable to think without language. The main reason is that mental manipulation of the world around us does not necessarily require a language, since people can use visual representations or mental models. Linguistic relativity has some merit, but I would argue that advanced societies are linguistically evolving at a faster rate in terms of lexicon. As new technology develops we constantly have to create new words to describe it, and gradually, as mentioned in class, nouns get converted to verbs and so on in our sentences (e.g. Google, a noun, becomes "googled", a verb). What seems fairly straightforward, but which I had never really thought about before, is the coping mechanisms we use to communicate. When you put two cultures speaking different languages together, they somehow find a method of communicating, which makes me wonder: if all our cultures had been forced to interact early in language development, would we have developed a universal language, and would it be an amalgamation of all existing modern languages?

I found it quite heartwarming that certain aspects of our society try to instill positivity, such as the "Like" button on Facebook and the motivation behind creating Toki Pona. Prior to this lecture I was aware that different cultures may not agree on certain contrasts in colours, but the fact that some languages, like Setswana, make no distinction between colours such as blue and green totally surprised me. These speakers have the full visual capability to distinguish the colours, yet their language seems to group blue and green into the same category, which makes me wonder what distinctions the English language constrains us from making.

The lecture on brain anatomy was a good refresher on the techniques used in brain imaging. It was fairly straightforward material that I had been exposed to in previous classes, so I will not elaborate any further.

Jan 17 - 21, 2011
On the surface aphasia seems like a fairly straightforward deficit, but the different types and their specific patterns of damage were astounding. I agree with the idea that people with different aphasias cope with their deficit by using alternative routes through the language network to achieve their goal. The example that left me confused was when Dr. Newman mentioned that people with a specific deficit (I believe pure word mutism) were unable to answer back but were able to understand what was said. The reason given was that the connection between comprehension and the production of speech was damaged, and to overcome this, people could write down their thoughts, read them, and then say them. My question, however, is that while reading, the person would still have to comprehend the sentence, so would they not consequently end up in the same damaged pathway as the initial comprehension?

Prosody across different languages is also a concept that made me ponder. Languages contrast greatly in the importance of prosody, from the boldness of Arabic to the melodic flow of Spanish and even the uniqueness of the African click languages. The organization of discourse made me wonder about Apache and the aboriginal languages discussed in previous classes. In these languages each single word is an expression of multiple concepts, so organizing the ideas and concepts of stories is a much more complicated task. What astounds me further is that storytelling is an essential part of these cultures for passing information on to future generations. I would not be surprised if these cultures had a significant advantage over others in tasks heavily reliant on discourse organization.

Sign language as a whole seems limited in terms of phonology and prosody due to its restriction to vision. Sign language seems hard to break down into phonemes because of its specific movements, and also in terms of phonological rules, because sign language does not seem to be an innate language in humans. Sign language is also restricted in prosody, because visual cues are much harder to convey than acoustic cues.

Jan 24 - 28, 2011
The lecture on acoustic phonetics was quite interesting, especially all the different techniques and experiments. The spectrogram is an intriguing transformation of acoustic information into visual information (I sketch a toy version after this entry). The pattern playback device is a unique method of testing human acoustic understanding, as it shows that we rely on a lot of supportive information alongside the acoustic signal. This made me wonder how congenitally deaf people learn to interact with other people. It can be hard for people from different countries to communicate in English, but congenitally deaf people can have an exceptional ability to understand through lip reading. I have three aunts who are congenitally deaf and have never learned any sign language, yet they are perfectly capable of communicating with any stranger through lip reading. Contextual variance also cannot apply to deaf people, since they cannot interpret changes in acoustic frequency. Considering how acoustically dependent English is, and how language affects brain development, I wonder if congenitally deaf people have the same brain development but transfer their dependence from acoustic cues to visual cues to obtain their information.

Categorical perception is another interesting area: all humans have the same acoustic hardware, but the language in which we develop can have a huge influence on how we hear and perceive different sounds. The second formant transition feature, where we hear discrete sounds even though they exist on a continuum, made me wonder whether cultures whose languages do not depend on the place of oral stops, such as the African 'click' languages, would perceive the continuum as continuous rather than discrete. The McGurk effect brought me back to the congenitally deaf idea, where either visual or acoustic information is restricted and the brain compensates for the lack of it. I would assume that congenitally deaf people would be far better at identifying the true stimulus, because they have been trained to identify minute visual cues.

It surprises me that some cultures never developed reading and, as a consequence, writing. The data on saccades and fixations on content words, while essentially skipping fillers, made me wonder if this trend also applies to blind people reading braille. Coincidentally, the data about the human perceptual span linked to a story I read earlier about how 3D media can be visually taxing and perhaps not worth pursuing: we can only focus on the small region of the fovea, whereas getting the full effect of a 3D movie requires us to focus on the entire screen, which is not humanly possible.
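To make the acoustic-to-visual idea concrete for myself, I tried a minimal Python sketch (the two-formant tone is my own toy signal, not data from the lecture). Windows of the waveform are Fourier-transformed and stacked as columns, which is the kind of conversion a spectrogram performs:

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Synthesize a hypothetical vowel-like tone: 0.5 s at 16 kHz with energy
# near 700 Hz and 1200 Hz, roughly where F1/F2 of an open vowel might sit.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
wave = np.sin(2 * np.pi * 700 * t) + 0.6 * np.sin(2 * np.pi * 1200 * t)

# Short-time Fourier analysis: each window becomes one column of the image,
# turning acoustic information into visual information.
f, times, Sxx = signal.spectrogram(wave, fs=fs, nperseg=256)

plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12))
plt.ylabel("Frequency (Hz)")
plt.xlabel("Time (s)")
plt.title("Toy spectrogram of a synthetic two-formant tone")
plt.show()
```

The two horizontal bands that appear are the stand-in formants; in real speech they bend during consonant-vowel transitions, which is the second formant transition cue discussed above.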

Jan 31 - Feb 04, 2011
I was quite surprised by the complexity and diversity of language even at the allomorph level, since there are differences within a single language that are agreed upon by societal norms (e.g. differences between American and Canadian English). The recursivity phenomenon gives the English language huge flexibility, as new words can be developed and meanings easily generated from them. I would assume this gives English speakers a plasticity advantage over languages such as Navajo or other aboriginal languages that require a string of concepts to create a word, so recursivity would be extremely limited within them. The diversity of morphological typology is also quite astonishing and raises a few questions. Since polysynthetic languages are a combination of agglutinative and synthetic, would it be easier for a native polysynthetic speaker to learn an agglutinative or a synthetic language? Also, since infixing languages require the brain to know the template word, the brain of a native speaker of an infixing language may develop around language quite differently than speakers of other morphological typologies. This has great impact on linguistic experimentation, as different languages cause different responses in different people, making controls extremely hard.

Feb 07 - Feb 11, 2011
It is unique how the English language has the capability of phrasal meanings, where a string of unrelated words makes up a whole new meaning. Phrasal meanings are subject to community support, therefore different meanings can arise in different populations speaking the same language, and this further increases the novelty of the English language, which is constantly adding new words and meanings. This concept of community generation also extends to the creation of multiple words for a similar meaning if the word is used frequently enough. Different cultures use words at different frequencies, so the necessity arises to generate more words with slightly altered meanings for different contexts. The word type dimension raises an interesting thought, because concrete words, rather than abstract words, activate a complex network of sensation in normal people. However, would the same activations be seen in congenitally blind or deaf people? I would assume their activations would be different, since their communication is heavily reliant on making the concrete abstract and manipulating those abstract concepts, as seen in ASL when describing people. Language register also brings up a few interesting thoughts, since we become more aware of our language when speaking to authority, and different languages even convey diverse meanings when spoken in different registers. It is astounding that, because of our discrete combinatorial system, English can create an infinite number of grammatically correct sentences and is only limited by the human mind (a toy demonstration follows this entry). It is also astonishing how the order of sentence components varies between languages, such as whether the head precedes the role players or vice versa. It was also mentioned in class that the learned language determines the specific parameter, so I was wondering if the L1 parameter would have any effect on the L2 parameter, and vice versa, in an early-acquisition bilingual. Also, would the L1 parameter prevent the learning of alternative parameters in an L2 learned later in life?
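To see for myself how a discrete combinatorial system produces unbounded output, here is a minimal Python sketch with a toy grammar of my own invention (nothing from the course): because the NP rule can contain a PP that itself contains an NP, a handful of discrete rules generates grammatical sentences with no upper bound on length.

```python
import random

# Hypothetical toy grammar; "NP -> the N PP" and "PP -> near NP" form the
# recursive loop that makes the number of possible sentences infinite.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "PP": [["near", "NP"]],
    "VP": [["sleeps"], ["sees", "NP"]],
    "N":  [["cat"], ["dog"], ["barn"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:  # terminal word: emit it
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part, rng)]

for _ in range(5):
    print(" ".join(generate()))
# e.g. "the dog near the barn near the cat sleeps"
```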

Feb 14 - Feb 18, 2011
It is quite fascinating how languages differ in their structure, such as head-first versus head-final languages. All languages arose from a need to communicate, and the different structures of languages are evidence of the capability of the human mind to comprehend both. However, in terms of multilingualism, would the brain be capable of comprehending both head-first and head-final languages simultaneously during early learning? Is it possible to learn the opposite structure after the first is already established as the native language? Garden path sentences are another fascinating exercise in linguistics. I would assume, however, that this phenomenon is not possible in Navajo or other aboriginal North American languages, as described in previous classes, where words are drawn together from a multitude of meanings, and therefore sentences could not be formed into a garden path. It seems astounding that while reading we are restricted to a single meaning, and only once we completely parse a sentence and figure out it is semantically wrong do we return to the beginning and look for alternatives. These sentences provide support for the idea that humans process information serially and incrementally: each word is processed individually, and expectations about the next words are formed based on the information already possessed (I sketch this with a toy grammar after this entry). It seems rather strange that the experiments of Hahne & Jescheniak with Jabberwocky sentences showed identical responses for syntactically correct and incorrect sentences, with only an early negativity difference for the incorrect ones. The difference between regular German sentences and nonsense but syntactically correct Jabberwocky was also very minimal, which could hint at a system within the human brain that is specific for detecting correct syntactic structure. The experiments of Stromswold, Caplan & Waters on syntactic transformation propose that the human brain uses the most convenient structure, which is most likely right-branching. However, there are instances where sentences are quite complex and the brain may adopt a centre-embedded structure, which could suggest that the brain can decide which structure to use depending on the conditions of different sentences.
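To convince myself that a garden path arises because the grammar licenses two analyses of the same prefix, I tried a small sketch in Python with NLTK (the toy grammar is my own construction, not anything presented in class). The prefix "the horse raced past the barn" parses as a complete sentence on its own, which is the reading an incremental processor commits to; only the full string forces the reduced-relative reading.

```python
import nltk

# Toy grammar: "raced" is either a main verb (VBD) or a passive participle
# (VBN) heading a reduced relative clause (RRC).
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N | NP RRC
RRC -> VBN PP
VP -> VBD | VBD PP
PP -> P NP
Det -> 'the'
N -> 'horse' | 'barn'
VBD -> 'raced' | 'fell'
VBN -> 'raced'
P -> 'past'
""")
parser = nltk.ChartParser(grammar)

# The prefix has only the main-verb parse; the full sentence has only the
# reduced-relative parse, so a serial reader must backtrack at "fell".
for sentence in ["the horse raced past the barn",
                 "the horse raced past the barn fell"]:
    print(sentence)
    for tree in parser.parse(sentence.split()):
        tree.pretty_print()
```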

Feb 28 - Mar 04, 2011
The language and music lecture brought up a lot of interesting ideas, especially in the similarities and differences section. I was aware that there are some similarities between language and music, but the extent of the overlap exposed during the lecture was completely astonishing. Language and music are similar in many aspects, ranging from medium to structure, meaning, organization, and even generativity. It seems almost as if music can be considered a language, which adds to the whole notion of music as the universal language. Further supporting this claim are the innateness of music in humans and our ability to distinguish correct from incorrect musical progressions early in life, without previous exposure. The extreme degree of similarity makes me wonder whether the deficits encountered with language also apply to music, including aphasia, developmental problems, and even cortical organization. In terms of aphasia, language is susceptible to a vast array of different aphasias, as discussed in previous lectures, so is music also susceptible to the same extent in individuals? In terms of developmental problems, there seems to be a critical period for successful acquisition of language, so I wonder if our ability to attain "native-like proficiency" in music is also restricted by a critical period. As far as cortical representation is concerned, the question arises whether a highly proficient musician activates cortical populations similar to those for his native language. We see activations in Broca's area in professionals while they listen to music, but would other music tasks analogous to language tasks activate similar neural populations? To extend the similarities further, I would assume that different genres in music could be paralleled with different languages, since different genres have different structures and organization, much as different languages do. The orthographic component, however, is identical between genres due to the universal standardization of musical notation.

Speech production is another fascinating topic, and the multitude of speech errors, especially slight errors that leave semantic and structural meaning intact, suggests that speech production has various levels of processing. The different layers could correspond to many different aspects of language, such as phonology, semantics, and syntax. It seems rather strange that there is no cross-linguistic lexical bias, and more astonishing that English has a much higher rate of errors than other languages like Spanish. It also seems counter-intuitive that syntax would precede phonology in a reaction task, because one would assume that the sound would have to be processed first in order to determine the correct reaction, before processing the sentence structure.

Mar 07 - Mar 11, 2011
The class on models of speech production was quite interesting, especially the tip-of-the-tongue phenomenon. It is quite amazing that this simple, common phenomenon provides evidence for multiple layers of linguistic access: only the lexical retrieval level is hindered, because we know everything else about the object but just can't retrieve the word. I wonder if this has some relation to Broca's aphasia, where words cannot be spoken but everything else about speech is fine. The progression of speech was also quite straightforward once it was laid out, starting with semantics, moving to syntax and morphology, and finally arriving at articulation. I thought Dell's model and its relation to substitution errors was quite interesting, especially because I notice myself producing these errors often in speech; I usually tend to substitute or combine words that are semantically similar and produce words that are either wrong in the context or completely odd (I try a toy version of this after this entry). In addition, the WEAVER model strengthens this account, because self-monitoring occurs at a conceptual level; speech is therefore more prone to whole-word substitutions than to the smaller errors that would be caught if monitoring occurred at other levels.

The lecture on gestures was also quite interesting and made me more aware of the gestures I produce while speaking. I found it rather intriguing that we anticipate speech from the gestures produced by a speaker. In terms of gesturing by congenitally deaf people, they would obviously be more reliant on gesturing to compensate for their hearing loss and to aid in communication. However, I was wondering if they produce any unconscious gesturing, since most of their communication is through sign language, which is conscious. I also found it odd that people still gesture when they know that the person they are communicating with cannot see them. I understand that gesturing has some benefits to the speaker as well, but what got me thinking was whether congenitally blind people also gesture. Since they cannot see and do not know the patterns seen by others, would they still gesture, and would it be similar in representation to normal gesturing? Also, is gesturing similar between congenitally blind people, or are there tremendous individual differences based on personal interpretations of objects and the subsequent gestures? This question ties into the different types of gestures, especially metaphoric gestures, because they are concrete representations of abstract concepts. It is also interesting that cultures differ in the amount of gesturing, which makes me wonder if high-gesturing cultures have less verbal communication, or less meaning in their sentences, such that they need to gesture more to compensate. The Krauss study, which restricted hand motion, also brings up a question about people who lost their hands early in development: would they also produce less speech output than a typical person?

It is quite interesting to note the logical progression from symbols to letters and the evolution of written language. Rebus writing seems like a logical start, and I wonder why it did not become the main template for written communication in different languages. It is also quite interesting that different cultures innovated their own writing systems while adding elements from previous ones.
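Since Dell's interactive activation account kept coming up, I sketched a toy version of the idea in Python (this is only my own construction, not Dell's published simulation): semantic features activate lexical candidates, shared phonemes feed activation back, and random noise occasionally lets a close competitor win, which surfaces as the whole-word substitution errors I catch myself making.

```python
import random

# Hypothetical toy lexicon: word -> (semantic features, phonemes).
LEXICON = {
    "cat": ({"animal", "pet", "small"}, ["k", "ae", "t"]),
    "dog": ({"animal", "pet", "loyal"}, ["d", "ao", "g"]),
    "mat": ({"object", "flat"}, ["m", "ae", "t"]),
}

def produce(target, noise=0.3, seed=None):
    """Pick the most active word for a target concept, with noisy activation."""
    rng = random.Random(seed)
    features, phonemes = LEXICON[target]
    activation = {}
    for word, (feats, phons) in LEXICON.items():
        act = len(features & feats)                    # semantic overlap
        act += 0.5 * len(set(phons) & set(phonemes))   # phonological feedback
        # Noise scales with activation, so near neighbours sometimes win:
        activation[word] = act + rng.gauss(0, noise * act)
    return max(activation, key=activation.get)

errors = sum(produce("cat", seed=i) != "cat" for i in range(1000))
print(f"whole-word substitution errors in 1000 attempts: {errors}")
```

On most runs a few percent of attempts come out as "dog" (a semantic substitution) or, much more rarely, "mat" (a phonological one), mirroring the mixed error pattern the model is meant to explain.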

Mar 14 - Mar 18, 2011
This week we discussed language development, which is a critical and interesting component of linguistics. The whole concept of a critical period of language acquisition, even in ASL, indicates the innate and essential role that language plays in normal functioning. This is further supported by the unfortunate case of Genie, who was never exposed to language during all her early years of development. Even the concept of pidgins and creoles is quite fascinating: communication occurs no matter the barriers, and a new language can develop from a need to communicate effectively. The fact that creoles are created by children when no structure is present to guide them indicates that as humans we have an innate need for a structured grammatical language, and as long as we are within the critical period we have the ability to create one. Even the creation of ISN by kids is a great testament to the human need for communication when no language is present. This situation also provided no barriers to the organisation of the language, and we see that ISN is much more segmented than traditional signing, indicating that children found this method easier and more efficient and therefore adopted it. The experiments by Kam & Newport also lend support to the idea that kids are better at extrapolating information from inconsistent data, since they are still in the learning phase and have not developed strict rules, making them much more flexible. The concept of sensitivity to prosody in infants, where a familiar language is preferred, raises questions about preference in congenitally deaf infants. Since they have no auditory exposure to language, would the first language they are visually exposed to be reinforced as the preferred one? In addition, do early-acquisition multilingual children show preference for one language, or are all languages equally preferred? Evidence that infants as early as 4 weeks old are capable of categorical perception indicates that this phenomenon is innate and that, depending on our language, certain categories get reinforced and maintained, whereas others degrade. This made me wonder about multilinguals: since they are exposed to more languages, would their categories be much more diverse if regularly maintained, and could this have benefits beyond simple discrimination? The perceptual magnet effect in children also raises some thoughts, such as whether this phenomenon is bound by a critical period and whether it could be seen, or even developed, in a child like Genie. I have always been amused and slightly perplexed by the motherese phenomenon, because it would seem counter-intuitive to reinforce speaking in a higher tone to children, especially because almost no normal adult language communicates in that fashion. However, the reinforcement benefits outweigh the potential harm of reinforcing a false tone of communication. Also, do we see any motherese strategies in ASL, and if not, are there any significant differences, such as learning time, between children acquiring the different languages?

Mar 21 - Mar 25, 2011
This week consisted of two classes on bilingualism and one guest lecture on aphasia. The topic of bilingualism has always fascinated me and also applies to my life. The fact that bilingual children follow the same developmental path as monolinguals, with no disadvantage, made me wonder if humans have an innate language capacity. The concept of metalinguistic awareness indicates that bilinguals are much better at word-switching tasks, especially bilinguals of languages with different branching, because they do not map words strongly onto meanings and are less prone to confusion. The whole critical period and age of acquisition of language is quite fascinating. The notion that learning a language later in life is much more difficult makes sense, because the native language will be more prominent and subsequent languages will use it as a template. Difficulty rises when the branching structure and organisation of the second language differ from the native language, and this supports the idea that the native language is used as a template. Acquisition early in life, within the proposed critical period, does not depend on a language template, therefore multiple languages can be acquired without detriment. This made me wonder if simultaneous bilinguals are much more susceptible to confusing or mixing up their languages, since they learned both languages at the same time. All the theories supporting the critical period make sense, but the one that stood out the most was the maladaptive gain theory. This theory takes the unique approach of stating that the smaller working memory of children is advantageous because it helps them retain only the relevant information in a sentence. This approach is further supported by how we teach language to children, stressing certain parts of a sentence to indicate importance and essentially ignoring unimportant parts. It is interesting to see that early bilinguals show neural overlap in the structures for both languages, whereas late bilinguals use separate cortical areas. This suggests that early bilinguals are more efficient because they use fewer neural populations. However, I wondered whether, with prolonged exposure and increased proficiency, the activation profiles would eventually overlap even if the languages were acquired late. Furthermore, are there activation differences between young adult and senior learners that would indicate whether language acquisition shows a steady decline or a sudden drop and plateau? What was interesting is that Wernicke's area showed similar overlap for both early and late bilinguals, which indicates that language comprehension and feedback are not affected by a critical period. The study by McLaughlin et al. (2004) was quite surprising in its finding that students were able to detect pseudowords even in the first session. I wonder if this indicates that some understanding of the rules and structures of foreign languages develops once a native language is established. This notion is further supported by studies that have shown implicit correction in the brain even without the subject's awareness. The videos of the different aphasics and the ways they responded to the same stimuli were quite interesting. The one question I wondered about with fluent aphasia was this: the damage is to the feedback loop, so people cannot regulate their speech, but why would this affect the speech of their own thoughts? Since the thoughts are their own, they would not need verbal feedback to regulate and subsequently alter their speech. Furthermore, would a fluent aphasic be able to write down their thoughts and then properly verbalize them, effectively overcoming the auditory feedback deficit?

Mar 28 - Apr 1, 2011
This week we started the class debates, which were quite interesting because of the hotly contested issues being presented. First was the debate about whether Ebonics should be the primary language of communication in school systems where a majority of the community speaks Ebonics. The main point from the group supporting this proposition was that many children have a difficult time understanding the language taught in class and therefore do not do as well. By integrating Ebonics into the classroom, these children will have a better opportunity to understand and a better education. They gave valid evidence to show that Ebonics can be considered a separate language, so learning English in school would be like learning an L2, which is extremely hard given that these children have not been previously exposed to the language. The against side stated that by learning the major language of the country the kids will have better options in the future, because Ebonics is restricted to certain communities, and that Ebonics can be learned and reinforced at home or in the community. What I wondered was whether there is any bilingual program in these communities that helps transition students in the early grades from their L1 to the common L2 of the country. That way they could shift properly to the majority language while retaining their native language, giving them the advantages of bilingualism. I thought both sides presented very convincing arguments, and I can understand why this is such a debated topic.

The second debate was whether the Fast ForWord program is greatly beneficial to dyslexic children and whether it should be integrated into the school system. The side supporting this position stated many points about the advantages of the program, including that it is an addition to, not a replacement for, reading. The program is very personalized, as it records and updates each person's scores daily, so the exercises progress at the pace of the individual. The program is also very incentive-driven to keep the children interested in the exercises, which I thought was a very ingenious idea. The against side had some good points, stating that all the research comparing this program to other similar programs was done by the agency that created the product. Also, independent researchers could not have open access to the product since it was under patent protection, so the data cannot be considered completely reliable and may be biased. What made me wonder was how different this program is from the leading dyslexia programs. What makes it stand apart, and what is its main asset that accords with the leading dyslexia research? Is it possible to achieve similar gains in improvement by using similar strategies without being in the program?

The final debate was on whether only oral communication should be used for children with cochlear implants. The against side presented some strong arguments, stating that oral communication on top of other normal forms of communication should be used for congenitally deaf children, since it will help reinforce communication and improve oral ability. The children will also be able to interact with hearing children who cannot communicate through ASL, reducing their isolation compared to children taught only orally. In addition, they stated that oral-only communication works for about 50% of the population, so implementing it for all subjects will result in success for only half of them, which is not feasible. The for side had some good points, but they were mostly theoretical, such as the idea that faster learning is accomplished through greater exposure: if high oral proficiency is required, the students have to be completely immersed in only that form of communication, and they will make larger gains than those learning a combination of oral and other forms of communication. That being said, I would have to concede that trying to defend something as extreme as oral-only communication for a congenitally deaf child can be quite hard, especially due to the huge weaknesses in the argument. One of the biggest is learning a new way of communicating through a stimulus you have never been able to perceive. Obviously it would be beneficial, and children would learn more with more exposure, but if they cannot understand or interpret the stimulus, then excessive exposure can be frustrating and detrimental. This can be paralleled to learning a new language without any translation into, or reinforcement from, the language you already know. As a result, the best solution would be a synthesis of the two arguments: a lot of oral communication is needed, but it has to be properly complemented with already established forms of communication to achieve a better transition to normal oral communication.

April 4 - April 8, 2011
The final week of classes concluded with three additional interesting debates. The first topic was whether right hemisphere activity helps aphasics with left hemisphere damage recover. The for side made some valid arguments and described several programs said to show right hemisphere activity compensating for left hemisphere damage. The first program is offered here at Dalhousie and is called inteRACT, the only adult aphasia treatment program in North America. This program is for adult aphasics and is an intensive 4-week training program with extensive exposure to speech therapy. This may seem like a positive program, but its relevance to the debate was a bit vague. The for side did not mention much in terms of increases in right hemisphere activity before and after the program to strengthen their point. The one question that came to my mind was which part of the brain is heavily activated by the intense exposure to the treatment, and how that area relates to the results produced. The second program mentioned was melodic intonation therapy, where singing is used rather than talking. From the data presented by the group, aphasics are much better at singing certain information than at talking about it, so this program encourages melodic communication. What seems weak about this program is the practicality issue. I understand that aphasics just want to be able to communicate with others no matter the method, but it seems rather impractical to communicate everything through melody. Furthermore, what makes the melodic stream different from the verbalization stream? In the end the person is still producing the same words, so it would seem that the melodic stream is just another level over the verbal stream. If the verbal stream is damaged, causing the aphasia, then how does melody overcome this barrier? The final method shown was intention treatment, where aphasics select only one output and stress that particular concept in order to produce it properly. This entails a much longer time to communicate, but at least it is some progress. The against side also made some very valid points, their main argument being that the left and right hemispheres have different functions, so compensation cannot occur across hemispheres. They showed that neural areas still intact around the lesion site seem to be activated excessively to compensate for the loss. This seems rather intuitive, as many cases have shown neural reorganization in people with congenital problems. Most of these are cases of undeveloped senses, such as congenitally blind people, where the brain area normally used for sight is taken over by neighbouring neural populations and other senses are slightly heightened. My overall thought on this matter is that at older ages right hemispheric activity is not drastically enhanced, but if the problem occurs early in life, then complete right hemisphere compensation can be seen. This is based on the evidence of complete lobe removals in young children and their subsequent normal functioning into adulthood. The extreme neural plasticity of the early brain can compensate for large left hemispheric lesions.

The second debate was by my group, about whether the majority language should be taught to minority-language-speaking groups in school. Since my opinions on this topic are quite biased, I will not elaborate any further.

The final debate of the class was on whether language affects the way we think. The for side produced some valuable points and different theories, such as the determinism and constraint theories of language. They used a rather ingenious analogy in which the constraints of our senses determine our outlook, and they wrapped up with data from performance studies comparing cultures speaking different languages, even touching on bilingualism. The against side used some valuable data and points, such as the idea that content and context limit your ability to think, rather than the language you speak. They brought up evidence from the colour categorization study across cultures, where certain languages had only specific categories for colours, but overall all cultures grouped the colours similarly regardless of the diversity of their categories. The one question I thought about was studies on infants before they develop or are exposed to any sort of language. Results from these subjects would help determine whether language does in fact have any influence on human thought. It would seem rather straightforward to think that experience has a greater impact in restricting the way a person thinks than the language they speak does.