User:Jchetwynd

I am a 4th-year Psychology student with an interest in completing a master's degree in Speech Pathology. I also have a BAA in Information Technology with a certificate in Business Administration and have worked for 4 1/2 years in Public Relations in between university degrees.

Blog Entry #1- January 17, 2011- The use of language is a fundamental aspect of many of our lives that can easily be taken for granted, but for others it can be a challenging barrier to daily interactions. The class discussion of various forms of language, such as Toki Pona and sign language, was very intriguing to me. It is fascinating to examine the diverse means by which we communicate and the various cultural factors affecting this communication. Languages spoken in different areas of the world share many similarities in grammar or syntax but are composed of different gestures or vocabulary. It would seem that culture has a direct influence on the differences observed in various languages. Culture and language appear to be intrinsically linked and can impact the way we consider the world around us. In conclusion, a question arises from this discussion: are language differences a direct result of cultural norms, or does language influence culture?

Blog Entry #2- January 24, 2011- Can we think without language? Life without language is difficult to imagine for those who use it so frequently. Many of us cannot begin to understand the limitations on our daily interactions without the ability to use language as a communication tool. A question arose in class that really intrigued me: can we think without language? Clearly, it becomes challenging to view or think about the world around us without using language when we are involved in reasoning and rationalizing. I believe that without language we can still think on a more simplistic level. However, while language allows the expression of thought, it also constrains the manner in which we think. Take, for example, a biologically instinctive response (the fight-or-flight system), which is initiated without the use of language. This example seems to indicate that we can think about and respond to our environment without the use of language. In conclusion, it appears we can think without language, but the grey area becomes: what level of thinking is possible without language?

Blog Entry #3- January 31, 2011- American Sign Language. To continue on from my last post, "Can we think without language?", it seems interesting to examine alternative methods of communication. Having recently completed a Level 1 introductory sign language course, I am amazed by the depth of the vocabulary and expression used within this form of communication. When we think of language we typically think of acoustically conveyed sounds, but sign language is expressed through visually conveyed signs. Communicating through sign language involves movement of the hands, arms, body, or face to convey an individual's message. In previous class lectures we have discussed American Sign Language. When using this form of communication it is easy for miscommunication to occur due to something as simple as a misdirected hand motion. Prior to completing the introductory sign language course I was under the impression that American Sign Language was the universal, globally accepted method. During the course I learned that there are many different forms of sign language, with very region-specific signs in deaf communities across the world. From this brief discussion a question arises: do individuals using sign language develop language in the same manner as someone learning an acoustically based language such as English or German?

Blog Entry #4- February 7, 2011-Do individuals using sign language develop language in the same manner as someone learning an acoustically based language such as English or German?

To follow up on the question I presented in my previous blog entry: do individuals using sign language develop language in the same manner as someone learning an acoustically based language such as English or German? It is interesting to examine how language develops in an individual, regardless of whether it is acoustically based or not. The environment in which the child is raised should be considered first in answering this question. The point at which a child begins to acquire language of any type depends on parental interaction, home environment, key stimuli, and the developmental abilities of the child. If the child has a learning disability or hearing impairment, the natural learning progression will be delayed. Setting those factors aside, my hypothesis is that a child using either ASL or verbal communication should begin to communicate at roughly the same developmental stage, because this is the method of communication they have been exposed to. According to several studies discussed in the textbook, there are many similarities in the language development stages of a child using ASL and a verbal child. A study by Brown (1973) found that speaking and signing language acquisition at the two-word stage are relatively similar developmentally. To conclude, I believe that the development of either an ASL or verbally based communication system depends on key environmental stimuli, as well as the biological makeup of the individual child. Another important consideration is whether ASL and verbal communication are used in combination or as separate communication tools.

Blog Entry #5- February 14, 2011- How is the perceptual process both universal and individualistic for people?

During a previous class we listened to audio clips of various words and were then asked what we perceived the words to be. What I found interesting is that some people heard differing words while others heard the same ones. I began to think about individualized perception processes and how perception is both individualistic and universal. Everyday life inundates us with a perceptual plethora of words and images, which can be interpreted either similarly or differently between individuals.

Perception is the way we interpret our worldly experiences, and it appears to be a very unique aspect of an individual even when people share similar experiences. Speech perception can be placed into two categories: the perception of individual speech sounds and of fluent speech. The concept of speech perception has been hotly debated for many years, and as a result several theories have emerged: top-down/bottom-up speech processing, motor theory, cohort theory, and the TRACE model. The theory I found most intriguing is the motor theory of speech perception. This theory was developed by Liberman et al. (1950) and hypothesizes that people perceive spoken words through vocal tract movements rather than the speech sound patterns themselves. Liberman suggested that the perception of vocal tract movements was a fairly innate process, specific to human cognitive abilities. The fundamental question underlying all these theories is how innate speech perception really is. If speech perception is congenital in nature, where does it evolve from, and why do differences exist in how people perceive acoustic stimuli?

Blog Entry #6- February 28, 2011- Development of Language in Children With Down Syndrome

After doing the peer review I became more interested in the underlying mechanisms that cause children with Down Syndrome to have language development issues or delays. I volunteer extensively with children with Down Syndrome and other learning/cognitive disabilities and always appreciate gaining further insight into these areas.

Children with Down Syndrome are cognitively delayed in many areas compared to typically developing children. Specifically, language development rates are greatly delayed, and language may not begin to develop until twenty-four months of age; the normal time frame for first words is about twelve months. Many questions arise from this observed language delay: what structural and functional elements at work in the brain so greatly delay the language development of a child with Down Syndrome? Can these children understand the world around them at the same developmental level as other children? Do children with Down Syndrome simply have strong receptive verbal capabilities but lack the verbal ability to express themselves in various linguistic ways?

In the peer review chapter it was cited that Cleland et al. (2009) noted that individuals with Down Syndrome have anatomical differences that make proper phoneme production more challenging. These differences include smaller oral cavities and low muscle tone around the periphery of the mouth, causing articulation problems. Another issue contributing to the improper production of phonemes is that close to 75% of children with Down Syndrome have co-morbid hearing loss.

Along with the genetic factors that contribute to these language delays, there are also environmental factors that can be responsible. These include the individual's home or school life, as well as the parents' or caregivers' communicative interaction with their child. Many children with Down Syndrome are able to say single words but struggle to progress to producing sequences of words.

With all the life struggles and successes individuals with Down Syndrome experience, language development issues create even more of a struggle when adapting to the world around them. From the discussion above we can conclude that anatomical differences are one of the main contributors to these language delays, alongside environmental and genetic factors. As with any disorder, many factors contribute to the causation and ongoing issues faced by an individual. Above all, it is important to recognize that disorders are multidimensional and not always easily understood.

Blog Entry #7- March 7, 2011- Language versus Music Perception

Individually we are all similar as well as unique, but one ability we share is that we can speak fluently in our native language. Even so, we do not all possess a natural talent for singing or playing a musical instrument. The other day in class the lecture addressed a theory about language and music, which proposed that we actually use similar brain systems, or overlapping cognitive faculties, for perceiving and comprehending the syntactic structures of both language and music.

Recently there has been growing research interest in the neural basis of music perception and how it is differentiated from language perception. This prompts the question: does a cognitive/neural relationship exist between language and music? Do separate brain systems or areas operate in differing ways when perceiving language versus music, or does significant overlap occur in the brain? Does syntactic processing in music perception share anything in common with syntactic processing in language perception?

As research progresses and new findings add to our current knowledge base, we can better understand the connection between language and music perception in various brain regions. Are certain individuals "hardwired" with the ability to play a musical instrument or to sing in tune? Do they have the capacity to perceive music on a different level? Music and language perception seem related in many ways, and much debate centres on whether music is itself a language. Music clearly conveys information to some degree and evokes a variety of differing moods and sensations within an individual. The true debate becomes what exactly the nature of that conveyed information is and how music is coded in our brain: is it different from verbal language? Can we place music under the same categorization as language, in that it can be held to a set of symbols that represent the world around us while still being confined to grammatical rules?

Research now suggests that language and music are more closely related than previously believed. This growing body of research has the potential to redefine our understanding of the relevant brain regions and the roles they play in processing music versus language.

Blog Entry #8- March 14, 2011- How do we get from a thought to a vocalized speech pattern?

Is the human ability of forethought directly connected to blended speech errors?

In many of my previous psychology classes we have discussed how forethought, rather than simple intuition, separates human beings from the rest of the mammalian kingdom. We have the advantage, or sometimes disadvantage depending on the angle from which you address the issue, of planning days, weeks, or months in advance of an event. In a recent psycholinguistics class we examined types of speech errors, and I began thinking about the connection between our ability of forethought and speech errors. Specifically, I am interested in blended speech errors because they occur commonly in the English language and always have an inherently humorous aspect to them. Take for example the statement "At the end of today's lecture". A blended speech error for this sentence could be "At the end of today's lection", because we may be thinking of the dual meaning of lecture/lesson, and a combination of the two emerges vocally.

In order to address the aforementioned question we must also look at lexical selection. With lexical selection, an incorrect word choice can be made based on partial meaning similarity (lecture/lesson = "lectson"). When thinking about vocalizing a thought we select certain words to express it, but sometimes we may select two words with similar meanings, which leads to a blend of the two. Perhaps it is the speed at which a thought turns into a vocalized speech pattern that lends itself to these blended speech errors. One research study found that it takes approximately half a second to transform a thought into something we vocalize, with three types of speech processing occurring in Broca's area for this to happen (Sahin et al., 2009). In light of this research, it would seem that an immense amount of "neural" work is performed in order for a simple thought to become vocalized. We are only beginning to scratch the surface of the underlying mechanisms that allow our brain to function on a variety of levels, and it is only with further exploration and research into these neural pathways that we will begin to find the appropriate answers.

Blog Entry #9- March 21, 2011- What, if any, are the neural/structural brain differences in individuals who are bilingual?

Individuals who are fluent in two or more languages have always intrigued me. I am not simply referring to the ability to converse minimally or recognize a few words in another language; I mean being fully fluent (able to read or converse in that language for an extensive period of time). For the purpose of this discussion, bilingualism will be considered the aptitude to write or speak fluently in two languages; any more than two will be considered multilingualism. What intrigues me about the ability to speak multiple languages is the potential neural benefits it poses for the individual. What are the neural differences in the brain of an individual who speaks more than one language, and what are the potential benefits of being bilingual?

After completing a significant portion of the Psycholinguistics course, I have learned about the various effects of learning on brain structure and neural pathway development. As a result, we can assume that learning more than one language will have an effect on the "neural wiring" of the human brain, or certainly on cognition. While doing some reading on this topic I came across a study with very interesting and relevant results. A study conducted by Jay et al. (2003) found that a language learned in adulthood, which was not the participant's native language, was spatially separated in the brain as a result of the acquisition. As information and sensory processors, humans are continually filtering stimuli and learning new abilities. It was quite interesting to discover that the brain had spatially separated a new language from the participant's native language.

Learning another language encourages the individual to think in a conceptually different way, because many languages outside of English have words that carry dual meanings. For example, the word Himmel in German refers not only to the sky but also to heaven. In previous courses I have learned that it is much easier for a child to learn a new language than for an older individual. A significant amount of research deals with the concept of a critical period, which suggests that language acquisition is restricted to a certain time frame during development; however, the critical period is still hotly debated among researchers. It is clear that many factors influence the neural organization of language, such as the age of the individual and their proficiency in the specific language.

Blog Entry #10- March 28, 2011- TOT (tip of the tongue) Phenomenon

After discussing the tip of the tongue (TOT) phenomenon I began thinking about its occurrence rates in various demographic populations. From what I have learned in this class and previous classes, I can only hypothesize that this phenomenon happens to every individual of any cultural origin at some point in their life. This thought led to the question: if TOT is relatively common and not isolated to a specific demographic population, what are the neural workings underlying this phenomenon? What commonality in the TOT phenomenon makes it observable in so many individuals? In order to discuss this linguistic phenomenon we need to define TOT. When a tip of the tongue error occurs, an individual is trying to recall or retrieve a word from memory but simply cannot, no matter how long they allocate conscious cognitive resources to the task. This phenomenon has been reported in many languages, including those spoken in France, Portugal, and Romania. It also appears to be age dependent, with seniors experiencing TOT twice as frequently as a younger subpopulation of university students (Brown, 1991).

It is intriguing that when the word itself will not come to mind we still have the cognitive ability to describe certain characteristics or elements of information about the word. One aspect of the TOT phenomenon I find especially interesting is that minutes or hours later the word suddenly comes to you, seemingly out of the blue. Perhaps this relates to the human ability of forethought. As we speak we are continually using extensive neural capabilities, thinking of the next words and sentences. It is possible that if we activate too many of these neural networks in advance, the result is a retrieval block, a synaptic pattern that cannot emerge due to the overload occurring. TOT could also occur more frequently when we are placed in uneasy or anxiety-provoking situations. TOT is a frustrating failure of the brain, but it provides a good example of the limits of cognitive resources and neural networking.

When discussing the TOT phenomenon, a question came to mind that seemed to connect to my last blog post. Since bilingual individuals draw on a larger pool of words than monolingual individuals, is there an increased chance of TOT occurrences?

Brown, A. (1991). A review of the tip of the tongue experience. Psychological Bulletin, 109 (2), 204-223.

Blog Entry #11- April 4, 2011- Brain & Autism

How is the neural networking in the brain of a child with Autism different from a typically developing brain?

I am always intrigued to learn more detailed information about differences in brain networking or structural abnormalities in individuals with specific disorders. Through the range of volunteer work I have been involved in, I have had the opportunity to interact with a specific demographic population of children with various psychopathologies such as autism, Down Syndrome, or ADHD. When researching this question I came across a recent study that examined the pattern of brain activity in children with autism spectrum disorder and found that it is quite different from that of children without the condition (Kaiser et al., 2010). This research is fascinating because it provides insight into the brain of an individual with autism and a better understanding of which parts of the brain are affected by the development of autism. Autism spectrum disorder is a genetically influenced condition that affects brain development, with identified symptoms including difficulties with social communication and interaction.

A typically developing brain experiences significant growth between birth and age two. Approximately one-third of children with Autism have a disproportionately large head and brain, the result of excessive brain tissue growth (Kaiser et al., 2010). This excess brain tissue is believed to alter the neural networking of the brain, but environmental and genetic factors are also key contributors to the developmental pathway of autism. A child with autism has extremely delayed brain development, approximately three years delayed compared to a "typical" child, which supports the theory that this disorder is a result of developmental delays and not bad behavioural tendencies (Kaiser et al., 2010). The study also found that these developmental delays were most significant in the lateral prefrontal cortex, the part of the brain responsible for suppressing inappropriate thoughts and actions, attention, and related functions.

Advances in neuroimaging have provided the ability to gain further insight into the inner neural workings of the brain. The aforementioned study has provided significant insight into the neural pathways of the autistic mind. Advances not only in neuroimaging but in medicine in general have improved diagnosis, prevention, and treatment methods for a variety of disorders, such as Autism.

Kaiser, M. (2010). Neural signatures of autism. PNAS, 107(49), 21223-21228.

Blog Entry #12- April 11, 2011- Gestures

Are gestures as individual as our personalities? Do we each have unique gestures or are there universal gestures?

In a recent psycholinguistics class we touched on gestures made while speaking. Many of us gesture unconsciously and out of habit, with some individuals gesturing more frequently than others. Gestures fall into several categories, such as cohesive gestures, iconic gestures, and beats. What interests me is why we gesture when speaking, and how unique the gestures really are to the individual. Do gestures aid in the communication of meaning, or do they transfer additional information? Sign language is an entire method of communication based on gesturing and is very effective within a specific population. Many gestures are made when we communicate the size or shape of an object, adding value to the verbal description. Take, for example, a situation with too much background noise: a listener may look to gestures to interpret something they missed at the acoustic level.

Across age demographics gesturing seems to be important, and even for very young children it becomes an integral part of their communication methods. The relationship between gesture and vocal language seems to be quite significant and widespread in our culture. Gesturing could also play a significant role in the way an individual remembers a situation or an interaction. In conclusion, gesturing seems to be a significant component of human communication and would be a very interesting area of psycholinguistics to study in depth.