User:Egreer

Eleanor Greer is a third-year student in linguistics and English literature.

January 13, 2011
While attempts to teach chimps language, and to prove they have the ability to learn and use it, yield interesting results, I must admit I am opposed to these experiments both academically and ethically. Academically, I find it hard to fathom that an animal as highly social as a chimp could have the innate ability to use language and NOT use it naturally. Ethically, I find these experiments' anthropomorphic attitude regrettable: the researchers claim (no doubt truthfully) to feel a deep emotional and personal connection with their subjects and treat them like humans, and yet I'm sure the issue of consent is never raised (and if it were, I suspect what ensued would be something like the end of "Rain Man"). It seems to me that while the researchers purport to treat the chimps as human, they contradict themselves by also treating them as animals under human control. I'm of course not suggesting chimps should be asked to sign a consent form, just pointing out the assumptions of human superiority and dominance over the natural behaviour of a different species that are inherent to these experiments.

January 23, 2011
I found our discussion about the lateralization of different language functions interesting, particularly the part about the role of the right hemisphere in processing prosody. We talked about prosodic features in terms of the emotional cues they give, and aprosodia being the insensitivity to these cues. I'm curious about sensitivity to tone, though, in languages where it is used as a phonemic feature rather than an emotional or syntactic cue. Would a speaker of a tonal language (e.g. Thai) who suffered right hemisphere damage and developed aprosodia be able to distinguish between a set of minimal pairs differing only in tone? In other words, is the right hemisphere associated with tone/pitch purely as an acoustic phenomenon, or only insofar as prosody is carried by it?

January 31, 2011
Phonological awareness is fascinating because of its largely unconscious nature--children presumably aren't aware that they are acquiring and improving their phonological awareness. The centrality of the syllable to phonological awareness seems to lend credence to the notion of the syllable, rather than the phoneme, as the unit of perception. I admit I know little about the evidence supporting the stages of phonological awareness (counting syllables, identifying onset and rime, etc.), but if it hasn't been done already I think an experiment involving people's ability to manipulate phonemes within the syllables of multisyllabic words would yield interesting results, both about the development of this ability and about the patterns of consonants in English onsets and codas (when one is changed, are there circumstances where the surrounding ones would also have to change?).

February 7, 2011
Chinese speakers tend to show lower phonological awareness because of their logographic writing system, which is not at all based on mapping sounds onto visual letters and stringing their sounds together. I wonder about the effects of deafness on reading a logographic system--would deaf people still be at a disadvantage, or if there was no picture-phoneme correspondence, and the meaning was deduced immediately from the visual representation on the page, would deaf people perform better than when reading an alphabetic system? It would be interesting to look at data on deaf Chinese readers first to shed light on this question.

February 14, 2011
I found our discussion of assimilation of ASL signs (like how the sign for "film" gradually became less iconic) quite interesting. This phenomenon can also be seen in spoken languages in words borrowed from other languages. A good example is the word "beef" in English, which was borrowed from French "boeuf" shortly after the Norman Conquest of 1066. This was before the Great Vowel Shift, which changed the vowel sound to the present one (the vowel shift applied to this word as if it were a native English word). Interestingly, more recently, the word "beef steak" has been borrowed back into French, where it appears as "biftek". I wonder if the first syllable will ever return to its original pronunciation?

February 28, 2011
I think the product view of discourse might give some support to the Sapir-Whorf hypothesis of linguistic relativism. It is fascinating to examine the social artefacts of language such as schemata and scripts, and comparing these representations cross-linguistically and cross-culturally would undoubtedly yield interesting results. Another interesting approach would be to examine these differences among different social groups of a language's speakers. For example, the restaurant script we discussed in class would likely look very different from a cook's perspective, from a diner's perspective and from a server's perspective. The events are the same and occur in the same order, but all the implications are different, right down to the very language involved in the process (kitchen jargon for the cooks, perhaps business jargon for the diners, etc.). As discourse is a very visible artefact created by a linguistic community, it throws light on social differences both between and within these communities and makes the differences in their mental representations obvious (as Whorf's hypothesis would predict).

March 7, 2011
The similarities we discussed between language and music were fascinating. It seems, though, that all the studies we looked at examined music as a separate entity from language that uses some of the same brain areas. I am curious about how the brain processes the two entities at the same time, i.e. song lyrics. Most people (not only trained musicians) are able to detect harmonic violations, but what about prosodic violations in song lyrics? While musical beats can enhance the prosody of lyrics, this is not always the case. Many songs' stressed syllables don't necessarily match up with the music's strong beats. Is this detected as naturally and in the same way as harmonic violations, or is it processed differently because of its greater complexity?

March 14, 2011
One of the most popular discussions amongst worriers over what is becoming of society is the decline of so-called sophisticated means of communication (like letter writing) and the rise of the evil text message and wall post. But in reality, a lot of the same skills are used when composing a text message of max. 140 characters as when writing an essay of max. 3000 words. You may have to shorten sections (through the cunning use of smileys), exclude things, and you absolutely have to plan out in advance what you will say. It must be succinct, but not rude. One of the most interesting challenges in writing such short compositions as these is conveying tone of voice. When gesture, expression and prosody can't be detected, the texter is left with a unique problem: there usually isn't room to write "I'm happy about this" (not to mention that that would be weird), and seemingly straightforward messages can become ambiguous very easily. So far, it seems the best solution to this problem is the use of smileys. I would even go so far as to say that these are the iconic gestures of the texting world!

March 21, 2011
Kuhl's research on motherese provides a strong argument for children having innate language-learning ability. Motherese is exaggerated and simplified, but babies who have been exposed to motherese don't talk like that. In fact, "baby talk" is often discouraged by parents who don't realize that they are doing it too. If language development were just a matter of copying parental behaviour, then surely children would speak to each other and to their parents with the exaggerated prosody and phonemics of motherese, but this certainly doesn't seem to be the case.

March 28, 2011
I found the guest lecture on aphasia very eye-opening. I was shocked to learn that aphasia is more prevalent than Parkinson's disease! I especially thought the videos she showed us were fascinating. It's one thing to read or be told about the condition, but quite another to actually watch and hear someone struggling to make themselves understood like that. So first I wanted to say thank you so much to both Dr. Newman and the guest lecturer (I'm sorry I can't remember her name) for including that class in our curriculum. Second, I have a question about the differences between fluent and non-fluent aphasia. Frustration was listed as a symptom of non-fluent aphasia. This is entirely understandable, as it must be awful not to be able to express thoughts in language. But I don't understand why frustration is not associated with fluent aphasia. How is it not frustrating to think, again and again, that you are making sense and talking coherently, only to be told repeatedly that no one understands you? Obviously this must be very challenging to study, as a researcher couldn't rely on speech to communicate with a patient. It would be interesting, though, to examine in more detail the symptom of frustration in aphasia. All things considered, it must be one of the most difficult symptoms to deal with, both for patients and researchers.

April 4, 2011
The debate on oral vs. total communication training for deaf children was fascinating, particularly because it is actually current and relevant in the world today. Regardless of which side presented a stronger argument in class (and both were very good) I support total communication training for children with cochlear implants. Even if a child has cochlear implants and so can hear, he or she will never be able to function in mainstream school as well as a hearing child will. This is why these children need special training to help them cope with their disability. Given that special education is provided, I see no reason not to use as many means as possible to communicate with these children. Certainly oral communication is necessary given that they are able to hear, and obviously oral communication is more helpful in society at large than sign would be. But why shut these children off from the deaf community when they are, in fact, deaf? Why not teach them the skills to get by in both deaf and hearing contexts? My position was strengthened when Dr. Newman played us a recording of what speech sounds like with cochlear implants. The poor sound quality makes it seem ludicrous to provide only oral communication to these children when sign must in fact be clearer.

April 11, 2011
On the whole, I enjoyed this course and found it interesting. In particular, I really enjoyed the guest lecture on aphasia. I also liked that due dates for assignments were well spread out over the semester. This was particularly helpful around midterms and finals because there was nothing major due in this class! I did not like how the grading scheme for the blog posts changed in the middle of the semester, and also the confusion over the chapter rubric. I know this was the first year for this assignment, but next year it would be nice if the TA and the prof were more on the same page from the beginning of the semester about how things were graded. That being said, I thought the blog posts were a good and fairly low-stress way of synthesizing material learned. I liked the debates during the last two weeks, but although the topics were interesting, I found they got repetitive halfway through. Because of the time frame, no one could really bring anything new to the discussion during the rebuttals, and so there was a lot of repetition. My suggestion for improvement would be to have both sides' presentations and then a more informal discussion with the whole class rather than a formal debate format. The Wikiversity chapter was an unusual assignment. It was interesting to research a particular topic and write a paper, but because there was no final or midterm, the only things we ever studied in detail were our debate topic and our chapter topic. This makes it too easy not to learn anything all semester except those two things! (Of course I'm not talking about myself here...haha.) I know I'm contradicting what I said above, but while having no exams was less stressful, it also made it difficult to know if you were understanding the material correctly.