User: Moraghjang

Hello, my name is Moragh Jang and I am in my fourth year of Neuroscience at Dal. I have taken several classes in development and in speech production and perception, all of which I have enjoyed thoroughly. I am currently working on my honours thesis under the supervision of Dr. Dennis Phillips, focusing on hearing. I am interested in pursuing a career in teaching and/or hearing and language research.

Blog Post 1
"Moragh, you're Asian... you're supposed to be good at math!" - a not-so-good friend of mine from grade 4. I used to hear this all the time; being half Chinese, I should be good at math. Now that's a pretty bold statement. Where on earth does this stereotype come from? Well, if you look at the stats, it's true that Chinese, Japanese, and Korean children tend to outperform North Americans in basic arithmetic and other mathematical problem-solving tasks. Where does this advantage come from? Many argue that it comes from the higher expectations set by teachers, parents, and Asian school systems, or from the fact that Asian students on average spend more time in school and on homework nightly, but could the advantage come from differences in their languages of math? There is a lot of evidence to support the idea that the mathematical language (the system for naming numbers) embedded in most Asian languages has a profound influence on the way its users solve problems; that in Asian countries, the languages spoken influence the very way people think. I am going to talk about a few arguments that have led me to believe this could be the case: that the language used by Asian cultures could very well be what is influencing their mathematical ability. Let's look at digit span for starters, a very useful tool when considering calculations that require you to remember long strings of numbers. Many Asian languages have number names that are very short relative to the names for the same numbers in English. If you are asked to memorize a string of numbers, wouldn't it be advantageous to be able to repeat it in your head more times before forgetting it? Well, if your number names are very short, then you could hypothetically repeat the string more times within that short window, giving you better rehearsal of the numbers and, in turn, potentially better recall.
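The rehearsal argument above can be turned into a back-of-the-envelope calculation. Assuming a fixed rehearsal budget of roughly two seconds (the figure often associated with Baddeley's phonological loop) and some made-up average per-digit durations (the 0.33 s and 0.25 s below are illustrative guesses, not measured values), shorter digit names buy you more digits per rehearsal pass:

```python
# Back-of-the-envelope sketch of the digit-span argument: shorter number
# names let more digits fit into one pass of a fixed-length rehearsal loop.
# All durations here are illustrative assumptions, not measured data.

LOOP_SECONDS = 2.0       # rough phonological-loop budget (assumed)
ENGLISH_DIGIT_S = 0.33   # e.g. "seven" is comparatively long (assumed)
CHINESE_DIGIT_S = 0.25   # Mandarin digit names are mostly short syllables (assumed)

def rehearsable_digits(per_digit_seconds, loop_seconds=LOOP_SECONDS):
    """How many digits fit in one pass of the rehearsal loop."""
    return int(loop_seconds // per_digit_seconds)

print(rehearsable_digits(ENGLISH_DIGIT_S))  # 6 digits per pass
print(rehearsable_digits(CHINESE_DIGIT_S))  # 8 digits per pass
```

Even with these rough numbers, the shorter names yield a couple of extra digits per pass, which is the hypothesized rehearsal advantage.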
Asian speakers also learn to count earlier in life because their number names are simply combinations of a few base numbers (one through ten), while English speakers have to learn unique names for many numbers. Take the number 13, for example: in Japanese, Korean, and Chinese it would be called something like "ten-three," while in English it is "thirteen." In English you need just under 30 unique name codes to count to one hundred, compared to about ten codes in Chinese, for example. Well, their system makes a heck of a lot more sense to me. It seems, subjectively, to be such a more logical system, while the system for naming numbers in English is so weird and irrational. If math were that easy, perhaps I would like it more, perhaps I would enjoy doing my math homework, perhaps I would do better on my tests and my parents and teachers would praise me more, which I would continue to strive for, and perhaps, following this self-fulfilling prophecy, I would be better at math! Perhaps language is not what perpetuates this advantage, but it could certainly kick-start the system, so to speak. Though I am not one hundred percent convinced that differences in mathematical languages are at the core of the differential in mathematical performance between Asian and Western cultures, I do believe that it warrants some thought. "So listen here, not-so-good friend from grade 4... I don't speak a word of Chinese... so no! I am not 'supposed' to be any better at math than you!"
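The "name codes" comparison can be made concrete with a little script. This is a toy sketch, not real linguistics: it spells 1 through 99 with English's irregular words versus a simplified Chinese-style compositional system ("ten-three" for 13), then counts how many distinct word codes each system needs (adding "hundred" itself would add one more code to each):

```python
# Toy comparison of the distinct "name codes" needed to count 1-99 in
# English versus a Chinese-style compositional system. The word lists
# are a simplification for illustration.

ONES = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
         "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

def english_name(n):
    """Spell 1-99 using English's irregular codes."""
    if n < 10:
        return ONES[n - 1]
    if n < 20:
        return TEENS[n - 10]
    word = TENS[n // 10 - 2]
    return word if n % 10 == 0 else word + "-" + ONES[n % 10 - 1]

def compositional_name(n):
    """Spell 1-99 compositionally: 13 -> 'ten-three', 47 -> 'four-ten-seven'."""
    if n < 10:
        return ONES[n - 1]
    tens, ones = divmod(n, 10)
    parts = ([] if tens == 1 else [ONES[tens - 1]]) + ["ten"]
    if ones:
        parts.append(ONES[ones - 1])
    return "-".join(parts)

def unique_codes(namer):
    """Count the distinct word codes a learner must memorize for 1-99."""
    return len({code for n in range(1, 100) for code in namer(n).split("-")})

print(unique_codes(english_name))        # 27 irregular English codes
print(unique_codes(compositional_name))  # 10 codes: one..nine plus "ten"
```

The 27 it reports for English matches the "just under 30" figure above, versus ten codes for the compositional system.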

Blog Post 2 "That's just not punny!"
"A bicycle cannot stand on its own because it is two-tired"... or how about "What did the aspiring Tao master say to the hot dog vendor? ... Make me one with everything." Oh wait, this one is applicable: "A rule of grammar: double negatives are a no-no." Are you shaking your head yet? Everybody (who understands it, at least) gets a kick out of a really good pun. We applaud "good" ones and shake our heads at "bad" ones, but what determines what is good and what is bad? I have spent my whole life not understanding puns, being the last one to get the joke, or being the first one to make a pun and having people say, "Really? ... Are you ninety? That was lame." These unfortunately corny, yet clever, jokes that make people giggle and/or groan mystify me. They raise so many questions in my mind. Is it right for us to take credit for being quick and getting the joke right away, or for being the clever one who comes up with them, if we could not really explain to anyone how we figure them out? Are you just good at them or bad at them, or can you GET good or bad at them? What does our brain activity look like online when reading or hearing puns? Does it resemble a garden path sentence? To maximize the enjoyment you get out of a good pun, you really need to grasp both what was literally said and what was actually meant. I bet our ERPs look similar to those of people reading garden path sentences, or sentences with something wrong in them. Where does the myth that puns are old-people jokes come from? Does our ability to understand and generate puns change with age? Why might this be? Does it make a difference whether we hear them or read them, getting additional cues from the syntax of the sentence? How do puns work in individuals who communicate using sign language? There are so many questions that I would love to see answered.
Perhaps answering them will be my homework for the week, and I will post some answers, or more puns, in Blog 3. After all, "seven days without a pun makes one weak!"

Blog Post 3 "The chicken or the egg?"
I am going to throw out some ideas that came to me during lecture; unfortunately, they are almost completely unrelated to one another, so I will simply mention them in the order that they were scribbled in the margin of my notebook. Firstly, I am wondering what the correlate of prosody would be for individuals who use sign language to communicate. Would it be an abruptness or fluidity to their gestures, or perhaps the addition of exaggerated facial expressions? I understand that you could convey sarcasm through choices of words or signs, where individuals who use spoken language would use prosody, but I would think a certain amount of information is lost in the inability to use inflection. Do individuals who are aprosodic have the same difficulty with sarcasm as individuals who sign? Similar in spirit to the motor theory: would aprosodic individuals be able to produce prosody in their speech and use it in the same manner as individuals without this deficit? Which came first: our ability to understand or to produce prosodic elements of speech? Is aprosodia a problem with speech perception or a fundamental auditory deficit? There is lots of research on this, and I intend to elaborate on it in future posts once I decide what my standpoint is. In keeping with the which-came-first-the-chicken-or-the-egg flavour of the motor theory, I propose a rebuttal to the argument that aphasia refutes the theory because aphasic individuals can still understand speech despite being unable to produce it, or vice versa. My rationale is this: individuals who acquire brain damage were obviously not born with it; as such, at one point in time they were able to learn the association between their motor output and auditory input, so even if they lost the ability to articulate words vocally, they would still have the memory and template for comparison between what they say and what they hear.
So using this as an argument against the motor theory is not very strong. I still don't believe in the motor theory, but I am simply arguing that aphasia as evidence against the theory is not a particularly valid point. On a completely different note, I am also very interested in why on earth animals would have categorical perception. Might this imply that they have the propensity to learn language or understand it? Also, even though we are not very good at it, would we as native English speakers ever use within-category discrimination? I simply can't think of any examples off the top of my head where we do use within-category discrimination (i.e., different kinds of /ba/ or /da/). Is our poor within-category discrimination a function of the fact that we don't use it in our language, or is it not used in our language because we are not good at it? Again, that darn chicken and that darn egg!

Blog 5: Spoductions in Preech Error
While writing my textbook chapter on models of speech production, I could not help but fixate on how fascinating speech errors are and how much they really do tell us about the "mind-to-mouth" process. I was also speaking with a friend who is writing her chapter on pidgins, creoles, and home sign, and we started talking about the evolution of home sign and creoles in sign language; she also came across a lot of literature on production errors in sign language and how they are produced. We noted how interesting it is that the same sorts of mistakes seem to happen in both verbal and non-verbal languages. More specifically, both types of language are produced holistically (a whole phrase at a time), and within each phrase there is an internal structure whose linguistic components are produced in a very specific order. It was neat that our "tip of the tongue" was their "tip of the finger" and that our "slip of the tongue" was their "slip of the hand," and that for both modalities speech errors were able to tell us a lot about the order of this processing, with both languages following very specific rules of production. Moreover, both languages store language in simple basic units that can be combined in many ways, suggesting that regardless of how you speak, humans have a single language faculty. In my literature review I came across a paper called "Read My Lips," about speech errors and Kanzi the bonobo. The authors were able to demonstrate that bonobos also seem to have complex categorization systems when discriminating lexigrams, and that when errors were made in lexigram discrimination, they were almost always between semantically related lexigrams.
For example, when she wanted blackberries, Kanzi often mistakenly selected the lexigram for blueberries, grapes, or other edible fruits; she and other bonobos only ever confused lexigrams for verbs with other verbs, and nouns with nouns, never nouns with verbs. This indicates that these items may all be stored within the same "node," and that perhaps the way bonobos and other apes produce overt communication could be modelled by spreading-activation theories, similar to the way that human speech is currently modelled. In this same article (which then led me to look at others of a similar variety), the authors also talked about research that matched types of speech errors with level of speech development. They found that adults actually make more anticipatory ("frive fat frogs") than perseveratory ("five fat fogs") speech errors. This is right in line with research conducted by the famous Dell (the psycholinguist behind the Dell model of speech production), who found that more errors involve activation of future or present nodes than past nodes (referring to nodes representing components of the phrase). Thus the less practice one has, and the fewer nodes one has acquired, the higher the propensity to make errors, and the more perseveratory those errors become; as Dell says, "Whatever makes you more error-prone makes your errors more perseveratory." I also found some stats saying that signers make fewer speech errors than speakers, and it took me a minute to realize why. Then it struck me, and this idea was also backed up in a number of papers: signing is much slower than speaking, so signers have a better chance of catching and fixing their mistakes than speakers do. This is right in line with the idea, tacit in Dell's model, that spreading activation extinguishes over time, and with earlier models that include a self-monitoring component, which needs time to work. Another idea, a new way of thinking about spoonerisms, also emerged in the papers I was reading on signing errors.
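The anticipatory-versus-perseveratory asymmetry can be sketched with a toy model. This is my own simplification for illustration, not Dell's actual equations: upcoming words get primed in advance while already-spoken words decay, so at any moment a future node tends to be more active than a past one, making anticipations the more likely intrusion. The PRIME and DECAY values are made up:

```python
# Toy sketch (an illustrative simplification, not Dell's real model) of
# why spreading-activation accounts predict more anticipatory than
# perseveratory errors: future words are primed ahead of time, past
# words decay, so future nodes are usually more active than past ones.

DECAY = 0.6   # fraction of activation retained per word of distance (assumed)
PRIME = 0.5   # baseline priming given to non-target words (assumed)

def activations(words, current_index):
    """Activation of each word node while producing words[current_index]."""
    acts = {}
    for i, w in enumerate(words):
        if i == current_index:
            acts[w] = 1.0                                       # target: fully active
        elif i < current_index:
            acts[w] = PRIME * DECAY ** (current_index - i)      # past: decaying
        else:
            acts[w] = PRIME * DECAY ** (i - current_index - 1)  # future: primed
    return acts

phrase = ["five", "fat", "frogs"]
acts = activations(phrase, 1)        # currently saying "fat"
print(acts["frogs"] > acts["five"])  # upcoming word beats past word -> True
```

Because the upcoming "frogs" node is more active than the fading "five" node, an intrusion is more likely to be anticipatory ("frive fat frogs") than perseveratory.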
First, the authors outlined how slips of the hand often involve blends rather than whole-symbol exchanges, and similar studies of speakers found that when spoonerisms occur, the tongue, mouth, and vocal tract are actually trying to pronounce the two sounds (the target and the mistaken one) at the same time; the mistake is not merely a substitution of mental symbols but a collision of motor commands. This research was fairly recent (2008, I believe). I would like to see whether they have made any further claims about these speech errors, and whether, if bonobos were taught actual sign languages and not just lexigram discriminations, they too would show the same sorts of errors.

Blog 6
I have two separate thoughts that I wish to discuss in this entry; the first involves the idea of speech impediments that go unnoticed, and the second investigates speech production models and ways to test them. A dear friend of mine, whom I have known for several years now, came to visit over the break, and though he is a very interesting character, I have never had trouble understanding him in conversation. It became clear to me, though, that this is likely due to practice, because listening to him converse with my housemates, and seeing their reactions to the things he said, made it clear that he does not enunciate very well; in fact, he is very difficult to understand. He not only speaks very, very quickly, but his Ms and his Bs sound much the same, and he has a very nasal quality to his voice, which makes it hard to use the vowel sounds to deduce what he is saying. I used to speak in a similar fashion, so quickly that no one could understand me, but I have since grown out of this, and my friend has not. Why was I, and not he, able to overcome this? Was it because people persistently told me to slow down, or because I eventually heard how unintelligible I sounded? Surely, being a very social person, he has lots of exposure to well-articulated, well-enunciated language; does he not realize that he sounds different and that he is difficult to understand? This question comes back to different theories of speech acquisition: he is clearly not matching the sounds that he produces with those that he hears, unless he does not hear the difference. So is this a hearing deficit or a speech deficit? How could this be tested, and how could you discriminate between the two deficits if they present the same way? How many people's speech deficits stem from, or are confused with, fundamental auditory deficits? Are conductive and mechanical hearing impairments at the heart of speech production errors, and if so, which ones?
He was also telling me about migraines that he gets, which brought up the conversation we had in class about strokes and migraines and how they can affect speech. I showed him the video of the news reporter reporting from the Oscars a few years ago (or last year, I am not sure), the one that Dr. Newman showed us in class, and something occurred to me. Though the reporter clearly had a momentary lapse in her ability to produce intelligible spoken language, it was really only the words and syntax that were lost, while intonation and gestures remained intact. Another important thing I noted was that she said the word "tonight" seemingly in the right place and at the right time in the sentence, without any difficulty, yet she made errors in production leading up to and following it. How could this be? Can this tell us anything about the order of events that went into planning the sentences she was going to say? Also, was she reading a teleprompter? Had she already read and planned what she was going to say, with plenty of time to rehearse? Why would a word in the middle of the phrase, the intonation, and the gestures all be intact while the rest was lost? Is there any way of studying the effects of migraines, loss of blood flow, or the responses of various brain areas on speech production online? Perhaps using local anesthetics or transcranial magnetic stimulation. I will do some research on this and perhaps add it to my chapter on models of speech production; I think I could find a way for it, or something like it, to apply.

Blog 7: Music makes the world go round
It has long been known that humans process speech and non-speech sounds separately. It seems fitting, then, that there would be aphasias producing deficits that implicate speech or non-speech sound processing in isolation, such as amusia (music being composed of non-speech sounds). I am doing my honours research on humans' ability to detect irregularities in otherwise periodic sounds (the ability to detect a small amount of jitter in a train of clicks). Our ability to detect these irregularities, and the temporal acuity of our auditory system at low frequencies (the click rates in all of the test trains are considered low frequency), depends on, among other factors, the ability of our auditory nerves to phase-lock the timing of their action potentials to the period of the stimulus waveform. This phase-locking is what allows our auditory system to track smooth, continuous frequency modulations, like those produced in speech as intonation contours, as well as those produced in music as melody or vibrato. Individuals who are unable to phase-lock properly, due to damage or pathology implicating the auditory nerve and its spike timing, have difficulty understanding speech, and, not surprisingly, their perception of music is impaired as well. If these individuals were able to describe music, I wonder how they would describe it. Would they be able to appreciate an opera singer's tremolo or the quality of sound produced by a skilled sax player? Or would it sound like a series of discontinuous pitches that are just too sharp or just too flat? These individuals are considered to be amusic and aprosodic, for both speech and non-speech sound. I am currently determining the minimum detectable irregularities in adult listeners with normal hearing, but I would love to see how amusic individuals perform on this task, and whether their thresholds would be significantly worse than controls'.
I would also like to see how individuals with specific communication disorders perform on this jitter-detection task, particularly those who have difficulty understanding prosody and intonation. I wonder whether individuals who are unable to detect or perceive emotionality in speech, as is seen in many individuals diagnosed with autism spectrum disorder, would perform poorly on this task. I hope that eventually we will be able to develop a clinical test that would pinpoint the origin of pathology of many communication disorders like those mentioned above: individuals who perform normally on this task presumably have intact auditory nerves and nuclei (because the task requires phase-locking), while poor performance could well point to a problem with the auditory nerve. This could save hundreds of hours, dollars, effort, and headaches for people receiving treatment for something that is untreatable at present. Dr. Newman mentioned in class that, as with certain phonemes, there is a critical period during development for humans to be exposed to certain types of music and certain scales in order for us to be able to recognize and discriminate them from non-musical combinations of notes and rhythms. I wonder, then, whether other organisms, never having been exposed to music in their natural environments, could discriminate between what is musical and what is not. It is something that so many grade five students set out to test with their science fair projects, measuring the effect of classical music exposure or the like. Well, maybe they are not that far off. I wonder whether there are certain melodies or chords, or combinations of the two, that animals prefer; whether animals' stress hormone levels, eating/sleeping/mating habits, and behaviour in general are affected by exposure to certain types of music (with intensity/loudness held constant). Wow! My brain hurts!
I think that before I delve too far into the dream world of nifty yet scientifically unviable experiments, I should stop writing and start looking these things up. If I find anything interesting or supportive of any of these ideas, I will post the links!
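For readers curious what the jitter stimuli above might look like, here is a hypothetical sketch: a periodic click train versus one whose inter-click intervals are each perturbed by a small random amount. The parameter values (a 25 ms period, i.e. 40 Hz, and 2 ms of jitter) are purely illustrative, not those of the actual thesis experiment:

```python
import random

# Hypothetical sketch of the stimuli described above: a periodic click
# train versus one whose inter-click intervals are perturbed by a small
# uniform jitter. Parameter values are illustrative only.

def click_times(n_clicks, period_ms, jitter_ms=0.0, rng=None):
    """Onset times (ms) of a click train; each inter-click interval is
    perturbed uniformly by up to +/- jitter_ms."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    t, times = 0.0, []
    for _ in range(n_clicks):
        times.append(t)
        t += period_ms + rng.uniform(-jitter_ms, jitter_ms)
    return times

regular = click_times(10, period_ms=25.0)                  # periodic 40 Hz train
jittered = click_times(10, period_ms=25.0, jitter_ms=2.0)  # slightly irregular train

intervals = [b - a for a, b in zip(jittered, jittered[1:])]
print(max(intervals) - min(intervals) <= 4.0)  # intervals stay within jitter bounds -> True
```

The listener's task, then, is to tell the `regular` train from the `jittered` one, and the threshold is the smallest `jitter_ms` they can reliably detect.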

Blog 8: "Da-di-da-di-da"... or is it "Di-da-di-da-di"?
Last night, as I stared blankly at the darkness of my room, trying desperately to fall asleep, the song "We No Speak Americano" by Yolanda Be Cool was annoyingly running through my head non-stop. It was bizarre, though, because it was not the tune so much as the rhythm of the song, and an internally verbalized version of it. It was not verbalized by being paired with the song's actual lyrics; rather, it was simply paired with "Di-da-doo-da... Di-da-doo-da-di-da". If you know the song this might make more sense; it is a very catchy tune. Either way, I started thinking about how, when we produce a melody, however simple, we tend to pair it with these seemingly random morphemes, if not with the lyrics themselves. For example, Beethoven's fifth symphony becomes "dah dah dah duuuuhn... dah dah dah duhhn", and the guitar riff from the Beatles' "Day Tripper" comes out "neeeeeer, nir, nir, na ni neer, naneeer, naneenire"; you get the picture. I also noticed that when pairing the morphemes /di/ and /da/, the order in which I produce them is not really random: I tend to end with a /da/ in a string of /da/s and /di/s if the melody keeps going, and end with a /di/ to terminate the melody. I wonder what this is based on, and whether the frequency of the word and morpheme patterns that we are exposed to affects this order. It also just feels so unnatural to start with /da/ rather than with /di/. I wonder whether people who speak a language with a lot of words that end in /a/ would find the reverse to be true. I also liken the sound of the morpheme /da/ to the sound of the verbal crutch "uh", which native English speakers tend to insert between words. I suppose this whole train of thought stemmed from our recent discussion on language and music, during which we talked a bit about how singing with words tends to facilitate accurate melody production and the corresponding melody discriminations.
It made me reflect upon, and inevitably admit, my inability to do a Cranium Humdinger (if you haven't played that game, I would suggest it), where you try to get your team to guess the name of a song by simply humming it, without battling the urge to fill in all that sweet, sweet humming with those sing-song morphemes like "di-da-doo", "la-dee-dee", or "tralalala". With them, it seems so much easier to stay on pitch, and so much easier for my team to guess what I am singing. I wonder whether this burning urge to sing even the most random morphemes comes from efforts to facilitate my own memory of the oftentimes obscure target melodies, or simply from my urge to give my team hints by trying to sound like the most recognizable instrument that plays the target melody or rhythm, be it guitar, sax, or drums. It also made me think about how prevalent the tip-of-the-tongue phenomenon is in that task; if only I knew one lyric, or if only I could think of how that song starts... just give me the opening line, or the opening note. It also started me thinking about whether these types of sing-song morphemes are universal or language-specific. It could also just be a personal thing. I would like to find out more.

Blog 9: Hello language... goodbye language
First, a little tangent: I was just having a conversation with someone in the class about blogs, and we started talking about her blog on the cartoon that Dr. Newman showed us in class. I mentioned to her that I truthfully thought he was showing it for some reason other than why he was actually showing it. He was using it as a demonstration of how people use gestures, by having someone come up to the front of the class after we watched the cartoon (it was of Tweety and Sylvester, I think) and explain the story while we paid close attention to her choices of gestures. While we were watching it, however, a clip of about seven minutes contained a total of about twelve words, all spoken by Tweety's owner, yet we were able to follow the whole story with only that much narration. It got me thinking that the reason we are able to follow these stories, where the main characters hardly say a thing, is their extreme, hyperbolic facial expressions and body movements. They say so much with their gestures and their timing. I thought that that was the point he was trying to make. And then it got me thinking about silent film, and how very much we can derive from clips with no speech. We do so much top-down processing that we hardly seem to require spoken language at all. Second tangent: why, then, if we can get the gist of things without spoken language, do we need it? We can clearly communicate without it, if we practice. Does it really make us smarter than any other species, or superior? I don't think so. Nor is it something that we (our generation) can take credit for; it was taught to us, and has been passed down. Just because other organisms seem to be able to communicate just fine without it does not make us superior to them. If they have not evolved to speak a language, it's because there have been no evolutionary pressures to do so; if there were, then I am confident that spoken language could have evolved in many other species.
Organisms that closely resemble their ancient ancestors, the ones who have not evolved a whole lot, are the ones who got it right the first time. I think that, if they had the propensity to do so, they are the ones who should take credit. They are the species that nature prefers. Okay, now that that is all out of me. What I really wanted to write about, sort of touching on development, is the idea that language has evolved so much even in the last few hundred years, and that the increase in technology and global communication has accelerated it. This stemmed from a conversation I had with my friend's eleven-year-old sister. I had asked what sort of books or novels they were reading in English class (she is in grade five, I think), and she told me they were reading Romeo and Juliet! I did not read that until I was in grade ten... until I was sixteen, at least. It was not a question of appropriateness to me, but rather of whether kids this age can really understand the language of Shakespeare. Come to think of it, though, I don't think I fully understood it in grade ten. Will it have any context for her? This whole thing also made me think about how we seem to condense everything we say, especially when telling stories: no more big, long epilogues... time and space are money; if you can't write it in 50 characters or less, don't bother! A strange attitude, but it's all around us, and I think it is heavily responsible for the lack of vocabulary witnessed in today's youth: "Y would u go thru the f-ort? wrds r $, we gotta sav'm". Really, though. It seems like the only words being learned, taught, and used are the ones that will make your life easier, the ones that let you get your message across faster and clearer. Words are no longer being chosen to elaborately describe exactly how you're feeling, especially now that there are emoticons for everything.
I think language development is coming full circle: we are slowly but surely eliminating all the words in our vocabulary that are superfluous and keeping only those that have concrete meaning; our language is becoming less and less metaphoric. Eventually, we will be back to just "Where nomnom? Me want nomnom" (this may be an exaggeration, but it illustrates the point). All in all, I think it is great that this young girl's grade five class is studying Shakespeare; if we don't give children the words early on, they will never learn them, and the words will be lost and gone forever.

Blog 10: My hypothetical switchboard model of speech production and perception
Don't be deceived: I have a long ramble before I get to how this blog relates to the title, but it all makes sense in the end. I am working as a research assistant in one of the psychology labs at Dal, and for the particular study we are working on, we have to run all of our participants through intelligence tests. The one that we use most often is the Wechsler Abbreviated Scale of Intelligence (WASI), which has two verbal components and two performance components. For one of the verbal components we have participants define a series of words, and in the second we have participants tell us how pairs of words are similar. Many of the words have multiple possible definitions for which we would issue perfect scores, and I should note that the participants thus far have all been children. I have been running these since last term, and only this term did I start to pay attention to the types of errors that people make when defining the words and generating similarities. Some of the definitions they provide are semantically very similar to the target definitions of the words. For example, the word "impertinent" is often incorrectly defined as "something that is very important" (pertinent), semantically related to the target because both definitions have to do with relative importance. A second example of an incorrect definition, in this case one that is phonetically similar to the target, is when children define "panacea" as "when all of the continents were attached together / when America and Africa were the same island," or something along those lines, mistaking the target word for "Pangaea." When they mis-define the word "intermittent" as "between two mittents," we can see how the morpheme "inter" is interpreted in isolation. All of these are real examples from children I have tested or whose WASIs I have scored.
These types of errors seem logical according to a connectionist model of speech perception and production, where all nodes associated with the definition of the target word are activated along with nodes sharing common semantic (example one), phonemic (example two), or morphemic (example three) components with that definition or word. When it comes to the similarities component, however, where the children are often asked to tell us how two seemingly opposing ideas are related (such as love and hate), the children often have difficulty producing similarities, getting stuck on the idea that the words are opposites. This emphasizes that we don't seem to have conscious access to which nodes are being activated; if we did, and if a connectionist model applies, we might be able to work backwards and find out which nodes are co-activated. We ask all of the children to do this for about 20 pairs of words, such as blue-red, grape-strawberry, photograph-song, etc., to which the children typically respond, "They are both colours/fruits/art forms," respectively. The children clearly understand the task, and understand that other words that are opposites have some similarities; for smooth-rough, for example, they will tell me accurately that both are textures. Sometimes, however, they will just say, "No. They are opposites." "True, but how are they similar?" "They aren't; they are just opposites; they are completely different." After writing my chapter on models of speech production, whenever I encounter this type of response rigidity, it makes me wonder whether the existing connectionist models could really apply to this situation. It makes me think that such a model would have to include some kind of node that describes the relationship between words (how they are similar, different, etc.) beyond their grammatical function (i.e., both are nouns/verbs/adjectives). Such a node does not exist in any of the models that I found.
This kind of node might act like the switch on a switchboard, narrowing down the possible associations between words. For example, there could be a node that allows activation of further nodes describing how two words are similar, and a second for how two words are different. The “different” and “similar” nodes would compete for activation. Thus, when accessing similarities between words, the nodes for both words (phonetic, grammatical, semantic, etc.) would be activated in addition to the “relation switch” nodes; if the “how are they similar” node is activated more strongly than the “how are they different” node, the children will respond correctly, but if this relative activation is reversed, the differences node will prevail and the child will only be able to think of, and fixate on, how the words are different. At the end of the similarities component, after I have technically finished recording responses, some of the children want to go back and “fix” their answers, and so to humour them I will ask them to go back and repeat the pairs they got stuck on. The results further support the switchboard idea: when a child thinks about something else for a while, or is allowed to do a few different similarities before returning to the word pair they were stuck on, they suddenly seem able to generate similarities between the words. It is clear that they have not gained any new knowledge over the course of the five or so minutes between attempts; rather, they seem to have a new perspective, a new insight, something that was stored in that little head of theirs all along but that, for whatever reason, they could not access. Perhaps, this time around, the similarities switch rather than the differences switch was more strongly activated, which led to propagating activation of the commonalities between the words as opposed to their differences.
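The competition I am describing can be sketched in a few lines of code. This is purely a toy illustration, not any published model: I am assuming a simple interactive-activation-style update rule, and the node names, inputs, and parameter values (decay, inhibition strength) are all made up for the example. The point is just that whichever "relation switch" node receives more input wins the competition and gates which relational description becomes available.

```python
# Toy sketch of the proposed "relation switch": a "similar" node and a
# "different" node mutually inhibit each other; whichever wins determines
# which relational description of the word pair the child can access.

def run_competition(sim_input, diff_input, steps=50, decay=0.1, inhibition=0.5):
    """Iterate a simple competition between two nodes; activations clamped to [0, 1]."""
    sim, diff = 0.0, 0.0
    for _ in range(steps):
        sim_next = sim + sim_input - decay * sim - inhibition * diff
        diff_next = diff + diff_input - decay * diff - inhibition * sim
        sim = min(max(sim_next, 0.0), 1.0)
        diff = min(max(diff_next, 0.0), 1.0)
    return sim, diff

def describe_pair(pair, sim_input, diff_input):
    """Return whichever relational description the winning switch node gates open."""
    sim, diff = run_competition(sim_input, diff_input)
    return pair["similar"] if sim > diff else pair["different"]

love_hate = {"similar": "both are emotions", "different": "they are opposites"}

# Strong input to the "different" switch: the child fixates on the contrast.
print(describe_pair(love_hate, sim_input=0.2, diff_input=0.6))
# On a second attempt the "similar" switch is more active, and the child succeeds.
print(describe_pair(love_hate, sim_input=0.6, diff_input=0.2))
```

On this toy account, the children who "fix" their answers a few minutes later have not learned anything new; only the relative inputs to the two switch nodes have changed between attempts.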

Blog 12: Course Feedback and beyond
Please understand that this is clearly not personal, and that I will highlight the things about the course that I would have changed while focusing less on the good things, because I feel you may already know what is good about the course. I don’t intend for this to be rambly; however, I fully expect it will reduce to that. I liked many things, and disliked some things, about many aspects of this course. Firstly, I don’t feel that a background in cognitive psychology is necessary for someone to follow along and do just fine in this course. The only reason I took cognitive was because it was a necessary prerequisite for this course, and I don’t feel that much, or any, of the material we learned in cognitive was essential here. In fact we skipped the language section in cognitive, and the cognitive course was not very stimulating, so if you have the opportunity to save many students many dollars on a course, please, in the future, make this an open, no-prerequisite course. Secondly, though I understand the effort to teach this course in an alternative way, I feel that not testing students on the material allowed students not to attend and also did not encourage people to pay attention. Thirdly, the only time I used the textbook was as a reference for my online chapter, and because we were not being tested, I found very little incentive to read it; if you assign a textbook that is so expensive, please use it or refer to it, or do not assign one at all, especially one that could not be purchased second hand. I think that the weekly blogs are a good idea; however, I know that I, and several classmates I spoke with, had a hard time thinking of topics to discuss. Perhaps you could come up with a few reflection questions and have participants choose from them or just write about whatever they want. I think that giving people a target question would definitely help direct the blog posts.
Perhaps you could do this by reserving a few minutes at the end of class, because most often we ended a good ten minutes early, to have an open discussion about something controversial (this might be a good time to address the issues that came up in the debates instead of actually having debates). About the debates: having group projects towards the end of a 3rd/4th-year course is not wise. People get bitter and angry, are not friendly, and are far too busy to be coordinating with strangers. Group presentations are tough not just dynamics-wise but organizationally, and the benefit of the “team-work” experience is not outweighed by the inconvenience. The debate topics were okay, but often required much more background than could be delivered in the eight-minute talks. I think it would be more effective and engaging to address those questions in class, but not in debate form. Debates are also unnecessarily stressful. Using Wikiversity, despite being a neat thought, turned out to be very stressful as well. I feel that I struggled more with learning how to write HTML than I did with learning things in the course. It was very frustrating trying to get things to work, insert figures, etc. Please don’t have people’s marks depend on their ability to struggle with formatting webpages; it is not a good use of our time. Wikiversity, while making information accessible to the world, made earning marks feel inaccessible. I do appreciate that there were many opportunities in the class to get good grades and/or to make up grades; however, I think the blogs in addition to a few exams would be appropriate. Additionally, I am sure that you have heard this a bunch and have marked accordingly; however, as far as the debates go, if you do end up running them again, have the sign-up dates and topics available from the beginning of the semester so that people can plan ahead.
Many of us did not have enough time to fit it into our schedules, and thus we did not do our best, because we were not able to devote enough time to it. It was a good idea to have us give peer feedback, but for some of us the peer feedback did not help in any way to improve our chapter outlines; in those cases, for individuals who got non-constructive or limited feedback, further feedback should have been provided so that everyone got the same amount. Overall, I think that the course became annoying with all of the little assignments, which in the end disengaged or jaded people. I think that the way the instructor was very available and approachable was great, and most feedback on assignments was also good and constructive. The course material felt VERY repetitive. I think that tests would be good, because the course material is exciting, but it is really easy not to get involved, and it is hard to get interested without getting invested. I feel that the general flow of the course was good and the outline was well constructed; however, it was hard to get into the material knowing that you would not be tested on any of it directly. Good luck with the course next year. I hope this is useful.