User:Emilypmcguire

Hello, I am a fourth-year Honours in Neuroscience student and I am interested in pursuing a career in audiology and/or speech-language pathology. How humans process and make sense of language is by far the most interesting topic I have come across in my years at Dal. Looking forward to this course!

Blog Post 1
So far I have very much enjoyed the lectures presented by Dr. Newman. While the history lecture and the brain imaging lectures were reviews for me (having taken previous courses such as cognitive psychology, developmental psychology and perceptual processes), I found that they refreshed the important information that I will need to be on top of for this class. The lecture on language and thought was my favourite so far - it raised interesting questions about language that I had never considered before. I had never approached the idea of thought as being defined or regulated by language, or considered how thought may differ when a person uses a different language system. The question of whether or not thought can exist without language, as was brought up in class, is still floating around in my head. When this question was posed, I immediately thought of Helen Keller (as did other people in the lecture). The story of Helen Keller is one of my personal favourites and has been ever since I read a biography of Anne Sullivan, the woman who was able to work with Helen and teach her to communicate by spelling onto her hands. Helen lost her hearing and her eyesight around the age of 19 months, at which point most typically developing children have acquired some aspects of language production, such as forming basic words and the ability to follow simple instructions, and it is widely accepted that a child of this age can understand many more words than they can produce. So what would happen to the development of language if suddenly the ability to hear it, and to see it being produced, were taken away? If language development were cut off, would this also cut off thought? Are these two phenomena dependent on each other as they develop, or does one drive the development of the other? These are interesting questions, but not ones that I think can be answered using Helen Keller's life as an example.

What many people do not know is that when Anne Sullivan met Helen Keller, Helen was not without language. Helen had a system of "homesign" (which, interestingly, is part of the chapter I will be writing later!!) with which she was able to communicate with her family. Sign language, whether it is invented or formally taught, is just as much a language as speaking or writing. When Helen lost the ability to see and to hear, it became impossible for her to learn to speak (unless formally trained, as she was later in life). Sound, vision, and their uses were no longer available to her, so she invented a new way to communicate with those around her. Interestingly, when we think of sign language, it is a very visual experience - yet Helen invented a homesign language without the ability to see what her signs looked like. What does that say about her internal understanding of her world? What does this say about her thoughts? Did she come up with signs to express the thoughts and ideas in her head, or were those thoughts and ideas facilitated by her ability to make signs? Did she spontaneously invent specific signs, or did natural motions, as she tried to express herself, become reinforced and "learned"? Or both? Did her signs approximate the visual appearance of things in her world, as ASL tends to do (the sign for "cat" involves motions indicating whiskers; "red" indicates one's lips)? Or did they represent things that she could directly experience, like texture, taste, weight or purpose? Or, in signing properties of objects that she could experience, did she unknowingly create an intuitive visual experience for those around her? One could imagine that even without sight or hearing, a cat's whiskers could be felt and indicated in the same way a conventional ASL user would indicate them. Did she have a theory of mind, in which she was aware (from memory?) that other people could "see" things and "see" her, so that by signing for an object she could communicate a thought?
This seems far-fetched, in that a 19-month-old child who loses vision probably does not remember what it is to "see", but it is still puzzling. If the most important things to communicate were nouns and possibly verbs, did Helen have any concept of grammar (before Anne Sullivan's work)?

Anyway, before I ramble on forever, my point is that Helen Keller never really lost language and she never lost the ability to think. Whether her success at communicating even in the absence of sight and hearing indicates that language is driven by thought and the internal desire to express oneself is still something I am considering.

Blog Post 2
Week two was again a bit of a review for me, but my interest was still captured during the lectures - maybe not by the big picture per se, but by some small details which I will comment on for my second blog post. Firstly, the point that Dr. Newman made about determining "language areas" of the brain has stuck with me. It is so true that every time we (or at least, I) see an image of the brain, our focus is on the gyri and sulci and the boundaries they form. Of course, it is just too easy to divide the brain up into functional parts based on the visually pleasing lines made by the folds of the brain. Using an illustration of the fiber tracts and their projection areas in the cortex, the fallacy of this method was made very clear. If you took a sheet of paper, coloured it into 100 different "compartments", then balled it up, the likelihood of each compartment being nicely represented by the folds and bumps on the surface is pretty low. Now I know that's an exaggeration, and sometimes dividing functional brain areas by surface anatomical landmarks is fairly accurate, but what I hope to see in the future, as computer imaging and modelling progresses, is this: a functional map of the unfolded cortex, based on many (many, many) subjects, datasets, and imaging methods, that eliminates the potentially misleading folds and grooves and allows us to see the "page" as it would be without its convolutions. Maybe I am dreaming - I don't know how hard it would be to get a computer program to take the cortex and flatten it out - but I think it would be illuminating (does anyone know if this has been done already??).

Secondly, I want to share with you a nerdy moment I had while browsing some of my favourite internet pages...Hopefully at least some people have heard of the site damnyouautocorrect, a site for iPhone users to submit funny autocorrections their phone has placed in their texts. Some of these are funny because they have turned innocuous messages into something crude, and some of them are funny just because the word is clearly not supposed to be there. The thing about this site is that it is just so universally entertaining that I think it must be tapping into something natural and common to most humans. Also, the more of them you read, the funnier they get. I think there are videos on YouTube of people helplessly laughing after spending some time on the site. Anyway, my nerdy thought was this: what if the N400 is important to this perception of "funny" and "entertaining"? The autocorrect errors violate the semantics of where the sentence or conversation is going. Without this blip in our brain waves (actually, I should say "without the mechanisms that cause this blip in our brain waves"), would we be able to tell that these mistakes were hilarious? I realize that I am taking good hard neuroscience data and twisting it far outside the box, but I think it is important (since we touched on lesions and deficits as well) to consider just how many aspects of our lives are affected by the mechanisms we use to process language - if we look past the big picture of losing our ability to communicate efficiently, we see that other things which make life enjoyable, like music, humour, and some other art forms, would also be lost. People who lose some part of their language abilities truly have a mountainous challenge to overcome, which is sadly impossible in some cases, and this is why I think that language research is one of the most important, and definitely the most interesting, areas of neuroscience that I have come across.

My apologies for the rather haphazard thoughts I have thrown together this week - this concludes my post.

Blog Post 3
First, a comment on my research for my book chapter: I keep getting distracted in my literature searches by papers that document word structure in different pidgins and Creoles! Maybe it comes from being (nearly) trilingual (English, French, Italian) and being well-read enough to recognize other languages like Spanish and German, but the more I read, the easier I find it to look at a pidgin word or sentence and recognize which languages contributed to its formation. At the very least, I can identify the main contributing language (the superstrate language) and sometimes the substrate languages. Here is an example of an English-based pidgin: dis smol swain i bin go fo maket; which, translated, says "this little pig went to market". Here is a German-based pidgin: ja fruher wir bleiben; it means "yes, at first we stayed". Here is a French-based pidgin: mo pe aste sa banan; which says "I am buying that banana". The English pidgin is easy to recognize both by its words and its grammar; the word "ja" and word forms like "fruher" tipped me off that the next one was German; "sa banan" appeared to me to be a permutation of "ça banane", a slightly grammatically incorrect way of saying "this banana" in French. It makes me wonder if multilingual people would be better at picking up a pidgin, or at creating one amongst themselves. If you marooned a group of bilingual Canadians (English and French) with a group of bilingual Israelis (Hebrew and Arabic), would you get a pidgin that was a combination of all four languages? Or would you get a pidgin combining the two languages that most closely resembled each other in structure and form? Or, alternatively, would you get a pidgin combining the two languages that turned out to be slightly dominant in each group? Say the Canadians chose mostly French to communicate among themselves and the Israelis chose mostly Hebrew - would the pidgin then be French/Hebrew, or would there still be bits of English and Arabic in there too?
If there were the same number of people in each group, how would one language become the superstrate and the others substrates? As far as I have read, the superstrate/substrate structure always occurs in pidgins, but I am also assuming that having equal numbers of speakers of different languages in a heterogeneous group of people is rare if not impossible...hmm. If only there were a way to create a pidgin under strict laboratory conditions...the neuroscientist in me comes out!

I would also like to leave a comment with regard to voice recognition/voice activation/direct voice input technology: I think it is ridiculous that any company has tried to market voice recognition software that you apparently don't have to "train" or calibrate. The variations in people's speech production are practically infinite, and although (as we saw in lectures this week) efforts to categorize our speech sounds have been impressive, they still fall short of explaining how we are able to accomplish all our feats of speech comprehension. No computer has yet been created with the computational power of the human brain, but what I think we have been able to create (from my limited knowledge of computers) are programs that can accumulate "top-down" information and apply it to other data inputs. This is why you can train a program to know YOUR voice and recognize YOUR words: so long as it can retrieve and match information it has stored, it will do all right. But the slight variations between how you say "cat" and how your friend says "cat" would leave the program unable to cope. These programs can't generalize. And since WE don't even know precisely every parameter that influences how we perceive speech, or all the ways our speech perception can be influenced and how those mechanisms work, how can we hope to construct a machine to do it for us?? A machine is only as smart as its inventor, and therefore I think it is obvious that until another few decades of research into speech perception have gone by, any voice recognition program can only be functional for the speech patterns it can be taught. Buying a program that claims to recognize anyone's speech without training is a waste of money.
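To make my point about "training" concrete, here is a tiny toy sketch in Python of the store-and-match idea I described. The words, the feature numbers, and the whole setup are completely made up for illustration (real recognizers are vastly more sophisticated), but it shows why a program that only matches stored templates can recognize YOUR "cat" and still trip over your friend's:

```python
import math

# Toy "trained" recognizer: the speaker has recorded templates
# (here, invented 3-number "acoustic feature" vectors), and new
# input is matched to the nearest stored template. Everything
# here is illustrative; this is not a real speech API.
templates = {
    "cat": [1.0, 0.2, 0.7],
    "hat": [0.9, 0.8, 0.1],
}

def recognize(features):
    """Return the stored word whose template is closest to the input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda w: dist(templates[w], features))

# The trained speaker's slightly varied "cat" still matches:
print(recognize([1.0, 0.25, 0.65]))  # lands on the "cat" template
# A friend's rather different "cat" can land on the wrong word:
print(recognize([0.85, 0.75, 0.2]))  # lands on the "hat" template
```

The program "does all right" as long as new input stays close to what it has stored; it has no way to generalize beyond its templates, which was exactly my complaint.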

Blog Post 4
Unfortunately, this week has been very lacking in psycholinguistics. I was unable to make Monday's class, the university closed right before class on Wednesday, and Friday was Munro Day! I had a look through the lecture on morphology, though, and went off on a tangent while trying to practice separating free and bound morphemes in common words. I found a PDF document online that gave me a list of words to practice on, such as "creating", "seaward", and "astronomer". Most of them were pretty easy, but it was words like "astronomer" that gave me pause. Evidently, -er is a bound morpheme. But what about the rest of the word? In English, I would separate it into "astronomy" and "-er", meaning someone who does astronomy. But I also know that I could separate the word into its Greek roots, "astro" and "nomy", plus "-er", breaking it down into (loosely) someone who deals with the laws of the stars. Most people know that "astro" means something related to the stars, but it has no English meaning when it stands alone. Are language segments like "astro" and "bio" bound morphemes? They don't communicate a grammatical change or a meaning/class change, at least not as these were defined in lecture. They can't be free morphemes because they don't mean anything on their own. What do linguists do when faced with words that are clearly heavily based on other languages and therefore cannot be separated into regular English morphemes?
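Just to play with the exercise, here is a toy morpheme segmenter in Python. The suffix list is my own tiny made-up sample, and it ignores the spelling changes real morphology involves (astronomy becoming astronom-er), so it is only a sketch of the "peel off the bound morpheme, keep the rest as the free morpheme" idea - which is exactly where words like "astronomer" break the tidy picture:

```python
# Toy segmenter: strip one known bound suffix and treat whatever
# is left as the candidate free morpheme. The suffix list is an
# illustrative made-up sample, not a real morphological analyzer.
SUFFIXES = ["ing", "ward", "er"]

def segment(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix):
            return (word[: -len(suffix)], "-" + suffix)
    return (word,)  # no known suffix: treat the whole word as free

print(segment("creating"))    # ('creat', '-ing')
print(segment("seaward"))     # ('sea', '-ward')
print(segment("astronomer"))  # ('astronom', '-er')
```

Notice that stripping "-er" from "astronomer" leaves "astronom", which is not a free morpheme at all - the naive rule gives no place to put roots like "astro".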

In the same interesting PDF file was a reference to Esperanto, which I had never heard of before. In case you haven't either, Esperanto is a constructed language invented in the late 1800s by a man named Zamenhof, whose goal was to create an international language in order to bring harmony and understanding to people from different countries. This caught my attention because this, in essence, is an intentionally created pidgin. An artificial pidgin, even. Apparently there are some thousands of native Esperanto speakers in various countries, making it an artificial Creole. Cool. I will definitely be including this in my textbook chapter on pidgins and Creoles!! I am especially interested in which languages he based Esperanto on, how they contributed to its structure, and whether or not it is indeed an intuitive international language. What's the range of Esperanto vocabulary? How did he make words?? How did he decide which morphemes stood for gender, plural, and past and present tenses?? Where can you learn and use this language? Does learning Esperanto help you learn the languages on which it is based? Excited to keep reading and to write about it in my chapter!

Blog Post 5
This week I would like to write about my progress on my book chapter, mostly because it hasn't really turned out the way I wanted it to and I am not very satisfied with the first version that I have turned in. Firstly, most of my successful attempts at finding good primary research articles turned up information on homesigns and not on pidgins and creoles. That, coupled with the fact that sign languages are a personal fascination of mine, led to my first version being heavy on the homesign information and light on the pidgin/creole info. In my final copy I hope to balance it out a bit more. I humbly admit that most of my problems with finding good information on pidgins and creoles are due to my own reluctance to go to the library and find information in (shudder) paper form, as most of the resources that popped up in Google Scholar were actual books and not available for download. I will rise above my laziness and spend some time in the stacks when I am writing my final version. Secondly, I would like to add an entirely new section to my chapter: a kind of summary of how pidgins and creoles are comparable to homesigns, how their developmental patterns are similar, and what key points about language acquisition they all support. The information is all there right now, but not in an obvious way, and I feel it leaves the reader floundering around a bit. I would just like to tie it all together and make a concluding statement of sorts. Also, I want to add some neat little asides (like the one about Esperanto) and maybe some pictures, just to make it more interesting and more textbook-like. However, I am having some difficulty with Wikiversity's wiki formatting, in that I don't know how to make tables or put things in boxes or insert pictures while citing them properly. Does anyone know if there is a tutorial for this kind of stuff somewhere? I have looked at some other people's pages and they make mine seem...lame. Any advice on formatting would be greatly appreciated.
On my language radar this week is the story of a Canadian soldier who learned to speak Pashto, one of the main languages spoken in Afghanistan. Vancouver Sun story here: http://www.vancouversun.com/news/Alberta+soldier+draws+stares+Afghanistan+speaking+local+language/4275343/story.html This is a nice story about breaking down the language barrier and how it opens people up and brings them together, even in a setting like war. Heartwarming, but what struck me about the story was that a Canadian soldier knowing how to speak Pashto was an exception, an anomaly in the military efforts overseas. Why is this the case?? What could be more helpful in peacekeeping efforts than knowing the local language?? The Afghan people now have to deal with many foreign military personnel - what could make it more alien to them than no one speaking their language? I personally think the apparent lack of effort on the part of the Canadian military to teach their soldiers the local language is not only irresponsible and counter-productive, it also shows a lack of respect for the people to whom the country belongs.

On a happier note, I just learned that one of the girls on BC's ringette team competing in the Canada Games is deaf! One of the boys on Winnipeg's speed skating team is deaf as well. Their teammates are making an effort to pick up ASL and have completely inclusive attitudes. It makes my day to hear stories proving that people with unconventional language abilities are able to be as much a part of a team as anyone else (not to mention their impressive ability to play competitive sports).

Blog Post 6
Well, not much to say this week - no class to comment on and not a lot of thought put towards school over the break! I did complete my chapter review and enjoyed that assignment - I chose to review a topic that I have some background in, and so I feel that I was able to make some constructive comments and suggestions. My own chapter has just been reviewed, and the comments are accurate and the feedback helpful. The reviewer pointed out all the shortcomings that I myself acknowledged, and so I know the direction I need to take it for the final version. In my scrounge for ideas for this blog, I came across an article on how learning a second language can protect against Alzheimer's. More specifically, the article reported that bilingual people were 4-5 years older when they were diagnosed with Alzheimer's. In seeming contradiction to this, the bilingual patients had more advanced brain deterioration than monolingual patients who were functioning at the same level. This means that the bilingual patients were better able to cope with the loss of brain tissue. The theory behind this improved functioning is that bilingual people must use their "executive control" more often than monolingual people, and it's this "exercise" of executive control that allows them to deal with the effects of the disease. I am going to look into this further, because right now the article sounds sketchy. But this is where my blog ends this week - I have nothing up my sleeve to write about, and rather than have you (my TA) read a long, rambly, poorly written blog entry, I am going to cut it off right here. Hope you had a good break!

Blog Post 7
Hello again. First of all, here is a link to the 2nd language/Alzheimer's article. Unfortunately, I haven't been able to find the actual journal article that I hope goes along with all of the popular press releases, but I continue to search. For this week's blog, I would like to discuss the math and language lecture, since we are no longer covering it in class. I went through the slides that Dr. Newman has posted, and although some of them are vague without his explanations (I will listen to the podcast ASAP), I found most of the points to be thought-provoking. One of the "huh" moments I came across was in the table comparing the arbitrariness, grammar, and innateness of language versus math. It is well known that we humans have an innate language learning capacity, but I am not sure that I agree with the term "innate number sense". I agree that we have the innate ability to differentiate between different quantities of concrete things, but I do not believe the capacity to assign numbers to things is innate. It is hard for me to believe that a child isolated from numbers and math would spontaneously create their own system, the way they would spontaneously create a homesign if isolated from verbal language input. That being said, how did numbers and mathematical operations come into being in the first place?? It hurts my brain a little bit to try to work out the logistics of this, but I believe (as is also reinforced in Dr. Newman's lecture) that language is what allows us to express our innate understanding of quantity. I also believe that language is what allows us to manipulate these understandings into more complex operations and abstract values. I think that math is dependent on language in the same way as any other thought process: without language, we have no concrete way of expressing or communicating about math. In my mind, it is a one-way dependency: one can have language without having math, but one cannot have math without having language.
Lastly, I have a question for you that you may or may not be able to answer: the last two slides in the math and language lecture have left me slightly confused as well as intrigued. What I THINK is being said is that the HIPS, which is used for numerosity judgements, number comparisons, and calculations, is also the part of the brain used for pointing and grasping...and in light of that discovery, we can consider mathematical abilities to be the internalization of representations of actions such as pointing or grasping, an internalization that has occurred over the course of evolution. In simpler terms, perhaps our ability to mentally count things is an internalization of our ability to point at or grasp different numbers of objects in sequence. Is that what those slides are talking about?? If so, that is one of the neatest perspectives I have ever heard on the thought processes behind math.

Blog Post 8
Yaaaaayyyyyyyyy gestures!!!!!!! That was pretty much my sentiment during Wednesday's class. Gestures and gestural language have always been a personal fascination of mine. And Dr. Newman began the class with my other love: a Warner Bros cartoon!! I find it entertaining that this particular cartoon has achieved fame as a psycholinguistic stimulus for the study of gestures :) but it's true - there are a ton of visual events happening in a very short space of time, and to describe them efficiently to another person, it is natural to use gestures and pantomime. Anyway, while I was listening to Dr. Newman's lecture, my mind went off on a tangent. Some researchers assert that before primates acquired spoken language, they used gestures and some primitive form of signing to communicate. I don't really agree with this - firstly, if you follow evolutionary pathways, many species that evolved earlier than primates use some sort of vocalization to communicate. It isn't language per se, but it is vocal and it does carry some meaning (think birdsong, lion roars, etc.). Also, early primate groups had to be able to communicate over long distances, and in that kind of situation gestures would obviously not work. I think vocal communication developed before gestural communication, although gestures may have been common when individuals were communicating in a group in a small area. However, it is interesting to think that maybe the ability to apply symbolic and abstract meaning to gestures came before the ability to apply symbolic and abstract meaning to vocal utterances. This again seems unlikely, though, since even chickens can be shown to produce vocalizations that carry meaning (I wish I could remember where I read this! They separated two groups of chickens, one which was allowed to see a certain predator and another which could hear the first group but not see them or the predator.
When the first group was presented with a fox and subsequently freaked out, the second group responded by trying to get to higher ground, like their perch. When the first group was presented with a hawk's shadow and freaked out again, the second group stayed on the ground this time and tried to find shelter under things. Clearly, the noises produced by the first group were communicating something about the nature of the predator.) Maybe, though...and this is stepping out on a limb that has no actual factual support, i.e. it is only slightly educated speculation...maybe it went like this: we evolve from Australopithecus into Homo erectus - our hands are now free. We can use tools, hold weapons and food. Our hands are also free to do other things, like point. We start using hand motions to accompany the limited number of vocalizations we can produce. Somehow (and this is where it gets fuzzy), both the gestures and the vocalizations begin to acquire symbolic, abstract meaning. Maybe at one point they were used equally frequently and skillfully to communicate. But for some reason, vocal language is the half that continued to develop and grow. Maybe it made more "sense" for language to be something that did not preoccupy the hands, since the hands were so busy being useful in other situations. You don't want to waste time signing about a predator when you could be using your hands to escape while screaming "TIGER!!" to warn your buddies. I am obviously simplifying this and exaggerating in my examples, but these are the kinds of things I was thinking about during lecture. I would love to observe a group of primates gesturing someday (and not my classmates!).

Blog Post 9
My 22-year-old friend had two strokes this weekend :( She lost language during the actual events but immediately regained it afterwards. She is very lucky to be alive and is on her way to a full recovery.

Blog Post 10
I finished up my learning exercise this week, and in doing some browsing of pidgin-related material, I came across a website full of educators' materials related to Hawai'ian pidgin/creole (I have put a link to it in my learning exercise section). I decided that, for fun, I would take their pidgin grammar quiz...and I scored 100%! It was simply a task of deciding whether a sentence was correct or not (not actually changing it to the correct form), but nevertheless...I don't speak pidgin, and my only exposure to Hawai'ian pidgin has been the few phrases and bits of information I have absorbed during my research for my textbook chapter. So, how was I able to do so well? Possibility: Hawai'ian pidgin/creole retains enough elements of English grammar for me to recognize when grammatical rules have been violated.
 * This would blow the relexification hypothesis out of the water, because English has been identified as Hawai'ian pidgin/creole's superstrate and, therefore, its lexifying language. According to relexification, this means that English vocabulary should be mapped onto the syntax and structure of the substrate languages (e.g., native Hawai'ian, Cantonese, Portuguese). It IS true that Hawai'ian pidgin/creole has a largely English vocabulary, but if that is the whole story, then English grammatical structure should not also be recognizable. There is no room in the relexification hypothesis for both the vocabulary and the structure of a particular language to dominate the final creole. I do not speak any of the substrate languages contributing to Hawai'ian pidgin/creole, so if it were their grammars and structures that make up the final creole, I should not have been able to do so well on the grammar tests. I believe that, since the vocabulary is mainly English-based, I am easily able to pick out when some English grammatical no-nos occur. For example, I can recognize when there is a double negative, or two words that both indicate past tense, just as you would recognize the error in someone saying "We can't come not to your party." or "We went walked to the park." The fact that I scored so well on these grammar quizzes indicates to me that both English vocabulary and English grammar have been preserved in Hawai'ian pidgin/creole to a recognizable degree (although, obviously, not to the exclusion of the other contributing languages).
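For fun, here is how crude an "English grammatical no-no" detector could be and still catch my double-negative example. This is a toy Python sketch with a made-up word list; real grammaticality judgement is obviously far richer than keyword counting, but it illustrates the kind of surface cue I think I was using on the quiz:

```python
# Toy double-negative check: count negative words in a sentence.
# The word list is an illustrative made-up sample; real English
# negation (and real pidgin grammar!) is far more subtle.
NEGATIVES = {"not", "can't", "don't", "never", "no"}

def has_double_negative(sentence):
    words = sentence.lower().strip(".").split()
    return sum(1 for w in words if w in NEGATIVES) >= 2

print(has_double_negative("We can't come not to your party."))  # True
print(has_double_negative("We can't come to your party."))      # False
```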

To round out the week, I present you with an awesome and entertaining article entitled 20 Obsolete English Words that Should Make a Comeback. My favourite is "freck: to move swiftly or nimbly", as in "I had to freck to get to school today" :D I think it's great how intuitive some of the words are, like "brabble", which I think is a portmanteau of brawl and squabble, while some are just weird ("kench: to laugh loudly"). At the end of the article you'll find more links to great untranslatable words from other languages - wait till you see what iktsuarpok means! So useful!

Blog Post 11
Well, this week saw the beginning of our series of debates. Monday's was a fairly controversial debate about Ebonics/AAVE, Wednesday's was the debate presented by my group about the Fast ForWord program, and Friday's debate was about oral and total communication strategies for hearing-impaired children. I'd like to make some comments on each of the debates for this week's post. I was dissatisfied with how the debate on Ebonics played out. The position put forth by Dr. Newman was very specific: The resolution of the Oakland, CA School Board, stating that Ebonics (African American Vernacular English) should be the language of instruction in classrooms where this is the dominant language of the majority of children, should be implemented in schools where appropriate. I think both sides had their arguments blurred by the overtones of possible racism and belief in the supremacy of the English language. I'm not saying anyone in the groups held any of those beliefs, rather that there was definitely an effort made to avoid them, and in doing so, they became the elephant in the room. I found it interesting that both sides agreed that AAVE is a legitimate separate language. If this is the case, then the question becomes: how do you accommodate bilingualism in the public school system? I wonder if the groups would have had different arguments if we were debating the validity of, say, French being the language of instruction in classrooms where it is the dominant language of the majority of children. I felt that our debate on Wednesday went fairly well. We were a bit freaked out by the seriousness of the debate on Monday, so we tried to give ours a more interesting, lighter approach. What made it fun for us is that none of us actually agreed with our position, which was that the Fast ForWord program for dyslexia is an effective treatment, and superior to other therapies. We agree that it can be effective, but not necessarily superior to other therapies.
There is a lot of research out there suggesting otherwise. Because of this, we had to think outside the box and approach the notion of "superior" from angles other than cut-and-dried effectiveness, like the "fun" aspect, whether kids like it enough to stick with it, how pricey it is, how accessible and accommodating it is, etc. However, the "against" group had well-formulated arguments backed up with good scientific evidence, and we were happy to acknowledge that. We lost. :)

Friday's debate was a bit unfortunate, because the position that deaf children receiving cochlear implants should receive only oral communication and not be given training in Total Communication/Sign Language had to be defended by only one person! He was at a clear disadvantage, having to introduce and explain the topic as well as make a good argument in a maximum of two minutes. That's nearly impossible. However, he did a very good job, especially in the rebuttal, at which point he was able to make use of the full five minutes and get some more arguments out. I have to say, though, that I agree with the opposing position, that deaf children receiving cochlear implants should also be given training in Total Communication/Sign Language, but that is based on my own prior knowledge, not just because the opposing team was convincing. As well, I think both teams lost sight of one of the crucial details in their positions: that we were discussing deaf children receiving cochlear implants, which is not ALL deaf children by any means. I forgot that this was even part of the argument until I looked back at the topics after the class, which indicates that the groups didn't make it very clear. I think that if the groups had made this detail more of a central topic, both sides could have had more specific arguments and presented a better debate.

Blog Post 12
Final blog post! For this last entry, Dr. Newman expressed interest in us evaluating the pros and cons of the course this year. I provided some feedback on my written course evaluation, but I will elaborate on it here.

Firstly, I think running a paperless 3rd year course was an ambitious undertaking. However, one of the pitfalls of the course design was that there were no exams, and therefore students were not responsible for any of the material covered in lecture. This means that a good portion of students chose not to attend lectures, and this was probably not even reflected in their grades. Adding to this is the fact that Dr. Newman posted podcasts and presentation slides for all of his lectures. While it is standard for profs to post lecture slides, adding podcasts means that this might as well have been an online course. For those students who chose to attend lecture, like me, there was still little, if any, motivation to pay attention. Why would anyone focus on a slew of information that they will never have to remember again? Before you go thinking "for the love of learning and genuine interest in the topic," I can assure you that my love of learning is profound, and I am genuinely interested in the neuroscience of language, but I mostly used lecture times to brush up on my solitaire skills on my computer. I believe that one of the main purposes of testing students is to hold them accountable for recalling information from their classes at some point, which ensures that they will at least memorize parts of it and hopefully, with proper test design, go beyond that into the realm of understanding and long-term memory. In past years, I think Dr. Newman had a better approach with assignments such as the ones he posted as guidelines for our learning exercises, which required students to think critically about the information presented in lecture.
My main disappointment with this class is that I really only learned about the topics covered in my textbook chapter and the topics covered in the debates. While I recognize that at this level I should be taking responsibility for my learning and paying attention in class no matter what, it is too easy to use this class as a "break" when I am not being held accountable for learning the information. That being said, I thoroughly enjoyed Dr. Newman as a lecturer. I think the debates were a novel and interesting way to get students more involved in the topics covered in this course, and they helped people apply psycholinguistics to "real life". I found Wikiversity to be a bit frustrating since I am not internet-savvy or computer-inclined, but I think Dr. Newman and Sarah took pains to avoid penalizing people based on their knowledge of computer-related stuff. I know a lot of people complained about some of the marking from the chapter and learning exercise assignments, but in my opinion that reflects laziness and a lack of effort on their part. All in all, I enjoyed the course, but I feel I am leaving it rather empty-handed in terms of knowledge. I applaud Dr. Newman's forward thinking, but I believe some changes are needed in order to ensure students are still motivated to put effort into this course. All the best!