User:Jnvincent

Jacqueline Vincent is the author of the chapter on Reading in the collaborative Psycholinguistics textbook project, based out of Dalhousie University.

= The Author =

I am a third-year honours neuroscience student at the University of King's College and Dalhousie. After graduating, I plan to work and travel for a while before eventually applying to medical school. My long-term goals at this point are to do a combination of clinical and humanitarian work, and to help improve the structure and accessibility of the mental health care system. When not working or studying I enjoy brains, philosophy, vegan cupcakes, poetry, Garrison stout, and the things that Joss Whedon has seen fit to put into the world.

= The Blog =

January 16th, 2011
This week's readings and lectures were meant to set up a foundation for studying psycholinguistics: we discussed what language is, the history of linguistic theories, and how language shapes thought, and we began looking at the neural basis of language. Attempting to think about the role of language in the nature of thought is a strange experience, to say the least. Not all of thought is linguistic - emotional experiences and visual imagery, for example, are cognitive experiences that aren't initially constrained by language. In cases like those, language acts as a constraint: trying to encompass a complicated emotional state in words can never quite get across the immediate, visceral aspect of experiencing it. But what is fascinating is that we choose to use a system that puts limits on our experiences, because getting what happens in one person's mind into someone else's requires some medium of translation. We need to communicate, and this system of using symbols to stand for other, often abstract, things has been working well for us. Different cultures have different sets of symbols and combinatorial (syntactic) rules, but all are generative, recursive, agreed-upon systems for communicating ideas and desires. We use language to communicate things that are more abstract and further beyond our immediate needs than animal communication systems do, and our symbols are highly complex and connotative. I really liked the phrase that Steven Pinker applies to human language: it is the "instinct to acquire an art," meaning that it is both innate and something that can be developed as a tool and a means of creative expression. It's interesting to wonder whether we could have developed some other form of communication that wasn't linguistic. I personally doubt it - until the day comes when we can read each other's minds or directly transfer thoughts between people, we will need some system where publicly understandable signs stand for private thoughts.

January 22nd, 2011
This past week in class, we explored the implementation of language in the brain and various ways of measuring it using technology like EEG and fMRI, and began looking at speech perception in terms of how we understand a stream of sounds as cohesive speech. I found the discussion of all the various types of aphasias to be very interesting - as well as terrifying, to be honest. Most varieties of aphasia come with a mix of ability and disability depending on which area of the brain is damaged. For example, a person with damage to the left inferior frontal gyrus - the typical location of Broca's area - can usually comprehend speech, but has difficulty naming objects or verbally communicating. Conversely, damage to Wernicke's area (a temporoparietal lesion) results in someone who produces speech fluently but lacks semantic coherence, ending up with word salad. Most people's basic syntactic and semantic language abilities are localized in the left hemisphere (though not all, given that 27% of left-handed people are bilateral or right hemisphere-localized for language), and the more affective and prosodic aspects of language are localized in the right hemisphere. Something about this struck a chord with me: I had never before considered the possibility of losing abilities that seem more and more intrinsic to our humanity the more thought I give to the psychology of language. I don't want to fall into the trap of implying that those with brain injuries and linguistic disabilities are somehow lesser people or "broken" - perhaps differences in language ability are just another example of neurodiversity that hasn't been given much thought. It's more that in the context of my own life, the prospect of experiencing global aphasia or losing emotional affect would feel horrifically isolating from the rest of the world. I wondered how actual people with aphasias had dealt with them: whether they had built new systems of communication for themselves, or devised ways to integrate themselves back into "normal" linguistic society. I was unsurprised to find that the latter was more common: a quick search led to many tips for improving communication with people with aphasias, working around linguistic disabilities to take advantage of intact functions. For example, the Aphasia Institute in Ontario has devised a system called Supported Conversation, using trained intermediaries with a variety of textual and pictographic tools to help conversations happen. The general pattern for recovery from an injury or stroke seems to be consistent speech therapy, beginning with teaching a patient concrete, immediately useful words and moving gradually toward more complex ideas. The focus is on helping a person feel able to communicate again.

January 29th, 2011
The most important thing we covered this week in class is the fact that chinchillas have categorical perception.

Well, perhaps that wasn't quite the take-home message of this week's lectures, which discussed aural and visual perception of language. Since the chapter I'm writing for the upcoming textbook is on reading and I've been immersed in background reading lately, I'm a little more excited by the new information about acoustic phonetics and perception. We looked at speech spectrograms, which illustrate the frequency and intensity of acoustic output over time. I found it so neat that even outside of unrealistic spy movies with voice-coded secret doors, human voices do indeed have "fingerprints" of a sort. Vocal output contains a fundamental frequency plus multiple formants, or resonances, each of which varies in a significant way depending on which sounds are being articulated. F0, the fundamental frequency, reflects the rate of vocal fold vibration and is relatively invariant for a given person: this is the source of their distinctive voice. F1 and F2 are the first two formants, covering lower and higher frequency bands, respectively; they vary with the location of articulation in the vocal tract, providing most of the linguistic information in speech. Interestingly, if the two frequency bands with the most acoustic information are separated and played individually, very little information can be gleaned from either one. Only with simultaneous or binaural presentation do they become intelligible.
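(For the technically inclined: here's a minimal Python sketch of how you could draw a spectrogram like the ones from class. Nothing in it comes from the course materials - the filename is a placeholder, and the analysis settings are just reasonable guesses.)

```python
# A minimal sketch of plotting a speech spectrogram, assuming a short
# mono WAV recording saved as "speech.wav" (a placeholder name).
# Formants appear as dark horizontal bands of energy that shift as
# articulation changes.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, samples = wavfile.read("speech.wav")   # sampling rate (Hz) and signal
f, t, Sxx = spectrogram(samples, fs=fs, nperseg=512, noverlap=384)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))  # intensity in dB
plt.ylim(0, 5000)    # the formants that matter for speech sit below ~5 kHz
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```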

I study vocal music in my spare time, and all this information led me to wonder what the spectrogram of a singer might look like, and how their formants would differ from those of a normal speaker or an untrained singer. Apparently, there exists a particular "singer's formant," with resonance around 2,500-3,000 Hz: if a voice shows a major peak around 3 kHz, the singer is said to have this formant. Resonance at this frequency allows a singer to be heard over an orchestra, whose sound begins to lose amplitude at frequencies above about 450 Hz (the same range in which a normal speaking voice begins to lose intensity). Thus, projecting over accompaniment requires something extra from the singer in terms of frequency.
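(Continuing the sketch above, one rough way to look for this peak would be to average the spectrum over a whole recording and compare the energy near 3 kHz to the rest of the voice band. Again, the filename and frequency cut-offs are my own assumptions, purely for illustration.)

```python
# A toy check for a "singer's formant": compare average spectral power
# in the 2.5-3 kHz region against the overall voice band. "singer.wav"
# is a placeholder for any mono recording of sustained singing.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs, samples = wavfile.read("singer.wav")
f, power = welch(samples, fs=fs, nperseg=4096)  # long-term average spectrum

formant_band = (f >= 2500) & (f <= 3000)   # singer's-formant region
voice_band = (f >= 100) & (f <= 5000)      # rough overall voice range

ratio = power[formant_band].mean() / power[voice_band].mean()
print(f"Relative energy in the 2.5-3 kHz band: {ratio:.2f}")
```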

The other thing I found quite interesting this week was the idea of categorical perception, and how it varies from person to person based on linguistic upbringing. Speech sounds vary on a physical continuum from one phoneme to another, and while we are excellent at distinguishing between categories, our within-category discrimination is quite poor. For example, modifying the voice-onset time (VOT) of a consonant will produce a slightly different sound, but a listener will not detect a meaningful difference until the VOT change crosses a certain threshold point, or boundary, between categories. The locations of these boundaries vary widely, depending on which sound discriminations are necessary in a speaker's native language. The example given in class illustrated this with Hindi consonants, which are easily distinguishable to native Hindi speakers, but much less so to speakers of a language such as English, in which the phonemes heard all fall into the same perceptual category. This is a really neat application of linguistic relativity: native language may or may not have a major influence on our conscious cognition, but it certainly shapes the way we hear every word we come into contact with. It would be interesting to investigate how bilingualism plays into this, and whether late-life language learning can modify perception...
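(To make the idea concrete, here's a toy simulation - not real data, and the boundary location and slope are assumed values - of an idealized English listener labelling a /b/-/p/ continuum. The stimuli change in equal physical steps, but the perceived label flips sharply near the boundary.)

```python
# Toy categorical perception demo: an idealized listener labels stimuli
# along a voice-onset-time continuum. The boundary (~25 ms) and slope
# are assumptions for illustration, not measured values.
import numpy as np

vot_ms = np.arange(0, 65, 5)        # VOT continuum in 5 ms steps
boundary, steepness = 25.0, 0.5     # assumed category boundary and sharpness
p_voiceless = 1 / (1 + np.exp(-steepness * (vot_ms - boundary)))

for vot, p in zip(vot_ms, p_voiceless):
    label = "/p/" if p > 0.5 else "/b/"
    print(f"VOT {vot:2d} ms -> P(heard as /p/) = {p:.2f}  ({label})")
```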

February 5th, 2011
Due to a snow day and a university holiday, this week was short on lecture material. Of course, given that the first drafts of our textbook chapters are due in less than a week, I’ve been traipsing through a virtual stack of PDFs about the psychology of reading that would kill many a tree if printed out. I knew that I would learn a lot about my chosen topic in the process of writing this chapter, but I didn’t realize how much insight it would give me into my own cognitive processes. I’ve been writing about the stages of reading, from logographic (recognizing words based on visual features) to alphabetic and orthographic (recognizing words by identifying letters and matching letter patterns with pronunciations), and it’s strange how I can recall looking at distinctive logos and “reading” some semantically equivalent variation on the actual name of the company (McDonald’s was “the fry store,” as my parents never tire of reminding me), just like the children described in the studies I’ve read.

Even stranger to realize was the fact that English is so incredibly hard to learn in comparison to other Western languages! Probably the most interesting study I’ve read so far examined children learning to read in English, German, Dutch, Swedish, French, Spanish, and Finnish, comparing their progress over time and how quickly they could read and comprehend text. English readers lagged ludicrously far behind readers of all the other languages through several years of elementary school, while the Finns and Swedes pulled ahead. The authors suggested that English readers were at a disadvantage because the relationships between written letters and sounds are so unreliable in comparison to the other languages. As well, English-speaking children tend to start reading instruction much earlier than children in other countries, which seems intuitive but may actually just lead to confusion. I barely remember the process of learning to read, since it happened so early for me and my parents had been reading with me since I was born. But I do remember trying to sound out words that simply couldn’t be sounded out, because English is such a confusing language – the day I tried to ask my parents what “psycho” meant was deeply confusing for everyone involved. It’s kind of surprising any of us end up literate at all…

February 12th, 2011
This week's lectures focused on word recognition and sentence processing. There are a variety of methods for measuring lexical access - our ability to use our existing knowledge of words to recognize and understand linguistic input - including lexical (word/nonword) decision tasks, word association, reaction time in object naming tasks, and the effects of semantic priming on output. As well, there are a number of factors that influence performance on measures like these, especially the frequency of the words involved, the type of word, and whether it refers to an abstract or concrete idea. The type of word refers to whether it is open- or closed-class, a distinction that makes perfect sense to anyone who speaks a living language, but is not often consciously considered. Open-class words such as English verbs and nouns can be added to or deleted from a language, and are not part of the "core," unchanging linguistic basis. Closed-class words make up this core, and are usually function words such as conjunctions and pronouns. (Interestingly, in Japanese the verb class is essentially closed: new verbs are formed by simply appending the verb suru, "to do," to a noun.) What I found most interesting about this was a different way of looking at our changing lexicon. Words are added to and subtracted from our vocabularies more often than we (or I, at least) ever notice: the latest slang changes on a weekly or monthly basis, and even the most commonly accepted words in general society from fifty years ago might be met with a blank stare or a raised eyebrow today. To me, this is an entirely neutral observation, but it makes me wonder about the purpose of linguistic prescriptivism: isn't the end state of extreme prescriptivism just the eradication of any difference between closed- and open-class word use? (...and really, does that kind of viewpoint exist much outside of Orwell novels and totalitarian history?)

One of the other things I wondered about as a result of this week's material was the idea of a corpus, and how multiple corpora differ. Word frequencies and patterns of linguistic rules are examined in corpora, which are representative samples of real-world text or speech. For example, we learned that Zipf's law (i.e. that a word's frequency in a corpus is roughly inversely proportional to its rank: the most frequent word occurs about twice as often as the second most frequent, three times as often as the third, and so on) was verified using a corpus from Wikipedia. That is fascinating, as is the idea of patterns like this emerging from a linguistic soup with no explanation (save for the fact that, by the principle of least effort, people tend to use the most frequent and easily understood methods of communication). But what is this sample of language that we base our investigations on? Online corpora come from search engines and compilations of articles, but a corpus of a living language is apparently compiled from many different sources. For example, the International Corpus of English is highly structured and includes a set number of texts from categories such as scripted/unscripted monologues and speeches, broadcasts, legal cross-examinations, press editorials, and letters. It would be interesting to listen in on the committee deciding how to structure a corpus sampling an entire language - of course, the widest variety of word use possible needs to be included, since linguistic changes between media (internet vs. print vs. spoken language) and register can be so drastic.
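(Zipf's law is easy to check for yourself. Here's a short Python sketch that tallies word frequencies in any plain-text file - "sample.txt" is a placeholder - and prints rank times frequency, which should stay roughly constant across the top-ranked words if the law holds.)

```python
# A quick Zipf's-law check: under the law, frequency ~ 1/rank, so the
# product rank * frequency should be roughly constant across the most
# frequent words. "sample.txt" is a placeholder for any large text file.
import re
from collections import Counter

with open("sample.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
    print(f"{rank:2d}. {word:<12} freq={freq:6d}  rank*freq={rank * freq}")
```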

February 19th, 2011
This week, things got complicated. Understanding the individual parts of sentences and texts is complex enough, but the theories of how exactly we parse sentences (break them down into their constituents in order to comprehend them) are even more so. According to Chomskyan X-bar theory, every phrase has the same skeleton: a phrase XP breaks down into a specifier plus an intermediate X-bar level, and X-bar consists of the head X - the element to which everything else in the phrase refers - plus a complement phrase. This structure is universal, but the parameter determining the order of the elements depends on the speaker's native language (a neat example of linguistic relativity!) There are a lot of different models of parsing, from those based on people's reactions to garden path sentences to basic dependence on elements' location in the sentence. The garden path model suggests that the parser commits to a particular sentence structure before the sentence is over, and uses that to comprehend incoming information. The module performing the parsing attaches as few new nodes as possible to the hierarchical "tree" of the sentence structure (a strategy called minimal attachment), and adds new input to the most recent open phrase (late closure) rather than putting in the extra effort to go back and attach new words to previous parts of the sentence. However, one study done in 1995 showed context effects that seemed to point toward a much more open process of parsing, one that doesn't commit to a single interpretation until enough information has been provided. When subjects were shown two possible versions of a referent, but the sentence didn't contain information specific enough to pick out one of them until after the referent had been mentioned, they were equally likely to look at either version before the point of disambiguation (what a great word). Presumably this issue is still under debate, though the model that makes by far the most sense to me is definitely the competition model: it allows for variation in individual parsing strategies based on native language, which imposes different weighting on components like lexical information, phonological input, and syntax. This model theoretically also leaves room for individual variation based on previous experience, cultural and emotional context, and day-to-day variation.
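(To see why minimal attachment can lead a parser astray, here's a toy Python sketch - just nested tuples, not a real parser or an implementation of any of these models - of the two structures a reader might build for the classic garden-path sentence "The old man the boats." The first-pass parse follows the usual strategy and hits a dead end; the reanalysis recovers the intended reading.)

```python
# Toy parse trees for "The old man the boats", represented as nested
# tuples of (label, children...). Purely an illustration of the two
# candidate structures.

# First pass: "the old man" is greedily parsed as one noun phrase,
# leaving "the boats" with no verb to attach to - a dead end.
first_pass = ("S",
    ("NP", ("Det", "the"), ("Adj", "old"), ("N", "man")),
    ("???", ("NP", ("Det", "the"), ("N", "boats"))))

# Reanalysis: "the old" = old people (noun), "man" = to crew (verb).
reanalysis = ("S",
    ("NP", ("Det", "the"), ("N", "old")),
    ("VP", ("V", "man"),
           ("NP", ("Det", "the"), ("N", "boats"))))

def show(node, depth=0):
    """Print a tree with indentation proportional to depth."""
    if isinstance(node, str):
        print("  " * depth + node)
        return
    print("  " * depth + node[0])
    for child in node[1:]:
        show(child, depth + 1)

show(first_pass)
print("--- after reanalysis ---")
show(reanalysis)
```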

Of course, these more individual factors come into play even more when considering discourse, which can't be properly analyzed without taking into account the speaker/writer and the goals they have in communicating. The coherence of a text is constructed entirely within the interpreter's mind, through their ability to link ideas together into something sensible - a process that is inherently dependent on one's own perspective on the text and the constructive nature of memory. The effects of personal context and individual relationships with language are things that we don't see a lot of in this material, and that I think would be really interesting to investigate further.

March 5th, 2011
I knew this week was going to be interesting as soon as I started the reading for the language and music lecture. It often feels to me as though music is just as complex and evocative a language as the one I speak the other 75% of the time. And, indeed, there are many similarities: spoken/written language and music are both generative systems, consisting of meaningful, recursive phrases that are built up from minimal sound units (phonemes and notes, respectively). As well, we have intuitive knowledge of both media. In terms of grammaticality, we understand syntactic combination and agreement in language, and nearly anyone – even a two-month-old infant – can distinguish between consonance and dissonance in musical ideas. And while we are certainly not as automatically fluent in musical production as we are in speech production, both appear spontaneously: children babble and make up songs even before they can reproduce music they hear.

Of course, language and music are not entirely the same thing: they have different basic functions, and music communicates much looser relationships and less universal propositions or meanings. A language is built upon a shared system of meaning that all its speakers agree upon, but music is fundamentally more an art form than a method of precise communication. Listeners may share very general ideas about the tone of music (e.g. in Western music, a minor chord evokes sadness), but ultimately the “meaning” of a piece is decided anew by each individual listener.

This distinction certainly makes sense in the context of Patel et al.’s 2003 paper, which asserts that music and language share syntactic processing resources in the brain, but have separate syntactic representations. A spoken phrase and a musical phrase can both be processed by breaking down their hierarchical structure, but their syntax does not refer to meaning in exactly the same way. Patel et al. predicted that a study of people with Broca’s aphasia (who usually show grammatical deficits) would reveal processing impairment in both language and musical harmony. And indeed, they found that patients showed impairment in processing complex sentences, and also failed to show the usual speed-up in response to harmonically related chords. This raises interesting possibilities about treatment. Could musical training become part of speech therapy for acquired aphasia? Would the combination of syntactic practice in two forms of communication help speed recovery? Perhaps treatment could take advantage of the brain plasticity demonstrated in professional musicians, who show increased grey matter volume in temporal areas associated with the mental lexicon. It would be interesting to see how much musical training would be necessary to see an effect, however, especially given that professional-grade ability does not come quickly or easily (or at all).

March 12th, 2011
By far the most fascinating part of this week was the time we spent covering the function and inner workings of speech-linked gestures. Although the models of speech production were quite interesting - involving much more strictly-ordered pathways than I had expected, from general idea to specific lexical, syntactic, and articulatory choices - I really enjoyed considering an aspect of communication to which we very rarely pay any conscious attention. Considering gesture as intimately linked with speech allows us to think outside the box of what we normally consider linguistic or non-linguistic. It was particularly neat to learn about how people have categorized the different types of gestures that occur under various circumstances or for certain purposes: for instance, the use of "beats" to emphasize certain points, which can be very useful in giving a speech or presentation. Iconics are transparent symbols linked to the physical objects that the corresponding speech describes, e.g. a sweeping upward motion referring to a tall person or a steep hill. The gestures that seem most linguistic to me are the metaphorics, since they use physical motion to convey inherently non-physical, abstract ideas and concepts. Of course, there are many more varieties of gesture, which occur in different combinations depending on the context and the speaker's intentions. Gesture can serve multiple functions in communication with a listener: gestures can be substantive (giving information directly related to the content of the sentence) or pragmatic (pertaining to the structure of the utterance, and how it fits into the social interaction).

Gesture is inextricable from spoken communication, it seems, whether a listener can benefit from it or not - multiple studies discussed in class showed that subjects continued to gesture as normal when visually separated from their conversational partner, whether on the phone or behind a screen. As well, people tend to gesture more when they are searching for a word on the tip of their tongue, indicating that gesturing assists lexical access. Thus, gesturing not only serves as a method of communicating nonverbal information along with speech to a listener, but also assists the speaker in organizing their own output. The variation in gesture between speakers is also quite interesting: I love the idea that each person has a certain style of gesturing and a set of go-to gestures that make up their library of physical expression. For example, my metaphoric gestures might have a different look than the opening, forward-moving gestures that Dr. Newman used in the lecture when giving a sort of meta-explanation of how he explains ideas. Similarly, cross-cultural differences apparently exist in the type and frequency of gestures, the most well-known example being the preponderance of gestures (particularly emblems, gestures which have free-standing meanings independent of speech) among Italian speakers. I'd be interested to learn more about how other factors might affect gesture, such as introversion versus extroversion, situational context (performance, conversation, or talking to oneself), or use of mood-altering drugs. As well, in linguistic development, the complexity of gesture develops alongside language and reflects changes in cognition, potentially making it an interesting behavioural marker for developmental delays.

March 19th, 2011
I have always thought that neural development was among the most lovely things I got to learn about as a neuroscience student, and it turns out that cognitive development is pretty fascinating too. Many of the basic concepts covered in the linguistic development lectures this week were already familiar to me from other classes or outside knowledge (e.g. the idea of innate linguistic ability, and the fact that perceptual discrimination becomes tuned to one's native language during early development). However, I was unaware of just how early linguistic perception begins - the fact that very young infants can actually show a preference for familiar prosodic contours (whether of the language they hear most often, or found in the story they heard most often while in utero) is amazing. By only six months, infants begin to understand language patterns, and can detect similarities between types of stimuli even when they're presented in different voices or with different intonations. Perhaps what I found most neat about language development in young children was the amount of instinct that is involved on the part of both the child and the parent. "Motherese" - the distinctly exaggerated, slow, drawn-out style of speech that parents use when talking to infants - actually has a lot of instructional utility. Exaggerating the vowel sounds found in a child's native language eases the process of learning to distinguish and reproduce them. As well, infants spontaneously begin to produce vocalizations very early on, which develop into identifiable vowel sounds by 3 months or so. Canonical babbling (producing strings of nonsense sounds made up of syllables present in the child's native language) also begins without explicit prompting, around 7 months, and leads into the appearance of first real words around a year of age. How neat that we have developed so that parents and babies are naturally inclined to use language in the way most conducive to learning!

There are many interesting implications that come along with these early abilities, often hinging on the fact that (judging by studies showing longer gaze duration on pictures matching the meaning of a presented sentence) infants understand much more than their verbal development allows them to express. This assumption gives rise to systems such as baby sign language, which teaches parents to set up a simple system of signs with their child that lets them "talk" before true speech is possible. Baby signing has developed into a booming business, and is supported by highly positive anecdotal evidence. However, a 2005 review of baby-signing literature found that methodological shortcomings made it impossible to draw a firm conclusion regarding its benefits. Thus, until more research is conducted (particularly by researchers other than those profiting from publishing baby-sign guidebooks), I will take the miraculous potential of baby-sign with a grain of salt.

March 26th, 2011
I have always been slightly envious of my adorable French cousins, who grew up speaking English and French interchangeably with more ease than I ever attained through nine years of French class in an otherwise monolingual school. The focus of two of this week's lectures, bilingualism (or multilingualism), only increased this feeling - children who grow up bilingual may occasionally mix up words between languages, but enjoy a host of benefits in attentional control and metalinguistic awareness that serve them well later on. Unfortunately, there is a large difference between growing up speaking multiple languages and learning a new tongue later in life. It is far from impossible, as the many adult learners who achieve functional fluency can attest, but after an early sensitive period of acquisition in childhood, speakers' second-language (L2) performance varies wildly. Increased age of acquisition of L2 is associated with decreased scores on written L2 judgement tests, particularly between ages 2 and 16. Spoken L2 performance seems to show declines in grammaticality when acquisition begins after age 7-8, and the likelihood of a noticeable accent when speaking L2 increases with age of acquisition up to a plateau around age 20. Better performance with younger age of L2 acquisition may also be associated with more exposure to an L2-speaking school system. There are a number of hypotheses for why fluency comes harder to adult learners of a second language: it may have to do with decreased motivation (as compared to a small child, who must learn their native language in order to be able to communicate at all), or with the difficulty of overriding previous neural programming for one language's rules, or with a loss of neural plasticity due to hormonal changes...the list goes on. Regardless of the explanation, though, it does appear that later L2 learning is more difficult than L1 learning, and makes comprehension of syntactically complex or misleading linguistic stimuli more difficult than it would be in one's native language.

This pattern shows up in studies of neural function as well. One influential study (albeit one whose results should be interpreted cautiously, due to methodological limitations and the fact that they have never been replicated to the same extent) showed that later L2 learners had much more distinct areas of activation within Broca's area when imagining speech in each language. Conversely, early bilingualism was associated with significant overlap between each language's area of activation within Broca's area. However, later studies of language-related ERP components offer more hope to the prospective L2 learner. The LAN component (associated with fast, automatic processing and expectation of the next word category) is most sensitive to later age of L2 acquisition, and both the N400 and P600 components also show declines with higher age of acquisition. However, a 2002 study of native Spanish speakers learning English as L2 found that when both native and L2 English speakers were divided into low- and high-proficiency groups, the changes in ERP components initially attributed to late L2 learning actually tracked lower proficiency in both groups. This suggests that changes in neural activity are not due to a fundamental difference in the brains of people who learn a language later in life, but depend on how well the language is acquired. As well, the brain shows evidence of being able to comprehend and process new rules and vocabulary in a mostly unfamiliar language even before the learner is consciously aware of these skills, as seen in an ERP study of lexical decision in new students of French as a second language.

These findings are comforting with respect to things I have been thinking about and investigating on my own recently. If achievement of proficiency is not due entirely to chance, then perhaps there are things I can do to ease the path to becoming fluent in German (despite, at the ripe old age of 20, having never spoken a word of it beyond schadenfreude and Eine kleine Nachtmusik). In fact, it seems that one of the best things someone can do to improve performance on a new language is to learn another new one first! This effect has been found particularly with Esperanto: evidently, learning Esperanto first increases the speed of acquisition of another language. One study from the mid-1960s found that high school students who studied Esperanto for one year and French for three years had significantly better command of French than students who studied French for all four years. It would certainly be interesting to continue investigating this possibility, and particularly to see if the beneficial effects of learning Esperanto (or another language) first extend into adulthood.

April 2nd, 2011
This past week's classes consisted of listening to our classmates debate current issues in the real-life application of research in the field of psycholinguistics. My own group presented on Wednesday afternoon, arguing that the Fast ForWord program for treating dyslexia is neither effective nor superior to any other treatment. My teammates and I were surprised to find, once we began our research, how overwhelming the evidence was to support our side of the story. The biggest issue for me was the fact that much of the research supporting the efficacy of FFW is published and/or sponsored by the company producing the program, and that the co-inventors are profiting hugely from the number of children using FFW as a result of their studies. It reminded me of a fascinating book I read last summer chronicling the development of the psychiatric health care system in America (Mad in America): much of the most shocking information was about the conflicts of interest involved in the original development and use of antipsychotic drugs. Researchers' financial ties to companies that can profit from favourable study results are often not disclosed, and this has a huge impact on the trustworthiness of results. The FFW situation is a fairly egregious example of this problem, but any partisan source of funding that isn't disclosed honestly is, in my mind, not okay.

The other two debates presented this week both focused on the practice of teaching minority languages: the first was about using Ebonics as the main language of instruction in a heavily African-American, Ebonics-speaking district, and the second was on teaching ASL to children with cochlear implants. There's a lot to think about following those presentations, but one particular thing has been on my mind. Although it was a necessary part of the presentations, given the demographics of the class, there were times when I was really uncomfortable that people were passing judgment on issues facing minority communities of which they were not members. A big part of working against oppression and discrimination is to give credence to voices that are often not listened to, and that means checking your own privilege and letting the viewpoints of, say, the African-American or Deaf community take the lead. Of course, this is difficult when many of the presenters are white, hearing folks, but I still think everyone needs to be aware of a) how our society affects who is usually listened to the most, and b) how to combat imbalances in that.

April 5th, 2011
Overall, I found this class both enjoyable and challenging. I'm glad that I've had class with Dr. Newman for this entire year, since he's an excellent professor who manages to make things interesting and convey his enthusiasm for the topics at hand without ever dumbing things down or condescending to us as undergrads. I also appreciated the departure from traditional class structure, especially with the goal of creating a real-world resource that other people could legitimately benefit from (rather than a term paper that will end up in the bottom of someone's filing cabinet and may as well have been written in a vacuum). The grading reflected the high standard of work that was expected of us, which I found completely fair.

My main issue with how this class was set up was the fact that so much of my experience of the course was predicated on a mostly-random choice of chapter topic at the very beginning of the semester. At that point, most of us had very little context for choosing a topic, and it soon became clear that whatever we ended up with was going to occupy the vast majority of our attention for the semester. That led to a lot of imbalance - there was so much fascinating material presented throughout the course, but aside from providing fodder for blog posts, there was pretty much zero motivation to actually study it or do the readings for topics unrelated to the chapter I was working on. While it was a really neat learning experience to complete a major writing assignment and see that finished product go out into the world, there's also a lot of appeal to the sample assignments from previous years that were posted on BLS. With the old format, I think my learning would have been more well-rounded than it ended up being from writing just a paragraph or two of reflection every week. That's why I liked having the debates - though by this point in the semester we're all pretty burned out on work for this class, it was still an opportunity to learn about another topic.

Miscellaneous things: The online textbook doesn't seem like something that can be re-created year after year, so I'm not sure what to suggest in terms of next year's class. More clarity in terms of expectations would be nice, or just a return to the old format. Also, Sarah's feedback was consistently thorough and really helpful, and I love love love that she posted the info about person-centred language (so important!) on the BLS page. This was generally a positive class experience for me and I would recommend it to others for next year.
