Imagine your brain as a bustling city. The grey matter in your head could be compared to a dense network of mixed-use buildings, with snaking utility lines and connecting sidewalks throughout. Steam delivers heat through pipes out of the main utility centre, lights blink on and off, and information zooms around at a blazing speed. Cranks turn to keep everything motoring along. That's a plausible visual for what's happening as we process information, cues and signals.
Those cues can be ever so slight: the intonation at the end of a sentence (in English, lilting up for a question, for example) fires up a signal in one part of the brain, but another part lights up when we hear a sentence spoken in a bored monotone, like, maybe, in the classroom of a tired lecturer.
While subtleties of speech have different effects on the brain, age is a factor as well. Brain function in a child is markedly different from that in a fully grown person. In one test, children hear a passive sentence read aloud, for example, "Mary was patted by John" (as opposed to "John patted Mary"). Hearing the passive construction, children have trouble understanding the relationship between patter and pattee, that is, between John and Mary.
These differences have given researchers insight into how the mind acquires language. Linguists know that in humans, there is an innate system in place at birth that allows us to acquire language and to learn to speak within the first few years of life. It's a kind of software for learning, and our brain is the hardware that uses it. We are the only species with such a complex system. Bees "dance" to share information on the best way to the hive; dogs declare rank through tail movement and posture; we go yackety-yack.
Still, there is much about the mind that we have yet to discover. To that end, the people behind a new lab being established at Carleton University are planning to plunge further into what's known as neurolinguistics—language and brain research. The Centre for Cognitive Neuroscience: Language and Brain is scheduled to begin operation in the fall.
The lab, on the ground floor of Paterson Hall, has a small waiting room with a vanity and sink nearby so research participants can rinse the gel out of their hair after stripping off the skullcap they wear during experiments. The cap is smeared with the stuff to ensure a strong connection between the electrodes inside it, the scalp, and the brainwave-reading machine at the other end. Researchers measure the electrophysiological responses to stimuli such as clips of sentences being read in different tones and different voices.
When you're having a conversation, whether you're the speaker or the listener, different regions of the brain are lighting up in response, like some kind of neural Ping-Pong game. Researchers are trying to find the precise areas in the brain that activate when we speak, process what we hear, and comprehend.
In another room, an acoustic chamber allows researchers to record conversations between participants, the most natural form of speech (as opposed to the rigidity of a recorded interview); it's analytic gold for linguists.
In another room, the futuristic-sounding EyeLink 1000, a machine that tracks eye movements by the millisecond, offers additional data about how we input and process information.
Researchers are confident that the data from voice sounds and from eye movements, when put together, will give valuable insights into the relationship between language and brain function.
The ramifications of this research are far-reaching, with reverberations in psychology, teaching, pathology, philosophy, computer science and medicine. For example, if we know which part of the brain is responsible for speech production and which for comprehension, that knowledge could bring a better understanding of brain-injury disorders such as aphasia, which hampers a person's ability to express themselves and to understand the speech of others.
The lab is the vision of Masako Hirotani, a psycholinguist with degrees from the University of Tokyo and a PhD from the University of Massachusetts, Amherst. Three years ago, when she came to Carleton to teach, Hirotani began the groundwork for the lab. In November 2008, she was awarded $140,000 from the Canada Foundation for Innovation, an independent corporation created by the Government of Canada to fund research infrastructure. Hirotani is also a research associate in the department of neuropsychology at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, a highly regarded fellowship, to put it mildly.
Her research focuses on intonation and other speech prompts—what's known as the "sentence melody."
"In English, we have different kinds of intonation patterns, and when we speak, we give cues, almost like a pause in music," she explains. "The cues tell us that the speaker is continuing to talk."
As a gateway into further study of linguistics, Hirotani has been co-teaching a 100-level course, called The Mysteries of Language, with professor Ida Toivonen, whose research interests include theoretical syntax and Inari Saami, a language spoken in northern Finland by fewer than 300 people. Toivonen came to Carleton with a doctoral degree in linguistics from Stanford, after a BA from Brandeis, where she double-majored in French language and literature, and in linguistics and cognitive science.
Hirotani and Toivonen came up with the idea of a kind of pop-science course while leading recruitment tours for high school students. They were compiling digestible facts about language research for young people and realized that many intriguing aspects of language could form a foundation course for linguistics.
They gave the course a catchy title, and students lapped it up: with a course cap of 236 students, it takes up all the seats in the Azrieli Theatre lecture hall. The "language and society" lecture examines "Black English." Some people think it's just sloppy usage of mainstream English, but it's actually rule-governed like all languages, with a sophisticated verbal system capable of distinctions that standard English doesn't make.
Where is the border between a dialect and a language? Often it's determined along geographic, political and sociological lines rather than linguistically. If you compare the Cape Breton brogue with the slick vernacular of South London, U.K., you'll understand why speakers of two different dialects of English may have trouble understanding each other.
In another lecture, Toivonen tells of a linguist who spent a year looking at the speech patterns of Detroit high school students. The linguist classified certain social groups as jocks, burnouts and in-betweens and found marked patterns in speech for each group. Measuring their vowels, she found clear differences.
"Of course, they wouldn't tell you, 'I'm a jock, therefore my e-vowels are slightly fronter than the burnouts,' " Toivonen explains. "But these are actually really exciting findings."
She also has a saucy example using the f-word to relate a grammatical nuance, and it's guaranteed to get most students to perk up mid-class. The point about the f-bomb example is this: "Native English speakers have very strong intuition about what is okay grammatically and what is not. You learn rules and patterns subconsciously," she says.
Many regions of the brain are uncharted territory, leaving plenty for academics to uncover. Hirotani notes the growing number of prominent young researchers in Carleton's faculty, many of whom have just finished their PhDs. A new master's program in cognitive science, set to launch this fall, will bring more students into the new Centre for Cognitive Neuroscience.
The lab will join a surge of linguistic and para-linguistic studies going on at other universities. At McGill's Montreal Neurological Institute, researchers are studying the molecular trace, as in the actual physical change that occurs in the brain when a memory forms. McMaster University is creating a humanities-based multidisciplinary lab in neurocognitive linguistics to study interpersonal communication.
"There are so many things we don't know about the brain," Hirotani says with a mixture of frustration and excitement. "We have so many things we have to do."
Did you know?
Factoids from the course pack for LALS-1100A, The Mysteries of Language
So, there are 100 "Eskimo" words for snow? Not really; that's actually intellectual sloth. —"The great Eskimo vocabulary hoax," Geoffrey K. Pullum
A native speaker of English has 60,000 or so words stored in memory. The left temporal cortex plays a crucial role in storage and retrieval of language. —"On Broca, brain and binding, a new framework," TRENDS in Cognitive Sciences, Peter Hagoort
So, like, Valley Girl discourse is really becoming part of the fabric of our language: sentence intensifiers such as "like" are on the rise. "Very" is moving out of favour in Canadian English, only to be replaced by "really" and "so." —"So cool, right: Canadian English Entering the 21st Century," Canadian Journal of Linguistics, Sali A. Tagliamonte
Although "eh" has become an icon of Canadian English, there are actually dozens of ways Canadians end their sentences: "right," "whatever," "stuff like that," "you know," "or something," "or anything" and "so" are popular utterance tags.
—"So cool, right …"