Scientists have found that the brain uses high-frequency gamma-wave signals to identify word boundaries in speech. This neural mechanism shows up clearly in native speakers and proficient learners alike, challenging the traditional view that language processing is divided into dedicated regions.
Speech sounds like it is made of words, but that impression has more to do with what’s in our heads than with what comes out of our mouths. In natural speech, there are no clear acoustic boundaries separating words; we pause about as many times within words as we do between them. This is especially evident when listening to an unfamiliar language being spoken: words often seem to “blur” together into one smeared stream of sound.
So how does the brain slice speech into recognizable chunks? Recent research by neurologist and neurosurgeon Edward Chang of the University of California, San Francisco, and his colleagues offers a clue. In one study, published in Neuron, the researchers looked at fast brain waves that flicker about 70 to 150 times per second through a part of the brain involved in speech perception. They found that the power of these “high-gamma” waves consistently plummets about 100 milliseconds after a word boundary. Like a blank space in printed text, the sharp drop marks the end of a word for people who are fluent in that language.
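The kind of analysis described above can be sketched in a few lines: isolate the 70–150 Hz high-gamma band, take its amplitude envelope, and look for the power drop. The snippet below is a toy illustration on synthetic data, not the authors' actual pipeline; the signal, sampling rate, and the FFT-based filter and Hilbert transform are all simplifying assumptions.

```python
import numpy as np

def highgamma_envelope(x, fs, lo=70.0, hi=150.0):
    """Band-limit x to the high-gamma range and return its amplitude envelope.

    Uses an FFT brick-wall filter plus an FFT-based analytic (Hilbert)
    signal -- a simplified stand-in for the filtering used in ECoG studies.
    """
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.fft.rfft(x)
    X[(freqs < lo) | (freqs > hi)] = 0.0   # keep only 70-150 Hz content
    band = np.fft.irfft(X, n)
    # analytic signal via the standard FFT construction of the Hilbert transform
    F = np.fft.fft(band)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(F * h)
    return np.abs(analytic)           # instantaneous high-gamma amplitude

# Toy demo: a 100 Hz "high-gamma" burst whose power plummets halfway through,
# mimicking the post-boundary dip the article describes (entirely synthetic data).
fs = 1000                           # 1 kHz sampling, plausible for intracranial data
t = np.arange(2 * fs) / fs          # 2 seconds of signal
amp = np.where(t < 1.0, 1.0, 0.2)   # amplitude drop at t = 1 s (the "word boundary")
rng = np.random.default_rng(0)
sig = amp * np.sin(2 * np.pi * 100 * t) + 0.05 * rng.standard_normal(len(t))

env = highgamma_envelope(sig, fs)
before = env[200:800].mean()        # mean envelope well before the drop
after = env[1200:1800].mean()       # mean envelope well after it
print(f"high-gamma envelope before: {before:.2f}, after: {after:.2f}")
```

In real recordings the interesting comparison is the envelope aligned to word onsets and offsets across many words, but the core operations, band-limiting and envelope extraction, are the ones shown here.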
“To my knowledge, this is the first time that we have a direct neural brain correlate of words,” Chang says. “That’s a big deal.”
In a different study, published in Nature, the scientists reported that native speakers of English, Spanish or Mandarin all showed these high-gamma responses to their mother tongues, but listening to foreign speech didn’t trigger the dips as strongly or consistently. Bilingual people showed nativelike patterns in both their languages, and the brain activity of adult English learners listening to English looked more nativelike the more proficient they were.
“This is a great first foray into the question” of how the brain marks word boundaries, says Massachusetts Institute of Technology neuroscientist Evelina Fedorenko, who wasn’t involved in either work. She adds, however, that it’s not yet clear whether actually understanding a language is necessary for word-break recognition. Maybe the brain simply picks up on sound patterns it hears often, regardless of comprehension. Or maybe meaning matters, as with muffled speech in a movie that suddenly sounds clearer when subtitles are switched on. Even if speech sounds and higher-level language structures are processed differently in the brain, the two can feed back into each other. Experiments with an artificial language that mimics natural speech sounds could tease apart the details, Fedorenko says.
When it comes to deciphering words, Chang suspects there may be no clean distinction between these different types of processing; the signal he and his co-workers linked to word boundaries occurs in a brain region that also recognizes speech sounds. Historically, Chang says, researchers imagined that different levels of structure in language, from sounds to words up to meaning, would be processed in dedicated brain regions. These new findings, he adds, “kind of blow that out of the water. This is actually all happening in the same place. When we compute sounds, we are computing words.”