Sounds and words are processed separately and simultaneously in the brain
Auditory and speech processing occur in the brain in parallel, according to a new study that contradicts the long-held theory that the brain processes acoustic information first and then transforms it into linguistic information.
The finding, published in the journal Cell, comes from neuroscientists at the University of California (USA), who have discovered a new pathway in the human brain that processes the sounds of language.
These sounds, upon reaching the ears, are converted into electrical signals by the cochlea and sent to a region of the brain in the temporal lobe called the auditory cortex.
For decades, scientists thought that speech processing in the auditory cortex occurred in series, like an assembly line in a factory, according to a statement accompanying the publication.
On that view, the primary auditory cortex first processed simple acoustic information, such as the frequencies of sounds. An adjacent region, the superior temporal gyrus, then extracted the features most important for speech, such as consonants and vowels, transforming the sounds into meaningful words.
However, the authors point out, this theory had never been directly demonstrated, because testing it requires highly detailed neurophysiological recordings of the entire auditory cortex at extremely high spatiotemporal resolution.
Over seven years, Edward Chang and his team studied nine participants who had to undergo brain surgery for medical reasons, such as removing a tumor or locating a seizure focus.
Arrays of small electrodes covering their entire auditory cortex were placed to collect neural signals for language and seizure mapping. The participants also agreed to have the recordings analyzed to understand how the auditory cortex processes speech sounds.
"This is the first time that we were able to cover all these areas simultaneously directly from the surface of the brain and analyze the transformation of sounds into words," describes Chang.
When the researchers played phrases and short sentences to the participants, they expected to find a flow of information from the primary auditory cortex to the adjacent superior temporal gyrus, as the traditional model proposes; if that were the case, the two areas should activate one after the other.
Surprisingly, some areas of the superior temporal gyrus responded as quickly as the primary auditory cortex when the phrases were played, suggesting that both areas begin processing acoustic information at the same time.
In addition, the researchers stimulated the primary auditory cortex of the participants with small electrical currents; if speech processing were serial, these stimuli would probably distort the patients' perception of speech.
Instead, although the participants experienced stimulus-induced auditory hallucinations, they were still able to hear and clearly repeat the words spoken to them.
However, when the superior temporal gyrus was stimulated, they reported that they could hear people speak, "but not make out the words."
This evidence suggests that the traditional hierarchical model of speech processing is oversimplified and probably incorrect, the scientists say, pointing to the possibility that the superior temporal gyrus functions independently of, rather than as a step after, processing in the primary auditory cortex.
This parallel organization could offer new ideas for treating conditions such as dyslexia. "Although this is an important step, we still do not understand this parallel auditory system very well; it raises more questions than answers," Chang concludes.