Sounds and Words Processed Separately but Simultaneously

After years of research, neuroscientists have discovered a new pathway in the human brain that processes the sounds of language. The findings, reported last month in the journal Cell, suggest that auditory and speech processing occur in parallel, contradicting the long-held theory that the brain first processes acoustic information and then transforms it into linguistic information.

Sounds of language, upon reaching the ears, are converted into electrical signals by the cochlea and sent to a brain region called the auditory cortex, in the temporal lobe. For decades, scientists thought that speech processing in the auditory cortex followed a serial pathway, similar to an assembly line in a factory: first, the primary auditory cortex processes simple acoustic information, such as the frequencies of sounds; then an adjacent region, the superior temporal gyrus (STG), extracts features more important to speech, like consonants and vowels, transforming sounds into meaningful words.

But direct evidence for this theory has been lacking, as it requires very detailed neurophysiological recordings from the entire auditory cortex with extremely high spatiotemporal resolution. This is challenging, because the primary auditory cortex is located deep in the cleft that separates the frontal and temporal lobes of the human brain.

Over the course of seven years, neuroscientist and neurosurgeon Edward Chang at the University of California, San Francisco, and his team played short phrases and sentences for study participants while recording neural activity across the auditory cortex, expecting to find a flow of information from the primary auditory cortex to the adjacent STG, as the traditional model proposes. If that were the case, the two areas should have been activated one after the other.

Surprisingly, the team found that some areas located in the STG responded as fast as the primary auditory cortex when sentences were played, suggesting that both areas started processing acoustic information at the same time.

The latest evidence suggests the traditional hierarchical model of speech processing is oversimplified and likely incorrect. The researchers speculate that the STG may function independently from, rather than as a later stage of, processing in the primary auditory cortex.

The parallel nature of speech processing may give doctors new ideas for treating conditions such as dyslexia, in which children have trouble identifying speech sounds.

“While this is an important step forward, we don’t understand this parallel auditory system very well yet. The findings suggest that the routing of sound information could be very different than we ever imagined. It certainly raises more questions than it answers,” Chang says.
