How does the brain turn heard sounds into comprehensible language?

Image: Shutterstock

While hearing aids and other assistive devices have improved over the years, there is still considerable room for improvement. Imagine if, in a crowded room with a lot of background noise, a device could help its wearer tune in to just the person with whom she is having a conversation, pushing unwanted sounds further into the background.

To get there, we need a better understanding of the process by which the brain turns hearing into comprehensible language.

Fortunately, scientists have been learning much more about how this process works. They know there is a neural pathway inside our heads that connects our ears—which collect information about sounds—with our brains, which can distinguish among and interpret sounds, recognizing some of them as meaningful language. However, this physical path, with its many linkages, stages, and processes that convert sound into meaning, is only now beginning to be explored in detail.

A new five-year, $2.88M grant from the National Institute on Deafness and Other Communication Disorders at the National Institutes of Health will bring researchers another step closer to fully understanding this system and, ultimately, to developing better hearing assistive devices.

Professor Jonathan Simon (ECE/ISR/Biology) is the principal investigator for the grant, “Multilevel Auditory Processing of Continuous Speech, from Acoustics to Language.” Co-PIs are Associate Professor Behtash Babadi (ECE/ISR), Associate Professor Samira Anderson (HESP), and Stefanie Kuchinsky (HESP affiliate).

The researchers know the brain plays a significant role in helping a person compensate for poor-quality signals coming through the ears, whether the degradation is due to complex acoustics, hearing loss, or both. And they already understand how specific parts of the brain function in processing language. Anderson, for example, is an expert in subcortical auditory processing, while Simon has done extensive work in early cortical auditory processing. Kuchinsky specializes in listening effort: not just whether the brain can compensate for suboptimal input, but how much effort it must expend to do so. And Babadi’s expertise is in neural connectivity, the flow of information among areas of the brain.

Until now, the ear and individual parts of the brain have largely been studied in isolation, and their separate roles are fairly well understood. How the integrated processing chain works as a whole to enable language comprehension is much less clear.

“For many years, treatment of hearing difficulties has focused on the ear,” Anderson explains, “but this approach does not consider the need for accurate representation of sound from ear to cortex to perceive and understand speech and other auditory signals. A better understanding of speech processing along the entire auditory pathway is a first step in developing more individualized treatment strategies for individuals with hearing loss.”

In this new project, the researchers will move closer to understanding the entire system and what occurs at each step along the pathway. Sophisticated electroencephalography (EEG) and magnetoencephalography (MEG) recordings of young, normal-hearing listeners will focus on what is going on inside the brain from the midbrain all the way up to the language areas.

“Using EEG and MEG to simultaneously measure both midbrain and cortical speech processing puts us at the cutting edge of the field,” Simon notes. “Nobody else is doing that yet.”
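One widely used way to quantify cortical tracking of continuous speech is to estimate a temporal response function (TRF): a linear kernel that maps the speech envelope to the recorded neural signal. The sketch below only illustrates that idea on synthetic data, using ridge regression; the sampling rate, kernel, and regularization here are assumptions for the example, not details of the project's actual pipeline.

```python
# Minimal TRF-estimation sketch (not the project's pipeline): a ridge
# regression mapping a continuous speech envelope to one EEG/MEG channel.
# Synthetic data stand in for real recordings.
import numpy as np

fs = 100                       # sampling rate in Hz (assumed)
n = 60 * fs                    # one minute of data
rng = np.random.default_rng(0)

envelope = rng.random(n)       # stand-in for a speech envelope
true_trf = np.array([0.0, 0.5, 1.0, 0.4, -0.2, -0.1])   # hypothetical kernel
neural = np.convolve(envelope, true_trf, mode="full")[:n]
neural += 0.5 * rng.standard_normal(n)                   # measurement noise

# Lagged design matrix: column k holds the envelope delayed by k samples.
n_lags = len(true_trf)
X = np.column_stack([np.roll(envelope, lag) for lag in range(n_lags)])
X[:n_lags] = 0                 # zero out wrap-around samples from np.roll

# Ridge solution: trf = (X'X + lambda*I)^-1 X'y
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ neural)
print(np.round(trf_hat, 2))    # should approximate true_trf
```

Peaks in the estimated TRF indicate how strongly, and at what latency, the neural response follows the speech envelope.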

Because there is some indication that listening effort plays a role in comprehension, the researchers will also use pupillometry (a measure of pupil dilation) as a physiological proxy for this effort, and will ask subjects to self-report how much effort they exert while listening.

“Difficulty understanding speech in noise is one of the most common complaints of patients in the audiology clinic, even among those with normal hearing,” says Kuchinsky. “However, it is unclear how best to quantify the effort and fatigue that these listeners report. The results of this project will help validate pupillometry as an objective measure of listening effort by linking it to a well-studied set of neural systems. Long term, we aim for this knowledge to improve our ability to both measure and mitigate the communication challenges people face in their daily lives.”
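As a rough illustration of how pupillometry quantifies effort, analyses commonly compare pupil diameter during listening against a pre-stimulus baseline. The sketch below works through that computation on a synthetic trace; the sampling rate, trial structure, and numbers are hypothetical, not drawn from this project.

```python
# Illustrative sketch only: baseline-corrected pupil dilation, a common
# listening-effort measure in pupillometry studies. All values are made up.
import numpy as np

fs = 60                                    # eye-tracker sampling rate in Hz (assumed)
t = np.arange(0, 6, 1 / fs)                # a 6-second trial
rng = np.random.default_rng(1)

# Synthetic pupil trace: 1 s pre-stimulus baseline, then a slow dilation
# that peaks mid-sentence, as is typical for effortful listening.
pupil = (3.0
         + 0.3 * np.clip(t - 1.0, 0, None) * np.exp(-(t - 1.0))
         + 0.02 * rng.standard_normal(t.size))          # diameter in mm

baseline = pupil[t < 1.0].mean()           # mean diameter before speech onset
dilation = pupil - baseline                # event-related pupil response

print(f"baseline: {baseline:.2f} mm")
print(f"peak dilation: {dilation.max():.2f} mm at {t[dilation.argmax()]:.2f} s")
```

Larger baseline-corrected dilation on harder trials is the pattern such studies look for, alongside the self-reported effort ratings mentioned above.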

The researchers hope to identify the acoustic and neural conditions under which speech remains intelligible. They believe that a grounded understanding of how speech processing progresses along this pathway, together with knowledge of the compensatory mechanisms the brain employs to perceive speech under degraded conditions, will yield foundational principles for developing “brain-aware,” automatically tuning hearing assistive devices for people with hearing and related disorders.

Babadi says, “We hope that new insights into how the brain’s dynamic network excels in reliable speech perception under difficult listening conditions will pave the way for developing hearing-assistive devices that use brain activity as feedback in real time to enhance speech intelligibility.”
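No such device exists yet, so the following is purely a conceptual sketch of the closed-loop idea Babadi describes: a made-up neural intelligibility score stands in for a real-time decoder, and a toy proportional rule adjusts the device’s gain to hold intelligibility near a target.

```python
# Purely hypothetical sketch of a "brain-aware" feedback loop.
# decode_intelligibility() is a stand-in for a real-time neural decoder
# that does not yet exist; the control rule is a toy example.
import random

def decode_intelligibility(gain: float) -> float:
    """Stand-in decoder: returns a 0-1 intelligibility score, here
    simulated as improving with gain up to a plateau, plus noise."""
    score = min(1.0, 0.4 + 0.1 * gain)
    return max(0.0, min(1.0, score + random.gauss(0, 0.05)))

gain = 1.0                      # amplification applied to the target talker
target = 0.9                    # intelligibility the device tries to maintain

for step in range(20):          # one adaptation step per time window
    score = decode_intelligibility(gain)
    gain += 0.5 * (target - score)        # simple proportional update
    gain = max(0.0, min(6.0, gain))       # keep gain in a safe range

print(f"settled gain: {gain:.2f}")
```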


October 25, 2021

