By Brian Taylor, AuD, Senior Director of Audiology at Signia
People who suffer from hearing loss—as many as 48 million Americans, according to the Hearing Loss Association of America—know the challenge of understanding speech and communicating effectively in noisy conditions. It’s not that they can’t hear at all; they’re just not able to hear what’s right in front of them—the voice of a companion at dinner; the TV in a sports bar; their spouse telling them they need more ice at a cocktail party. The background noise makes it hard to separate speech from the din.
Studies have shown that in such cases, the person with hearing loss can grow tired from trying to make out what others are saying. This exhaustion, in turn, can lead them to withdraw from social situations altogether. And further studies have found the resulting social isolation can lead to mental health issues and cognitive decline.
New hearing aid technology can help people hear speech in noise.
For years, hearing aid manufacturers, including Signia, have developed many methods to improve speech understanding in noisy conditions. They’ve incorporated bilateral beamforming that uses full audio transfer between hearing aids and applied multiple types of digital noise reduction – all in a single device. Many of these technologies work well and are still in use today, but they have their limitations. Specifically, when existing hearing aid technology goes to work making it easier to hear speech, all the sound is processed, amplified, or attenuated the same way at the same time. Applying noise reduction? It’s applied to the entire sound stream.
What if hearing aids could process speech and background noise separately? Now they can.
Split Processing Gets the Sound Mix Right
Stepping back for a minute, consider watching a movie. Whether people are aware or not, the film’s audio has been specially mixed so that viewers focus on what’s important. For example, when Hollywood sound engineers want to focus the audience’s attention on the actors’ dialogue, they’ll add more contrast to the speech track, so it stands out more from background noise. Today, new hearing aid technology can achieve the same effect for people with hearing loss.
Signia invented what’s called split processing for hearing aids. Computers have long split data into parallel pathways to process information more efficiently. Now, thanks to advances in chip miniaturization and digital signal processing, split processing is available in hearing aids. Among the most important benefits is the ability to separate and enhance the speech wearers want to hear, while diminishing the background noise they don’t.
Signia’s split-processing technology is Augmented Focus™, one of the foundational technologies of the company’s Augmented Xperience (AX) platform. With Augmented Focus, a hearing aid has one beamforming directional microphone for sounds coming primarily from in front of the wearer (focused sound, including speech) and another for sounds coming primarily from behind the wearer (surrounding sound, or background noise). Clearly, there may be wanted or unwanted noise coming from either direction, so now the Augmented Focus processors get busy.
Yes, processors. Plural. For the first time, with split processing, a hearing aid includes two different processors. Each stream—front and back—flows into its own processor for analysis and processing before being recombined into a single, augmented stream the wearer can hear better.
Each processor breaks incoming sound into 48 channels, meaning it can get very detailed in its analysis. Like all modern hearing aids, Augmented Focus processors examine an input signal’s amplitude modulation to gauge what it is and how to handle it. Slowly modulated sounds are commonly unwanted background noise, like the hum of a fan; Augmented Focus suppresses them. Fast, strongly modulated background sounds, like a waiter dropping dishes, are transient but distracting; Augmented Focus attenuates those, too. What patients really want to hear—speech—is somewhere in the middle, with faster modulation, but not dropping-dishes fast.
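As a rough illustration of this modulation-based sorting, the sketch below estimates a signal envelope’s dominant modulation rate and buckets it into the three categories described above. The FFT-based estimator and the cutoff frequencies are illustrative assumptions for this article, not Signia’s actual parameters:

```python
import numpy as np

def modulation_rate_hz(envelope, sample_rate_hz):
    """Estimate the dominant amplitude-modulation rate of a signal envelope."""
    env = envelope - np.mean(envelope)
    spectrum = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / sample_rate_hz)
    # Skip the DC bin and pick the strongest modulation frequency.
    return freqs[1 + np.argmax(spectrum[1:])]

def classify_sound(envelope, sample_rate_hz,
                   slow_cutoff_hz=2.0, fast_cutoff_hz=16.0):
    """Rough three-way split: steady noise, speech-like, or transient.
    Cutoff values are invented for illustration."""
    rate = modulation_rate_hz(envelope, sample_rate_hz)
    if rate < slow_cutoff_hz:
        return "steady-noise"   # e.g., a fan hum -> suppress
    if rate > fast_cutoff_hz:
        return "transient"      # e.g., dropped dishes -> attenuate
    return "speech-like"        # syllable-rate modulation -> preserve
```

Speech envelopes modulate at roughly the syllable rate, which is why a middle band of modulation frequencies is treated as the signal to protect.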
What makes Augmented Focus unlike other types of signal processing found in competing devices is that the processing occurs in two separate streams. That is, Augmented Focus determines which incoming sounds belong in the focus stream – sounds the wearer often wants to hear – and processes them separately from the sounds in the surround stream, which is typically unwanted background noise.
Not Simply Noise Reduction
It’s important to understand that Augmented Focus doesn’t eliminate background noise to improve speech intelligibility. Background noise is important. It contributes to situational awareness, excitement, and a total sound experience.
Once Augmented Focus processors identify background noise, they create a clearer contrast with the speech sound. In split processing, the two streams are independently “shaped” for greater contrast between wanted and unwanted sound, much the way the Hollywood sound engineer manipulates audio. Less compression and noise reduction are applied to the speech stream, making it sound clearer, crisper, and more detailed, so that it sounds nearer to the wearer and is easier to understand.
In contrast, more compression and noise reduction are applied to incoming sounds in the background stream. This results in excellent sound quality, even while certain sounds are turned down or minimally amplified. But the wearer’s sense of space remains, so that they can still, for example, hear other diners enjoying themselves.
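The two-stream shaping described in the last two paragraphs can be sketched as independently processed streams summed back together. The gain and noise-reduction values below are invented for illustration; a real device applies frequency-dependent, adaptive processing rather than flat scaling:

```python
import numpy as np

def shape_stream(samples, gain_db, noise_reduction):
    """Apply a simple gain and flat attenuation to one stream -- a crude
    stand-in for per-stream compression and noise reduction."""
    gain = 10 ** (gain_db / 20.0)
    return gain * (1.0 - noise_reduction) * samples

def split_process(focus_stream, surround_stream):
    """Shape the focus (speech) and surround (background) streams
    independently, then recombine them. Values are illustrative only."""
    # Less processing on the focus stream keeps speech crisp and prominent...
    shaped_focus = shape_stream(focus_stream, gain_db=6.0, noise_reduction=0.1)
    # ...while the surround stream is turned down but not removed,
    # preserving the wearer's sense of space.
    shaped_surround = shape_stream(surround_stream, gain_db=-6.0, noise_reduction=0.5)
    return shaped_focus + shaped_surround
```

The key design point is that the background is attenuated, never zeroed out, so situational awareness survives the contrast boost.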
Splitting incoming sound into two separate streams makes the focused speech more intelligible and vastly improves communication.
More Benefits of Split Processing
Another benefit of split processing is a better hearing aid experience for the wearer. All other hearing aids process sound in a serial manner. That is, compression, noise reduction, etc., are performed in a series, one after the other. When incoming sounds are processed in a serial fashion, features like compression and noise reduction can sometimes work against each other, which creates noise artifacts that degrade sound quality and often are noticeable to the wearer. With two processors working in parallel, we can eliminate these artifacts and actually create a cleaner, more natural sound, whether it’s speech or background noise.
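A toy contrast between single-stream (serial) and two-stream (parallel) processing helps show why the latter can treat speech more gently. The flat-attenuation “noise reducer” and the numbers here are simplifications for illustration, not any manufacturer’s actual algorithm:

```python
import numpy as np

def noise_reduce(signal, amount):
    """Crude stand-in for a noise-reduction stage: flat attenuation."""
    return (1.0 - amount) * signal

def serial_pipeline(speech, noise, nr_amount=0.5):
    """Single-stream processing: noise reduction acts on the mixed
    signal, so speech is attenuated right along with the noise."""
    return noise_reduce(speech + noise, nr_amount)

def parallel_pipeline(speech, noise, nr_amount=0.5):
    """Two-stream processing: noise reduction is applied only to the
    noise stream, leaving the speech stream untouched."""
    return speech + noise_reduce(noise, nr_amount)
```

With equal-level speech and noise, the serial pipeline halves the speech along with the noise, while the parallel pipeline passes the speech through at full level.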
It’s alarming that anywhere from 5 to 25 percent of patients never use their hearing aids; that, by some accounts, an astounding 98 percent of wearers report at least one problem with their hearing aids in the first year; and that 54 percent of problems go unreported to hearing care professionals, while 46 percent remain unresolved even after being reported.
There are many reasons the people who could benefit from hearing aids don’t wear them, including cost and stigma, but a big reason is that hearing aids don’t always sound good.
A Better Overall Hearing Experience
Twice the processing allows the overall Augmented Xperience platform to do even more to improve hearing. The strength of Augmented Focus split processing is its ability to improve speech intelligibility in noise, but an even bigger challenge is achieving better speech intelligibility under a variety of circumstances.
As Augmented Focus processes speech and background noise along parallel paths then recombines them, it also considers other information from the Augmented Xperience platform of technologies. For example, we know that occlusion-related problems are a big reason individuals stop wearing their hearing aids. Thanks to patented Own Voice Processing (OVP™) technology, the Augmented Xperience platform knows when the wearer is talking and reduces the intensity of their own voice. It can also sense if the wearer is out for a bike ride, as detected by the platform’s acoustic-motion sensors. Such information is fed into the final Augmented Focus “mix” to improve hearing and speech intelligibility wherever the person is. In certain situations, hearing aid wearers can even understand speech better than people with normal hearing.
In Signia’s own surveys, patients report 25 percent greater speech understanding in noise when they use hearing aids with Augmented Focus. Nearly 100 percent of participants reported exceptional speech understanding in their home environment, an important sign that the technology can adapt to various settings.
Recently, Signia scientists used an auditory phenomenon called mismatch negativity to determine empirically the benefits of split processing. Mismatch negativity is a measurable electrophysiologic response the auditory cortex sends when it detects an unexpected sound. In this case, it was used to evaluate if hearing aid wearers could actually pick up and track speech in noise. Indeed, the study showed that Augmented Focus increased the contrast between sounds and enhanced listeners’ ability to discern speech.
Augmented Listening in Any Situation
We often say “focus” in relation to directional microphone processing in a noisy situation. But Augmented Focus is different. It enhances sound in any situation without removing entire parts of the soundscape. Through split processing, it processes the sound around hearing aid wearers in a way that allows the brain to more naturally comprehend information while focusing their attention more easily on meaningful sounds in their listening environment.
By splitting a hearing aid wearer’s soundscape into two separate streams, shaping those two streams, then recombining them, this new technology creates an augmented listening experience—with remarkable sound clarity—in any situation. Split processing is a major leap forward in solving a common problem for hearing aid wearers: separating what they want to hear from the noise they don’t.
- Aazh, H., Prasher, D., Nanchahal, K., & Moore, B. C. (2015). Hearing-aid use and its determinants in the UK National Health Service: a cross-sectional study at the Royal Surrey County Hospital.
- Solheim, J., & Hickson, L. (2017). Hearing aid use in the elderly as measured by datalogging and self-report. International journal of audiology, 56(7), 472–479
- Bennett, R. J., Kosovich, E. M., Stegeman, I., Ebrahimi-Madiseh, A., Tegg-Quinn, S., & Eikelboom, R. H. (2020). Investigating the prevalence and impact of device-related problems associated with hearing aid use. International journal of audiology, 59(8), 615–623.
- Bennett, R. J. (2021). Underreported hearing aid problems: No news is good news, right? Wrong! The Hearing Journal, February 2021, 16-18.
- Jensen, Høydal, Branda, Weber: Augmenting Speech Recognition with a New Split-processing Paradigm (Hearing Review 2021;28(6):24-27)
- Jensen, Pischel, Taylor, Schulte: Performance of Signia AX in At-Home Listening Situations (Signia White Paper, 2021)