(c) Gerd Altmann / Pixabay

One of the things that strikes you when your hearing deteriorates is that you can no longer filter individual voices out of the din at a party. A hearing aid seems to be the solution, but it doesn’t always amplify the voice you actually want to hear. Researchers at KU Leuven (Belgium) have now solved this problem with a technique based on brainwaves that determines within a second who you want to listen to, according to a press release from the Flemish university.

In principle, our brains are very good at picking out that one voice from a jumble of sounds. For hearing aids, though, this is quite a challenge. They are capable of suppressing background noise, but the problem lies in choosing the right sound source, says Professor Tom Francart of KU Leuven.

“For example, a hearing aid focuses on the loudest speaker in the immediate surroundings. That’s not always the right choice. Or the system takes your line of sight into account. But if you’re driving a car, you can’t also look at the passenger beside you who is talking,” Francart says in the press release.

Researchers have been working for some time on solutions that take into account what the listener wants. “Using an electroencephalogram (EEG), we can measure brain waves that are generated in response to sounds. This allows us to determine which speaker you want to listen to. The system separates the audio signals from the different speakers and then links them to the brainwaves. The disadvantage is that you have to take a delay of 10 to 20 seconds into account in order to be reasonably sure that you are on the right track.”
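
To make the quoted approach concrete, here is a minimal, illustrative sketch (not the researchers' actual code) of correlation-based attention decoding: a previously trained linear decoder reconstructs the attended speech envelope from an EEG window, and the result is correlated with each speaker's envelope. The array shapes, the 20-second window, and the absence of time lags in the decoder are simplifying assumptions for the example.

```python
import numpy as np

def decode_attended_speaker(eeg, env_left, env_right, decoder):
    """Pick the attended speaker for one decision window.

    eeg       : (n_samples, n_channels) EEG window, e.g. 20 s at 64 Hz
    env_left  : (n_samples,) speech envelope of the left speaker
    env_right : (n_samples,) speech envelope of the right speaker
    decoder   : (n_channels,) linear decoder trained beforehand
    """
    # Reconstruct an estimate of the attended speech envelope from the EEG.
    reconstructed = eeg @ decoder

    # Compare the reconstruction with both candidate envelopes.
    corr_left = np.corrcoef(reconstructed, env_left)[0, 1]
    corr_right = np.corrcoef(reconstructed, env_right)[0, 1]

    return "left" if corr_left > corr_right else "right"

# Toy usage with random data: 20 s at 64 Hz, 24 EEG channels.
rng = np.random.default_rng(0)
fs, seconds, n_ch = 64, 20, 24
eeg = rng.standard_normal((fs * seconds, n_ch))
env_l = rng.standard_normal(fs * seconds)
env_r = rng.standard_normal(fs * seconds)
decoder = rng.standard_normal(n_ch)
print(decode_attended_speaker(eeg, env_l, env_r, decoder))
```

The long decision window in this scheme is exactly the 10-to-20-second delay the quote refers to: the correlations only become reliable once enough data has accumulated.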

Artificial intelligence to speed up the process

The new technique marks real progress, colleague Alexander Bertrand continues: “We use only the brainwaves, without linking them to the audio signals of the various speakers. This is done using artificial intelligence. On the basis of test data, the system has learned to use these brainwaves to determine whether someone is listening to a speaker to their left or to their right. Once it recognizes the direction, the acoustic camera steers itself that way and suppresses the other sounds. This can now be done within a second on average. That’s a big step, because a second is a realistic time span for switching from one speaker to another.”
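
As a rough illustration of this EEG-only idea, the sketch below uses trained spatial filters (common spatial patterns, the technique named in the paper cited at the end) to turn a short EEG window into log-energy features, and a simple linear classifier to map those features to “left” or “right”. The filter matrix, classifier weights, and the one-second window length are placeholders, not the authors' actual model.

```python
import numpy as np

def decode_direction(eeg_window, spatial_filters, weights, bias):
    """Classify the attended direction from EEG alone.

    eeg_window      : (n_samples, n_channels) short window, e.g. 1 s at 64 Hz
    spatial_filters : (n_channels, n_filters) trained spatial filters
    weights, bias   : parameters of a trained linear classifier
    """
    # Project the EEG onto the spatial filters.
    projected = eeg_window @ spatial_filters          # (n_samples, n_filters)

    # Log-variance of each filtered signal is the classic CSP-style feature.
    features = np.log(projected.var(axis=0) + 1e-12)  # (n_filters,)

    # Linear decision; the sign convention is chosen for this example only.
    score = features @ weights + bias
    return "left" if score > 0 else "right"

# Toy usage: 1 s of 24-channel EEG at 64 Hz with 6 placeholder filters.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((64, 24))
filters = rng.standard_normal((24, 6))
w, b = rng.standard_normal(6), 0.0
print(decode_direction(eeg, filters, w, b))
```

Because no audio envelopes need to be tracked and compared, a decision like this can be made on a much shorter window, which is what brings the switching time down to about a second.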

Market launch will take up to 5 years

According to Francart, it will be another 5 years or so before this new hearing aid system is available to everyone. This is mainly due to the way the system currently works. In the laboratory, test subjects wear a kind of cap covered with electrodes, which is not something you would do in everyday life. But research is already being carried out into hearing aids with built-in electrodes.

The researchers are continuing to refine the technology. Doctoral student Simon Geirnaert: “We are already doing further research, for example into handling several speaker directions at the same time. The current system can only choose between two directions. Initial experiments show that we can extend that to other possible directions. However, we need to refine our AI system by feeding it more brainwave data from users who are also listening to speakers from other directions.”
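
A minimal sketch of what such a multi-direction extension could look like, under the same illustrative assumptions as above: one trained weight vector per candidate direction, with the highest-scoring direction winning. The direction labels and all parameters here are invented for the example.

```python
import numpy as np

def decode_direction_multiclass(features, weight_matrix, directions):
    """features      : (n_filters,) EEG features for one window
       weight_matrix : (n_directions, n_filters) one row per direction
       directions    : list of direction labels, e.g. angles in degrees
    """
    scores = weight_matrix @ features
    return directions[int(np.argmax(scores))]

# Toy usage with four hypothetical directions.
rng = np.random.default_rng(2)
directions = ["-90 deg", "-30 deg", "+30 deg", "+90 deg"]
features = rng.standard_normal(6)
W = rng.standard_normal((4, 6))
print(decode_direction_multiclass(features, W, directions))
```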

The study ‘Fast EEG-based decoding of the directional focus of auditory attention using common spatial patterns’ by S. Geirnaert, T. Francart and A. Bertrand has been published in the journal IEEE Transactions on Biomedical Engineering.

Also interesting: Electronic lens offers a medical solution for eye disorders