
Machine-learning-based audio algorithms for hearing loss compensation
- Author
- Marjoleen Wouters (UGent), Fotios Drakopoulos and Sarah Verhulst (UGent)
- Organization
- Project
- RobSpear (Speech Encoding in Impaired Hearing)
- Precision Hearing Diagnostics and Augmented-hearing Technologies
- Abstract
- Computational auditory models have been used for decades to develop audio signal processing algorithms in hearing aids. Here, using a biophysically inspired auditory model in a differentiable convolutional-neural-network (CNN) description (CoNNear), we trained end-to-end machine-learning (ML)-based audio signal-processing algorithms that maximally restored auditory-nerve (AN) responses affected by cochlear synaptopathy. To this end, we used backpropagation to develop several ML-based algorithms that match the simulated response of the corresponding hearing-impaired model back to the normal-hearing response, each time using the same CNN encoder-decoder architecture but different loss functions to achieve different compensation of the AN responses. Evaluation of the hearing-aid (HA) models was performed by processing sentences of the Flemish matrix test and comparing model outcomes with the unprocessed sentences. The magnitude spectra of all processed sentences showed differences between the HA models in amplification of low- and high-frequency speech content, whereas the high-frequency processing often introduced audible tonal distortions. Our processing showed different enhancement of the AN population responses at speech onsets, vowels and consonants. We will objectively assess the effect of the best-performing compensation algorithms on sound quality and speech intelligibility in future clinical experiments.
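The training setup described in the abstract, backpropagating through frozen differentiable auditory models so that the hearing-impaired model's response to the processed audio matches the normal-hearing model's response to the clean audio, can be sketched in miniature. The sketch below is not from the paper: it replaces the CoNNear CNN models and the encoder-decoder processor with linear stand-ins (all names and dimensions are illustrative) and runs plain gradient descent with an analytic gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stand-ins for the differentiable auditory models (the paper uses
# CoNNear CNNs; these matrices are purely illustrative).
D = 16
A_nh = rng.standard_normal((D, D)) / np.sqrt(D)  # "normal-hearing" AN response
A_hi = 0.5 * A_nh                                # synaptopathy: attenuated AN response

W = np.eye(D)        # trainable "hearing-aid" processor, identity at init
lr = 0.5             # gradient-descent step size

def loss_and_grad(W, X):
    """L = mean || A_hi @ (W @ x) - A_nh @ x ||^2 over a batch X of shape (D, N)."""
    R = A_hi @ W @ X - A_nh @ X                  # residual: HI(processed) - NH(clean)
    L = np.mean(R ** 2)
    G = (2.0 / R.size) * (A_hi.T @ R @ X.T)      # analytic dL/dW
    return L, G

X = rng.standard_normal((D, 256))                # batch of input "audio" frames
loss_before, _ = loss_and_grad(W, X)
for _ in range(200):
    L, G = loss_and_grad(W, X)
    W -= lr * G                                  # pull HI output toward NH target
loss_after, _ = loss_and_grad(W, X)
print(f"loss before: {loss_before:.4f}, after: {loss_after:.4f}")
```

Because `A_hi` here is a uniform attenuation of `A_nh`, the learned `W` approaches a compensating gain; in the paper the analogous objective is optimised by backpropagating through the CoNNear models with several different loss functions over the simulated AN responses.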
Downloads
- ACUS 662.pdf
- full text (Published version) | open access | 1.45 MB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-01HNZG7EVFP92CKWDADS65RN51
- MLA
- Wouters, Marjoleen, et al. “Machine-Learning-Based Audio Algorithms for Hearing Loss Compensation.” Forum Acusticum 2023 : 10th Convention of the European Acoustics Association, Proceedings, 2023.
- APA
- Wouters, M., Drakopoulos, F., & Verhulst, S. (2023). Machine-learning-based audio algorithms for hearing loss compensation. Forum Acusticum 2023 : 10th Convention of the European Acoustics Association, Proceedings. Presented at the Forum Acusticum 2023, Turin, Italy.
- Chicago author-date
- Wouters, Marjoleen, Fotios Drakopoulos, and Sarah Verhulst. 2023. “Machine-Learning-Based Audio Algorithms for Hearing Loss Compensation.” In Forum Acusticum 2023 : 10th Convention of the European Acoustics Association, Proceedings.
- Chicago author-date (all authors)
- Wouters, Marjoleen, Fotios Drakopoulos, and Sarah Verhulst. 2023. “Machine-Learning-Based Audio Algorithms for Hearing Loss Compensation.” In Forum Acusticum 2023 : 10th Convention of the European Acoustics Association, Proceedings.
- Vancouver
- 1. Wouters M, Drakopoulos F, Verhulst S. Machine-learning-based audio algorithms for hearing loss compensation. In: Forum Acusticum 2023 : 10th Convention of the European Acoustics Association, Proceedings. 2023.
- IEEE
- [1] M. Wouters, F. Drakopoulos, and S. Verhulst, “Machine-learning-based audio algorithms for hearing loss compensation,” in Forum Acusticum 2023 : 10th Convention of the European Acoustics Association, Proceedings, Turin, Italy, 2023.
@inproceedings{01HNZG7EVFP92CKWDADS65RN51,
  abstract     = {{Computational auditory models have been used for decades to develop audio signal processing algorithms in hearing aids. Here, using a biophysically inspired auditory model in a differentiable convolutional-neural-network (CNN) description (CoNNear), we trained end-to-end machine-learning (ML)-based audio signal-processing algorithms that maximally restored auditory-nerve (AN) responses affected by cochlear synaptopathy. To this end, we used backpropagation to develop several ML-based algorithms that match the simulated response of the corresponding hearing-impaired model back to the normal-hearing response, each time using the same CNN encoder-decoder architecture but different loss functions to achieve different compensation of the AN responses. Evaluation of the hearing-aid (HA) models was performed by processing sentences of the Flemish matrix test and comparing model outcomes with the unprocessed sentences. The magnitude spectra of all processed sentences showed differences between the HA models in amplification of low- and high-frequency speech content, whereas the high-frequency processing often introduced audible tonal distortions. Our processing showed different enhancement of the AN population responses at speech onsets, vowels and consonants. We will objectively assess the effect of the best-performing compensation algorithms on sound quality and speech intelligibility in future clinical experiments.}},
  author       = {{Wouters, Marjoleen and Drakopoulos, Fotios and Verhulst, Sarah}},
  booktitle    = {{Forum Acusticum 2023 : 10th Convention of the European Acoustics Association, Proceedings}},
  isbn         = {{9788888942674}},
  issn         = {{2221-3767}},
  language     = {{eng}},
  location     = {{Turin, Italy}},
  pages        = {{5}},
  title        = {{Machine-learning-based audio algorithms for hearing loss compensation}},
  url          = {{https://www.fa2023.org/}},
  year         = {{2023}},
}