Preprint Article

This version is not peer-reviewed.

How Does Deep Neural Network-Based Noise Reduction in Hearing Aids Impact Cochlear Implant Candidacy?

A peer-reviewed article of this preprint also exists.

Submitted:

04 November 2024

Posted:

05 November 2024


Abstract
Background/Objectives Adult hearing-impaired patients qualifying for cochlear implants typically exhibit less than 60% sentence recognition in their best hearing aid condition, either in quiet or noisy environments, with speech and noise presented through a single speaker. This study examines the influence of deep neural network-based (DNN-based) based noise reduction on cochlear implant evaluation. Methods Speech perception was assessed using AzBio sentences in both quiet and noisy conditions (multi-talker babble) at 5 and 10 dB signal-to-noise ratio (SNR) through one loudspeaker. Sentence recognition scores were measured for 10 (5-bilateral, 5-unilateral) hearing-impaired patients using three hearing aid programs: calm situation, speech in noise, and spheric speech in loud noise (DNN-based noise reduction). Speech perception results were compared to bench analyses comprising the phase inversion technique, employed to predict SNR improvement, and the Hearing-Aid Speech Perception Index (HASPI v2), utilized to predict speech intelligibility. Results The spheric speech in loud noise program improved speech perception by 20 to 32% points as compared to the calm situation program. This improvement might make it difficult for some patients to achieve the
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
