
Neural representations of natural speech in a chinchilla model of noise-induced hearing loss

Hearing loss hinders the communication ability of many individuals despite state-of-the-art interventions. Animal models of different hearing-loss etiologies can help improve the clinical outcomes of these interventions; however, several gaps exist. First, the translational value of animal models is currently limited because anatomically and physiologically specific data obtained from animals are analyzed differently from the noninvasive evoked responses that can be recorded from humans. Second, we lack a comprehensive understanding of the neural representation of everyday sounds (e.g., naturally spoken speech) in real-life settings (e.g., in background noise). This is true even at the level of the auditory nerve, which is the first bottleneck of auditory information flow to the brain and the first neural site to exhibit the crucial effects of hearing loss.

To address these gaps, we developed a unifying framework that allows direct comparison of invasive spike-train data and noninvasive far-field data in response to stationary and nonstationary sounds. We applied this framework to recordings of single auditory-nerve fibers and scalp-recorded frequency-following responses from anesthetized chinchillas with either normal hearing or mild-to-moderate noise-induced hearing loss, in response to a speech sentence presented in noise. Key results for speech coding following hearing loss include: (1) coding deficits for voiced speech manifest as tonotopic distortions without a significant change in driven rate or spike-time precision, (2) linear amplification aimed at countering audiometric threshold shift is insufficient to restore neural activity for low-intensity consonants, (3) susceptibility to background noise increases as a direct result of distorted tonotopic mapping following acoustic trauma, and (4) the temporal-place representation of pitch is also degraded. Finally, we developed a noninvasive metric to potentially diagnose distorted tonotopy in humans. These findings help explain the neural origins of common perceptual difficulties experienced by listeners with hearing impairment, offer several insights for making hearing aids more individualized, and highlight the importance of better clinical diagnostics and noise-reduction algorithms.
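The abstract does not spell out how the unifying framework puts spike-train and far-field data on a common footing. One widely used ingredient in this area, shown purely as an illustrative sketch below (Python/NumPy; not code from the thesis), is to present the stimulus at both polarities and combine the responses: the sum emphasizes the envelope-following component and the difference emphasizes the temporal-fine-structure component, and the same arithmetic can be applied to a peristimulus-time histogram of spikes or to a scalp-recorded frequency-following response. The function names, the band-power measure, and the choice of NumPy are all assumptions made for illustration.

```python
# Illustrative sketch only; not the framework or analysis code from the thesis.
import numpy as np

def env_tfs_components(resp_pos, resp_neg):
    """Split responses to opposite-polarity stimulus presentations into
    envelope-dominated and fine-structure-dominated components.

    resp_pos, resp_neg : 1-D arrays of equal length (e.g., PSTH bins for
    auditory-nerve spikes, or samples of a scalp-recorded response).
    """
    resp_pos = np.asarray(resp_pos, dtype=float)
    resp_neg = np.asarray(resp_neg, dtype=float)
    env = 0.5 * (resp_pos + resp_neg)   # polarity-invariant (envelope) part
    tfs = 0.5 * (resp_pos - resp_neg)   # polarity-sensitive (fine-structure) part
    return env, tfs

def band_power_fraction(x, fs, f_lo, f_hi):
    """Fraction of total spectral power in [f_lo, f_hi] Hz: a simple way to
    quantify how strongly a response follows a given speech feature
    (e.g., the voice fundamental frequency)."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum() / spec.sum()
```

Because the same two functions accept either a PSTH or a far-field waveform, metrics computed from invasive and noninvasive recordings can be compared directly, which is the general idea the abstract describes; the specific metrics used in the thesis may differ.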

Identifiers: 10.25394/pgs.13365188.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/13365188
Date: 14 December 2020
Creators: Satyabrata Parida (9759374)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Neural_representations_of_natural_speech_in_a_chinchilla_model_of_noise-induced_hearing_loss/13365188