1. Speech Understanding in Noise as a Function of Microphone Placement in Hearing Aids
Hand, Erin Marlene Flowers, 10 July 1996
Hearing aid users often complain of poor speech understanding in the presence of background noise, and hearing aid manufacturers and dispensers have made many attempts to overcome this problem. The purpose of the present study was to determine whether differences existed among three styles of hearing aids (in-the-ear (ITE), in-the-canal (ITC), and completely-in-the-canal (CIC)) in the presence of multi-talker babble. Five subjects with sensorineural hearing loss were selected from the Portland State University audiology clinic. The subjects listened to a recording of the California Consonant Test (CCT) against a background of multi-talker babble. Stimuli were presented through headphones in a sound booth. The stimuli were recorded through three different hearing aids placed on KEMAR's left ear and adjusted to a 10 dB signal-to-noise ratio. Once the speech samples were recorded and digitized, they were routed through a GSI-16 audiometer to the listener. To determine performance differences across the three hearing aid configurations within a single-subject design, each subject's performance was compared pairwise between configurations. The data were analyzed with a randomization test. Under this statistical model, no significant difference was found between the individual scores. Further research is warranted to determine whether a better measure exists that characterizes the effect of microphone placement on speech understanding in hearing aid users.
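The pairwise comparison described above can be sketched as a randomization (permutation) test: repeatedly shuffle the pooled item-level scores between the two conditions and count how often the shuffled difference matches or exceeds the observed one. This is a generic illustration of the statistic, not the study's actual analysis code; the function name and the 0/1 item-scoring convention are assumptions.

```python
import numpy as np

def randomization_test(scores_a, scores_b, n_perm=10_000, seed=0):
    """Two-sided randomization test on the difference of mean scores.

    scores_a, scores_b: per-item scores (e.g. CCT items marked 1 correct,
    0 incorrect) for one listener under two hearing aid conditions.
    Returns an approximate p-value with the standard add-one correction.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign condition labels at random
        diff = abs(pooled[: len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

With identical score distributions the p-value stays near 1, while clearly separated scores drive it toward zero, which matches how the study interpreted its non-significant pairwise results.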
2. Automatic Speech Separation for Brain-Controlled Hearing Technologies
Han, Cong, January 2024
Speech perception in crowded acoustic environments is particularly challenging for hearing-impaired listeners. While assistive hearing devices can suppress background noises that are distinct from speech, they struggle to attenuate interfering speakers without knowing which speaker the listener is focusing on. The human brain has a remarkable ability to pick out individual voices in a noisy environment such as a crowded restaurant or a busy city street. This ability inspires brain-controlled hearing technologies: a brain-controlled hearing aid acts as an intelligent filter, reading the wearer's brainwaves and enhancing the voice the wearer wants to focus on.
Two essential elements form the core of brain-controlled hearing aids: automatic speech separation (SS), which isolates individual speakers from mixed audio in an acoustic scene, and auditory attention decoding (AAD), in which the brainwaves of listeners are compared with the separated speakers to determine the attended one, which can then be amplified to facilitate hearing. This dissertation focuses on speech separation and its integration with AAD, aiming to propel the evolution of brain-controlled hearing technologies. The goal is to help users engage in conversations with the people around them seamlessly and efficiently.
This dissertation is structured into two parts. The first part focuses on automatic speech separation models, beginning with a real-time monaural speech separation model, followed by more advanced real-time binaural models. The binaural models use both spectral and spatial features to separate speakers and are more robust to noise and reverberation. Beyond performing separation, they preserve the interaural cues of the separated sound sources, a significant step towards immersive augmented hearing. Additionally, the first part explores using speaker identity to improve the performance and robustness of long-form speech separation, and delves into unsupervised learning methods for multi-channel speech separation, aiming to improve the models' ability to generalize to real-world audio.
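The masking idea behind such separation models can be illustrated with a toy oracle example: a ratio mask computed from the clean sources, applied to the mixture spectrum. Real separation networks estimate such time-frequency masks from the mixture alone, and operate on streaming short-time frames rather than a whole-signal FFT; this oracle version is only an upper-bound sketch, not the dissertation's model.

```python
import numpy as np

def oracle_mask_separation(s1, s2):
    """Separate a two-speaker mixture with an oracle ratio mask.

    The mask in each frequency bin is the fraction of magnitude
    belonging to speaker 1; applying it (and its complement) to the
    mixture spectrum recovers estimates of both speakers.
    """
    mix = s1 + s2
    S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
    M = np.fft.rfft(mix)
    mask = np.abs(S1) / (np.abs(S1) + np.abs(S2) + 1e-8)  # ratio mask in [0, 1]
    est1 = np.fft.irfft(mask * M, n=len(mix))
    est2 = np.fft.irfft((1.0 - mask) * M, n=len(mix))
    return est1, est2
```

When the two sources occupy distinct frequency regions, the mask recovers each one almost perfectly; overlapping spectra are what make the learned, mixture-only estimation problem hard.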
The second part of the dissertation integrates the speech separation models from the first part with auditory attention decoding (SS-AAD) to develop brain-controlled augmented hearing systems. It is demonstrated that auditory attention decoding with automatically separated speakers is as accurate and fast as with clean speech. Furthermore, to better align the experimental environment of SS-AAD systems with real-life scenarios, the second part introduces a new AAD task that closely simulates real-world complex acoustic settings. The results show that the SS-AAD system improves speech intelligibility and facilitates tracking of the attended speaker in realistic acoustic environments. Finally, this part employs self-supervised speech representations in SS-AAD systems to enhance the neural decoding of attentional selection.
3. A General Purpose Digital Signal Processing System
Myer, Christopher P., 01 January 1989
This report introduces a novel architecture for a general purpose digital signal processing system and applies it to implement a digital hearing aid (DHA). The theory and implementation revolve around the architecture of the digital signal processor (DSP) and its use. The system consists of three subsystems: the Analog Interface Board, the DAAD Board, and the DSP Board. The design takes into consideration both the basic needs of such a system and the many features that make it efficient across a wide range of applications. The system was used as a testbed for implementing various real-time DSP algorithms, one of which addresses the problem of hearing loss. The final implementation examines both the feasibility of the DHA and the usefulness of the general purpose digital signal processing system in an arbitrary application. Suggestions for future modification and expansion are discussed.
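The core DHA operation, frequency-shaped amplification, can be sketched in a few lines. This is a generic frequency-domain illustration only: the band edges and gains below are made up, and a real-time aid would process short blocks through its DSP rather than transform the whole signal at once.

```python
import numpy as np

def apply_band_gains(x, fs, bands, gains_db):
    """Boost (or cut) each frequency band by the requested gain in dB.

    bands: list of (low_hz, high_hz) edges; gains_db: one gain per band.
    A minimal stand-in for a hearing aid's frequency-shaping stage,
    e.g. boosting high frequencies to compensate a sloping loss.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    gain = np.ones_like(freqs)
    for (lo, hi), g_db in zip(bands, gains_db):
        gain[(freqs >= lo) & (freqs < hi)] = 10.0 ** (g_db / 20.0)
    return np.fft.irfft(gain * X, n=len(x))
```

A 20 dB band gain multiplies the amplitude of in-band components by ten while leaving out-of-band components untouched, which is the behavior a fitting procedure would tune per band to the wearer's audiogram.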