1. The neural basis of musical consonance

Bones, Oliver (January 2014)
Three studies were designed to determine the relation between subcortical neural temporal coding and the perception of musical consonance. Consonance describes the pleasing perception of resolution and stability that occurs when musical notes with simple frequency ratios are combined. Recent work suggests that consonance is likely to be driven by the perception of ‘harmonicity’, i.e. the extent to which the frequency components of the combined spectrum of two or more notes share a common fundamental frequency and therefore resemble a single complex tone (McDermott et al., 2010, Curr Biol). The publication in Chapter 3 is a paper describing a method for measuring the harmonicity of neural phase locking represented by the frequency-following response (FFR). The FFR is a scalp-recorded auditory evoked potential, generated by neural phase locking and named for the characteristic peaks in its waveform, whose periods correspond to the frequencies present in the fine structure and envelope of the stimulus. The studies in Chapters 4 and 5 demonstrate that this method predicts individual differences in the perception of consonance in young normal-hearing listeners, both with and without musical experience. The results of the study in Chapter 4 also demonstrate that phase locking to distortion products resulting from monaural cochlear interactions, which enhance the harmonicity of the FFR, may also increase the perceived pleasantness of consonant combinations of notes. The results of the study in Chapter 5 suggest that the FFR to two-note chords consisting of frequencies below 2500 Hz is likely to be generated in part by a basal region of the cochlea tuned above this frequency range. The results of this study also demonstrate that the effects of high-frequency masking noise can be accounted for by a model of a saturating inner hair-cell receptor potential. Finally, the study in Chapter 6 demonstrates that age is related to a decline in the distinction between the representation of the harmonicity of consonant and dissonant dyads in the FFR, concurrent with a decline in the perceptual distinction between the pleasantness of consonant and dissonant dyads. Overall, the results of the studies in this thesis provide evidence that consonance perception can be explained in part by subcortical neural temporal coding, and that age-related declines in temporal coding may underlie a decline in the perception of consonance.
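The harmonicity measure described above is not specified in detail in this listing, but the general idea, quantifying how much of the FFR's spectral energy falls at harmonics of a common fundamental, can be sketched as follows. This is a minimal illustration in Python; the analysis window, frequency tolerance, F0 search range, and synthetic signal are assumptions for the example, not values or methods taken from the thesis.

```python
# Hypothetical sketch: harmonic-to-total power ratio of an FFR spectrum,
# maximized over candidate fundamentals. Not the thesis's actual metric.
import numpy as np

def ffr_harmonicity(ffr, fs, f0_range=(80.0, 400.0), tol_hz=5.0):
    """Return the best candidate F0 and the proportion of FFR power at its harmonics."""
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr)))) ** 2
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    total_power = spectrum.sum()
    best_f0, best_score = None, -np.inf
    for f0 in np.arange(f0_range[0], f0_range[1], 1.0):
        mask = np.zeros_like(freqs, dtype=bool)
        for h in np.arange(f0, freqs[-1], f0):        # f0, 2*f0, 3*f0, ...
            mask |= np.abs(freqs - h) <= tol_hz       # bins within tol_hz of a harmonic
        score = spectrum[mask].sum() / total_power    # harmonic-to-total power ratio
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0, best_score

# Synthetic "FFR" phase-locked to a 100 Hz complex, plus noise.
fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
fake_ffr = sum(np.sin(2 * np.pi * k * 100 * t) for k in (1, 2, 3)) + 0.1 * np.random.randn(len(t))
print(ffr_harmonicity(fake_ffr, fs))   # expected: best F0 near 100 Hz, score close to 1
```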
2. Effects of early language experiences on the auditory brainstem

Chang, Andrea Chi-Ling (06 July 2018)
Recent studies have come to contradictory conclusions as to whether international adoptees, who experience a sudden change in language environment, lose or retain traces of their birth language (Pallier et al., 2003; Ventureyra, Pallier & Yoo, 2004; Pierce, Klein, Chen, Delcenserie, & Genesee, 2014). Though these studies have considered cortical differences between international adoptees and individuals from their birth countries, none has looked at subcortical differences in the brain between the two groups. The current project examined the frequency following response (FFR) of adult Chinese international adoptees (N = 9), adopted as infants by American English-speaking families in the United States, compared with native Mandarin (N = 21) and American English (N = 21) controls. Additional behavioral tasks were completed to explore different levels of linguistic features, from phonetics to phonology to semantic knowledge to suprasegmental characteristics of speech. The FFR results indicate mostly good pitch-tracking abilities amongst the adoptees, which may support future tonal language learning. The behavioral data suggest that the adoptees have minimal access to all levels of linguistic processing (i.e., phonetic, phonological, lexical, suprasegmental) after adoption and after early exposure to English. Overall, the data provide evidence for the neural commitment theory that humans’ language acquisition is attuned to their language environment early on in life.
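The pitch-tracking result above presumably rests on comparing the F0 contour of the FFR with that of the stimulus; a minimal sketch of that kind of analysis is shown below. The frame length, hop size, F0 search range, and synthetic rising-pitch stimulus are illustrative assumptions, not the study's actual materials or procedure.

```python
# Hypothetical sketch: frame-wise autocorrelation F0 tracking of an FFR,
# scored by its correlation with the stimulus F0 track.
import numpy as np

def track_f0(signal, fs, frame_ms=40.0, hop_ms=10.0, fmin=80.0, fmax=400.0):
    """Autocorrelation-based F0 estimate (Hz) for each frame."""
    frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    f0s = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame] * np.hanning(frame)
        ac = np.correlate(x, x, mode="full")[frame - 1:]   # keep non-negative lags
        lag = lag_min + np.argmax(ac[lag_min:lag_max])     # strongest lag in the F0 range
        f0s.append(fs / lag)
    return np.array(f0s)

# Synthetic rising-pitch "stimulus" (100 -> 180 Hz) and a noisy "FFR" that follows it.
fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
f0_contour = 100 + 80 * t / t[-1]
stimulus = np.sin(2 * np.pi * np.cumsum(f0_contour) / fs)
ffr = stimulus + 0.5 * np.random.randn(len(t))

stim_track, ffr_track = track_f0(stimulus, fs), track_f0(ffr, fs)
print("pitch-tracking r =", np.corrcoef(stim_track, ffr_track)[0, 1])
```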
3. Characterization and Classification of the Frequency Following Response to Vowels at Different Sound Levels in Normal Hearing Adults

Heffernan, Brian (12 February 2019)
This work seeks to more fully characterize how the representation of English vowels changes with increasing sound level in the frequency following response (FFR) of normal-hearing adult subjects. It further seeks to help inform the design of brain-computer interfaces (BCI) that exploit the FFR for hearing aid (HA) applications. The results of three studies are presented, followed by a theoretical examination of the potential BCI space as it relates to HA design. The first study examines how the representation of a long vowel changes with level in normal-hearing subjects. The second study examines how the representation of four short vowels changes with level in normal-hearing subjects. The third study utilizes machine learning techniques to automatically classify the FFRs captured in the second study. Based in part on the findings from these three studies, potential avenues to pursue with respect to the utilization of the FFR in the automated fitting of HAs are proposed. The results of the first two studies suggest that the FFR to vowel stimuli presented at levels in the typical speech range provides robust and differentiable representations of both envelope and temporal fine structure cues present in the stimuli, in both the time and frequency domains. The envelope FFR at the fundamental frequency (F0) generally did not increase monotonically with level. The growth of the harmonics of F0 in the envelope FFR was a consistent indicator of level-related effects, as were the harmonics related to the first and second formants. The third study indicates that common machine-learning classification algorithms are able to exploit features extracted from the FFR, in both the time and frequency domains, to accurately predict both vowel and level classes among responses. This has positive implications for future work regarding BCI-based approaches to HA fitting, where controlling for clarity and loudness is an important consideration.
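Envelope and temporal-fine-structure components of the FFR are commonly separated by adding and subtracting responses to opposite-polarity presentations of the stimulus, and level effects are then read off as spectral magnitudes at F0, its harmonics, and the harmonics nearest the formants. The sketch below illustrates that general convention; the polarity assumption, formant frequencies, and synthetic responses are illustrative and not taken from the thesis.

```python
# Hypothetical sketch: envelope vs. fine-structure FFR metrics from
# opposite-polarity responses. Not the thesis's actual analysis pipeline.
import numpy as np

def spectral_magnitude(resp, fs, freq, bw_hz=5.0):
    """Mean FFT magnitude in a narrow band around `freq`."""
    mags = np.abs(np.fft.rfft(resp * np.hanning(len(resp))))
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    return mags[np.abs(freqs - freq) <= bw_hz].mean()

def envelope_and_tfs_metrics(resp_pos, resp_neg, fs, f0, formants=(600.0, 1200.0)):
    env = (resp_pos + resp_neg) / 2.0                  # envelope-dominated component
    tfs = (resp_pos - resp_neg) / 2.0                  # fine-structure-dominated component
    metrics = {"env_F0": spectral_magnitude(env, fs, f0)}
    for k in (2, 3, 4):                                # growth of F0 harmonics in the envelope FFR
        metrics[f"env_{k}F0"] = spectral_magnitude(env, fs, k * f0)
    for i, f in enumerate(formants, start=1):
        harmonic = round(f / f0) * f0                  # harmonic nearest the formant peak
        metrics[f"tfs_F{i}"] = spectral_magnitude(tfs, fs, harmonic)
    return metrics

def transduce(x):
    return np.maximum(x, 0.0)                          # crude half-wave rectification stand-in

# Synthetic opposite-polarity "responses" to a 100 Hz vowel-like complex.
fs, f0 = 16000, 100.0
t = np.arange(0, 0.3, 1.0 / fs)
vowel = sum(np.sin(2 * np.pi * k * f0 * t) for k in (1, 2, 6, 12))
resp_pos = transduce(vowel) + 0.2 * np.random.randn(len(t))
resp_neg = transduce(-vowel) + 0.2 * np.random.randn(len(t))
print(envelope_and_tfs_metrics(resp_pos, resp_neg, fs, f0))
```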
4. Classification of Frequency Following Responses to English Vowels in a Biometric Application

Sun, Rui (27 May 2020)
The objective of this thesis is to characterize and identify the representation of four short English vowels in the frequency following response (FFR) of 22 normal-hearing adult subjects. The results of two studies are presented, with some analysis. The first study indicates how the FFR signals to the four short vowels can be used to identify different subjects. In addition, a rigorous comparison was conducted to verify the quality and consistency of each subject's responses between test and retest, in order to provide strong and representative features for subject identification. The second study utilized machine learning and deep learning classification algorithms to exploit features extracted from the FFRs, in both the time and frequency domains, to accurately identify subjects from their responses. Three kinds of classifiers were applied to three aspects of the features, yielding a highest classification accuracy of 86.36%. The results of the studies have positive and important implications for establishing a biometric authentication system using speech-evoked FFRs.
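One plausible shape for the pipeline described above is: extract spectral features from each FFR, train a standard classifier to predict subject identity, and evaluate on a held-out retest session. The sketch below follows that shape with scikit-learn; the feature choice, classifier, and synthetic data are assumptions for illustration and have no connection to the thesis's actual models or its 86.36% result.

```python
# Hypothetical sketch: subject identification from FFR spectral features,
# trained on a "test" session and evaluated on a "retest" session.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def ffr_features(resp, fs, fmax=1000.0):
    """FFT magnitude spectrum below fmax as a feature vector."""
    mags = np.abs(np.fft.rfft(resp * np.hanning(len(resp))))
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    return mags[freqs <= fmax]

rng = np.random.default_rng(0)
fs, n_subjects, n_trials = 8000, 5, 20
t = np.arange(0, 0.25, 1.0 / fs)

def fake_ffr(subject):
    f0 = 100 + 5 * subject                             # subject-specific spectral signature
    clean = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
    return clean + 0.8 * rng.standard_normal(len(t))

def make_session():
    X = [ffr_features(fake_ffr(s), fs) for s in range(n_subjects) for _ in range(n_trials)]
    y = [s for s in range(n_subjects) for _ in range(n_trials)]
    return np.array(X), np.array(y)

X_train, y_train = make_session()                      # "test" session used for training
X_retest, y_retest = make_session()                    # held-out "retest" session

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("retest identification accuracy:", clf.score(X_retest, y_retest))
```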
5. Noise Separation in Frequency Following Responses through Non-negative Matrix Factorizations

Hart, Breanna N. (10 September 2021)
No description available.
6. Human Frequency Following Responses to Voice Pitch: Relative Contributions of the Fundamental Frequency and Its Harmonics

Costilow, Cassie E. (06 July 2010)
No description available.
7. Neural representations of natural speech in a chinchilla model of noise-induced hearing loss

Satyabrata Parida (14 December 2020)
Hearing loss hinders the communication ability of many individuals despite state-of-the-art interventions. Animal models of different hearing-loss etiologies can help improve the clinical outcomes of these interventions; however, several gaps exist. First, translational aspects of animal models are currently limited because anatomically and physiologically specific data obtained from animals are analyzed differently compared to noninvasive evoked responses that can be recorded from humans. Second, we lack a comprehensive understanding of the neural representation of everyday sounds (e.g., naturally spoken speech) in real-life settings (e.g., in background noise). This is true even at the level of the auditory nerve, which is the first bottleneck of auditory information flow to the brain and the first neural site to exhibit crucial effects of hearing loss.

To address these gaps, we developed a unifying framework that allows direct comparison of invasive spike-train data and noninvasive far-field data in response to stationary and nonstationary sounds. We applied this framework to recordings from single auditory-nerve fibers and frequency-following responses from the scalp of anesthetized chinchillas with either normal hearing or noise-induced mild-to-moderate hearing loss, in response to a speech sentence in noise. Key results for speech coding following hearing loss include: (1) coding deficits for voiced speech manifest as tonotopic distortions without a significant change in driven rate or spike-time precision, (2) linear amplification aimed at countering audiometric threshold shift is insufficient to restore neural activity for low-intensity consonants, (3) susceptibility to background noise increases as a direct result of distorted tonotopic mapping following acoustic trauma, and (4) temporal-place representation of pitch is also degraded. Finally, we developed a noninvasive metric to potentially diagnose distorted tonotopy in humans. These findings help explain the neural origins of common perceptual difficulties that listeners with hearing impairment experience, offer several insights to make hearing aids more individualized, and highlight the importance of better clinical diagnostics and noise-reduction algorithms.
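The unifying framework itself is not described in this abstract, but the general idea of placing invasive and noninvasive data in a common spectral space can be sketched roughly as follows: bin auditory-nerve spike times into a peristimulus time histogram, take its spectrum, and compare the magnitudes at stimulus harmonics with those of the scalp-recorded FFR. The binning, harmonic count, and synthetic data below are illustrative assumptions, not the framework developed in the thesis.

```python
# Hypothetical sketch: comparing a spike-train-derived spectrum with an FFR
# spectrum at stimulus harmonics. Not the thesis's actual framework.
import numpy as np

def harmonic_profile(signal, fs, f0, n_harmonics=8, bw_hz=4.0):
    """Spectral magnitude at the first n_harmonics multiples of f0."""
    mags = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([mags[np.abs(freqs - k * f0) <= bw_hz].max()
                     for k in range(1, n_harmonics + 1)])

def spikes_to_psth(spike_times_s, duration_s, fs):
    """Bin spike times (in seconds) into a peristimulus time histogram sampled at fs."""
    counts, _ = np.histogram(spike_times_s, bins=int(duration_s * fs), range=(0.0, duration_s))
    return counts.astype(float)

# Synthetic example: spikes phase-locked to a 100 Hz voiced segment, plus a noisy FFR.
rng = np.random.default_rng(1)
fs, dur, f0 = 10000, 0.5, 100.0
t = np.arange(0, dur, 1.0 / fs)
drive = np.maximum(np.sin(2 * np.pi * f0 * t), 0.0)    # half-wave rectified "neural drive"
spike_times = t[rng.random(len(t)) < 0.1 * drive]      # inhomogeneous-Poisson-like spikes
ffr = drive - drive.mean() + 0.5 * rng.standard_normal(len(t))

an_profile = harmonic_profile(spikes_to_psth(spike_times, dur, fs), fs, f0)
ffr_profile = harmonic_profile(ffr, fs, f0)
print("harmonic-profile correlation:", np.corrcoef(an_profile, ffr_profile)[0, 1])
```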
