1. The neural basis of musical consonance. Bones, Oliver. January 2014.
Three studies were designed to determine the relation between subcortical neural temporal coding and the perception of musical consonance. Consonance describes the pleasing perception of resolution and stability that occurs when musical notes with simple frequency ratios are combined. Recent work suggests that consonance is likely to be driven by the perception of ‘harmonicity’, i.e. the extent to which the frequency components of the combined spectrum of two or more notes share a common fundamental frequency and therefore resemble a single complex tone (McDermott et al., 2010, Curr Biol). The publication in Chapter 3 describes a method for measuring the harmonicity of the neural phase locking represented by the frequency-following response (FFR). The FFR is a scalp-recorded auditory evoked potential, generated by neural phase locking and named for the characteristic peaks in its waveform with periods corresponding to the frequencies present in the fine structure and envelope of the stimulus. The studies in Chapters 4 and 5 demonstrate that this method predicts individual differences in the perception of consonance in young normal-hearing listeners, both with and without musical experience. The results of the study in Chapter 4 also demonstrate that phase locking to distortion products arising from monaural cochlear interactions, which enhances the harmonicity of the FFR, may also increase the perceived pleasantness of consonant combinations of notes. The results of the study in Chapter 5 suggest that the FFR to two-note chords consisting of frequencies below 2500 Hz is likely to be generated in part by a basal region of the cochlea tuned above this frequency range. The results of this study also demonstrate that the effects of high-frequency masking noise can be accounted for by a model of a saturating inner hair-cell receptor potential.
Finally, the study in Chapter 6 demonstrates that age is related to a decline in the distinction between the representation of the harmonicity of consonant and dissonant dyads in the FFR, concurrent with a decline in the perceptual distinction between the pleasantness of consonant and dissonant dyads. Overall the results of the studies in this thesis provide evidence that consonance perception can be explained in part by subcortical neural temporal coding, and that age-related declines in temporal coding may underlie a decline in the perception of consonance.
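McDermott et al.'s notion of harmonicity, i.e. how well a dyad's combined spectrum fits a single harmonic series, can be illustrated with a toy harmonic-sieve score. This is a hypothetical sketch for intuition only, not the metric used in the thesis; the tolerance and candidate-fundamental grid below are arbitrary assumptions.

```python
import numpy as np

def dyad_partials(f0_low, ratio, n_harmonics=10):
    """Frequency components of two complex tones forming a dyad."""
    low = f0_low * np.arange(1, n_harmonics + 1)
    return np.concatenate([low, low * ratio])

def harmonicity(partials, f0_grid, tol=0.03):
    """Fraction of partials within `tol` harmonic numbers of an integer
    multiple of the best-fitting candidate fundamental."""
    best = 0.0
    for f0 in f0_grid:
        ratios = partials / f0
        hits = np.abs(ratios - np.round(ratios)) < tol
        best = max(best, hits.mean())
    return best

grid = np.arange(50.0, 300.5, 0.5)                          # candidate F0s (Hz)
fifth = harmonicity(dyad_partials(220.0, 3 / 2), grid)      # consonant 3:2
tritone = harmonicity(dyad_partials(220.0, 45 / 32), grid)  # dissonant 45:32
```

For the 3:2 perfect fifth every partial is an exact multiple of the implied 110 Hz fundamental, so the score is 1.0, whereas the 45:32 tritone leaves many partials unaccounted for at any candidate fundamental in the grid.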

2. Effects of early language experiences on the auditory brainstem. Chang, Andrea Chi-Ling. 06 July 2018.
Recent studies have come to contradictory conclusions as to whether international adoptees, who experience a sudden change in language environment, lose or retain traces of their birth language (Pallier et al., 2003; Ventureyra, Pallier & Yoo, 2004; Pierce, Klein, Chen, Delcenserie, & Genesee, 2014). Though these studies have considered cortical differences between international adoptees and individuals from their birth countries, none has looked at subcortical differences in the brain between the two groups. The current project examined the frequency following response (FFR) of adult Chinese international adoptees (N = 9) adopted as infants by American English-speaking families in the United States, compared to native Mandarin (N = 21) and American English (N = 21) controls. Additional behavioral tasks explored different levels of linguistic features, from phonetics to phonology to semantic knowledge to suprasegmental characteristics of speech. The FFR results indicate mostly good pitch-tracking abilities among the adoptees, which may support future tonal language learning. The behavioral data suggest that the adoptees have minimal access to all levels of linguistic processing (i.e., phonetic, phonological, lexical, suprasegmental) after adoption and early exposure to English. Overall, the data provide evidence for the neural commitment theory that humans' language acquisition is attuned to the language environment early in life.
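Pitch tracking in FFR studies is commonly assessed by estimating the response F0 and comparing it with the stimulus contour. A minimal autocorrelation-based F0 estimator applied to a simulated FFR-like waveform; all parameters here are illustrative assumptions, not the study's analysis pipeline.

```python
import numpy as np

def estimate_f0(signal, fs, fmin=80.0, fmax=400.0):
    """Autocorrelation pitch estimate: the lag of the autocorrelation
    peak within the plausible F0 range, converted to Hz."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi + 1])
    return fs / lag

fs = 8000
t = np.arange(int(0.1 * fs)) / fs
# Simulated FFR-like waveform: 120 Hz fundamental plus a harmonic and noise
rng = np.random.default_rng(0)
wave = (np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
        + 0.1 * rng.standard_normal(t.size))
f0 = estimate_f0(wave, fs)
```

The estimate lands near the true 120 Hz fundamental (quantized to an integer sample lag at this sampling rate).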

3. Characterization and Classification of the Frequency Following Response to Vowels at Different Sound Levels in Normal Hearing Adults. Heffernan, Brian. 12 February 2019.
This work seeks to more fully characterize how the representation of English vowels changes with increasing sound level in the frequency following response (FFR) of normal-hearing adult subjects. It further seeks to help inform the design of brain-computer interfaces (BCI) that exploit the FFR for hearing aid (HA) applications. The results of three studies are presented, followed by a theoretical examination of the potential BCI space as it relates to HA design.
The first study examines how the representation of a long vowel changes with level in normal-hearing subjects. The second study examines how the representation of four short vowels changes with level in normal-hearing subjects. The third study utilizes machine learning techniques to automatically classify the FFRs captured in the second study. Based in part on the findings from these three studies, potential avenues to pursue with respect to the utilization of the FFR in the automated fitting of HAs are proposed.
The results of the first two studies suggest that the FFR to vowel stimuli presented at levels in the typical speech range provides robust and differentiable representations of both the envelope and temporal fine structure cues present in the stimuli, in both the time and frequency domains. The envelope FFR at the fundamental frequency (F0) generally did not increase monotonically with level. The growth of the harmonics of F0 in the envelope FFR was a consistent indicator of level-related effects, as were the harmonics related to the first and second formants.
The third study indicates that common machine-learning classification algorithms are able to exploit features extracted from the FFR, in both the time and frequency domains, to accurately predict both vowel and level classes among responses. This has positive implications for future work on BCI-based approaches to HA fitting, where controlling for clarity and loudness is an important consideration.
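The classification approach can be sketched with scikit-learn on simulated responses. The stimuli, spectral-magnitude features, and logistic-regression classifier below are illustrative stand-ins, not the thesis's actual data or algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs, dur = 2000, 0.2
t = np.arange(int(fs * dur)) / fs

def simulated_ffr(weights, n_trials=40):
    """Toy FFR-like responses: harmonics of a 100 Hz F0 whose relative
    weights stand in for vowel-dependent spectral differences."""
    clean = sum(w * np.sin(2 * np.pi * 100 * (k + 1) * t)
                for k, w in enumerate(weights))
    return np.array([clean + 0.5 * rng.standard_normal(t.size)
                     for _ in range(n_trials)])

# Two "vowel" classes differing in harmonic weighting
X = np.vstack([simulated_ffr([1.0, 0.8, 0.2]), simulated_ffr([1.0, 0.2, 0.8])])
y = np.repeat([0, 1], 40)
feats = np.abs(np.fft.rfft(X, axis=1))        # spectral-magnitude features
acc = cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=5).mean()
```

With clearly distinct harmonic weightings the cross-validated accuracy is near ceiling, mirroring the finding that standard classifiers can separate vowel classes from FFR spectra.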

4. Classification of Frequency Following Responses to English Vowels in a Biometric Application. Sun, Rui. 27 May 2020.
The objective of this thesis is to characterize and identify the representation of four short English vowels in the frequency following response (FFR) of 22 normal-hearing adult subjects. The results of two studies are presented and analyzed.
The first study shows how the FFR to four short vowels can be used to identify different subjects. A rigorous test-retest analysis verified the quality and consistency of each subject's responses, in order to provide strong and representative features for subject identification.
The second study utilized machine learning and deep learning classification algorithms to exploit features extracted from the FFRs, in both the time and frequency domains, to accurately identify subjects from their responses. Three classifiers, each targeting a different aspect of the features, yielded a peak classification accuracy of 86.36%.
The results of the studies provide positive and important implications for establishing a biometric authentication system using speech-evoked FFRs.
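Subject identification from FFRs can be sketched as template matching between a test session and a retest session. Everything below (the harmonic structure encoding subject identity, the noise level, the correlation classifier) is a toy assumption, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(400) / 2000.0          # 200 ms at 2 kHz
n_subjects = 10

def trial(amps, phases, noise=0.3):
    """One noisy FFR-like trial; subject identity is encoded in the
    amplitudes and phases of the first four harmonics of 100 Hz
    (a stand-in for the individual differences the study exploits)."""
    clean = sum(a * np.sin(2 * np.pi * 100 * (k + 1) * t + p)
                for k, (a, p) in enumerate(zip(amps, phases)))
    return clean + noise * rng.standard_normal(t.size)

amps = rng.uniform(0.2, 1.0, (n_subjects, 4))
phases = rng.uniform(0, 2 * np.pi, (n_subjects, 4))
templates = np.array([trial(a, p) for a, p in zip(amps, phases)])  # "test"
retest = np.array([trial(a, p) for a, p in zip(amps, phases)])     # "retest"

# Identify each retest trial as the subject whose test-session
# template it correlates with best
corr = np.corrcoef(retest, templates)[:n_subjects, n_subjects:]
accuracy = (corr.argmax(axis=1) == np.arange(n_subjects)).mean()
```

When responses are stable across sessions relative to the noise, this nearest-template rule identifies subjects reliably, which is the premise of the biometric application.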

5. Noise Separation in Frequency Following Responses through Non-negative Matrix Factorizations. Hart, Breanna N. 10 September 2021.
No description available.
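No abstract is available, but the title names non-negative matrix factorization (NMF). A generic multiplicative-update NMF separating a toy spectrogram-like matrix into two nonnegative components; this is a sketch of the technique in general, not the author's method.

```python
import numpy as np

def nmf(V, rank, n_iter=300, seed=0):
    """Multiplicative-update NMF (Lee-Seung, squared-error objective):
    factor a nonnegative matrix V into W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (V.shape[0], rank))
    H = rng.uniform(0.1, 1.0, (rank, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis spectra
    return W, H

# Toy mixture: a narrowband "response" component plus a broadband
# "noise" component, combined into a nonnegative spectrogram-like matrix
response = np.outer([0.0, 1.0, 0.1, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0])
noise = np.outer([0.3] * 5, [0.2, 1.0, 0.2, 1.0])
V = response + noise
W, H = nmf(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the mixture is exactly rank 2 and nonnegative, the factorization recovers the two components up to scaling, which is the mechanism noise separation by NMF relies on.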

6. Changes in Auditory Evoked Responses due to Blast and Aging. Han, Emily X. (10724001). 05 May 2021.
Hearing loss of various types is increasingly prevalent in the modern world (Geneva: World Health Organization, 2018). As life expectancy has increased in the industrialized world, age-related hearing loss (ARHL) has become more common. Modern wars and terrorism have also created a significant population of patients with blast-induced hearing loss (BIHL). Both types of hearing loss present significant challenges for listeners even at suprathreshold sound levels. However, a growing body of clinical and laboratory evidence suggests that difficulties in processing the time-varying auditory features of speech and other natural sounds may not be sufficiently diagnosed by threshold changes and simple auditory electrophysiological measures (Snell and Frisina 2000; Saunders et al. 2015; Bressler et al. 2017; Guest et al. 2018).

Studies have emphasized that excitatory/inhibitory neurotransmission imbalance plays an important role in ARHL (Caspary et al. 2008) and may also be key in BIHL, as hinted by the strong involvement of GABA regulation in non-blast TBI (O'Dell et al. 2000; Cantu et al. 2015; Guerriero et al. 2015). The current studies focus on age-related and blast-induced hearing deficits by examining changes in the processing of simple, brief stimuli and of complex, sustained, temporally modulated sounds.

Through post hoc circular analysis of single-unit, in vivo recordings of young and aged inferior colliculus (IC) neurons responding to amplitude modulation (AM) stimuli and modulation depth changes, we observed evidence of central compensation in the IC, manifesting as increased sensitivity to presynaptic input as measured via local field potentials (LFPs). We also found decreased sensitivity to decreasing modulation depth.
Age-related central gain in IC single units preserved, and even overcompensated for, temporal phase coding in the form of vector strength, but was unable to make up for the loss of envelope shape coding.

Through careful longitudinal measurements of auditory evoked potential (AEP) responses to simple sounds, AM, and speech-like iterated rippled noise (IRN), we documented the development of, and recovery from, BIHL induced by a single mild blast in a previously established rat blast model (Song et al. 2015; Walls et al. 2016; Race et al. 2017) over the course of two months. We identified crucial acute (days 1-4 post-exposure) and early-recovery (days 7-14) time windows in which drastic electrophysiological changes take place. Challenging conditions and broadband, speech-like stimuli better elucidate mild bTBI-induced auditory deficits during the sub-acute period. The anatomical significance of these time windows was demonstrated with immunohistochemistry, which showed two distinct waves of change in GABAergic inhibitory transmission in the auditory brainstem, the IC, and the auditory thalamus, in addition to the axonal and oxidative damage evident in the acute phase. We examined the roles and patterns of excitatory/inhibitory imbalance in BIHL and its distinction from that of ARHL, and demonstrated the complexity of its electrophysiological consequences. Blast traumatizes the peripheral auditory system and auditory brainstem through membrane damage and acrolein-mediated oxidative stress. These initial traumas kick-start a unique, interlocking cascade of excitatory/inhibitory imbalances along the auditory neuraxis that is more complex and individually varied than the gradual, non-traumatic degradation seen in ARHL.
Systemic treatment with the FDA-approved acrolein scavenger hydralazine (HZ) was attempted, with limited effect.

Taken together, these studies provide insight into the similarities and distinctions between the mechanisms of ARHL and BIHL, and call for innovative, individualized diagnostic and therapeutic measures.
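Vector strength, the phase-coding measure mentioned above, is the magnitude of the mean resultant vector of spike phases relative to the modulation period (the classic Goldberg and Brown metric). A minimal sketch with simulated spike trains:

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Vector strength: magnitude of the mean phase vector of spike
    times at the modulation frequency. 1 = perfect phase locking,
    near 0 = no phase locking."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return np.abs(np.exp(1j * phases).mean())

rng = np.random.default_rng(3)
f_mod = 40.0                                  # 40 Hz amplitude modulation
cycles = np.arange(200) / f_mod               # one spike per AM cycle
locked = cycles + rng.normal(0, 0.0005, 200)  # 0.5 ms jitter: tight locking
unlocked = rng.uniform(0, 5.0, 200)           # spikes at random times
vs_locked = vector_strength(locked, f_mod)
vs_unlocked = vector_strength(unlocked, f_mod)
```

Tightly phase-locked spikes score close to 1, while random spike times score near 0, which is why the measure can remain high even when envelope shape coding degrades.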

7. Human Frequency Following Responses to Voice Pitch: Relative Contributions of the Fundamental Frequency and Its Harmonics. Costilow, Cassie E. 06 July 2010.
No description available.

8. The Ocular Following Response (OFR) as a Probe of Abnormal Visuomotor Tracking. Joshi, Anand C. 17 May 2010.
No description available.

9. Linking neural population activity in primary visual cortex with oculomotor behavior: from fixational saccades to V1, and from V1 to the ocular following response. Montardy, Quentin. 20 December 2012.
We analyzed population activity in primary visual cortex (V1) to understand (i) the consequences of eye movements for the integration of visual information and, conversely, (ii) the influence of processing in V1 on the generation of eye movements.

1. We recorded fixational saccades, relating these eye movements, trial by trial, to the representation of the position of a local stimulus in V1. After a fixational saccade, activity moves coherently across V1. The time course of the responses at the pre- and post-saccadic foci shows a biphasic dynamic, and the extent of the cortical activity representing the local stimulus increases. We propose that the behavior of the neural populations studied is explained by two main phenomena: (i) an early suppressive response attributable to the corollary discharge, and (ii) lateral connections generating interactions between the pre- and post-saccadic loci of activity.

2. We recorded the ocular following response (OFR), asking whether the V1 response influences the oculomotor response. We studied the contrast response function (CRF) of the V1 population activity and of the OFR; the CRF dynamics for a local stimulus are similar and shifted in time, with VSD latencies preceding OFR latencies. However, there was no correlation between single-trial latencies in V1 and the OFR, and the strength and dynamics of the V1 responses were not predictive of the OFR. At the chosen scale, surround suppression was distance-dependent only in V1: the distance of the surround affected the VSD response but not the OFR. The dynamics of the surround suppression show two phases: an early suppression present over a wide cortical area, and a later peripheral spread. We propose that the early surround suppression originates from feedback from MT and MST, while the later phase is explained by horizontal connections.

10. Behavioral and auditory evoked potential (AEP) hearing measurements in odontocete cetaceans. Cook, Mandy Lee Hill. 01 June 2006.
Bottlenose dolphins (Tursiops truncatus) and other odontocete cetaceans rely on sound for communication, navigation, and foraging; hearing is therefore one of their primary sensory modalities. Both natural and anthropogenic noise in the marine environment could mask the ability of free-ranging dolphins to detect sounds, and chronic noise exposure could cause permanent hearing loss. In addition, several mass strandings of odontocete cetaceans, especially beaked whales, have been correlated with military exercises involving mid-frequency sonar, highlighting unknowns regarding hearing sensitivity in these animals.

Auditory evoked potential (AEP) methods are attractive alternatives to traditional behavioral methods for measuring the hearing of marine mammals because they allow rapid assessment of hearing sensitivity and can be used on untrained animals. The goals of this study were to (1) investigate the differences among underwater AEP, in-air AEP, and underwater behavioral hearing measurements using two captive bottlenose dolphins; (2) investigate the hearing abilities of a population of free-ranging bottlenose dolphins in Sarasota Bay, Florida, using AEP techniques; and (3) report the hearing abilities of a stranded juvenile beaked whale (Mesoplodon europaeus) measured using AEP techniques.

For the two captive dolphins, there was generally good agreement among the hearing thresholds determined by the three test methods at frequencies above 20 kHz. At 10 and 20 kHz, in-air AEP thresholds were substantially higher (about 15 dB) than underwater behavioral and underwater AEP thresholds.

For the free-ranging dolphins of Sarasota Bay, there was considerable individual variation in hearing ability, up to 80 dB between individuals. There was no relationship between age, gender, or PCB load and hearing sensitivity. A 52-year-old captive-born bottlenose dolphin showed hearing thresholds similar to the Sarasota dolphins up to 80 kHz, but exhibited a 50 dB drop in sensitivity at 120 kHz.

Finally, the beaked whale was most sensitive to high-frequency signals between 40 and 80 kHz, but produced smaller evoked potentials at 5 kHz, the lowest frequency tested. The beaked whale's hearing range and sensitivity were similar to those of other odontocetes that have been measured.
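AEP thresholds of the kind reported here are often estimated by regressing evoked-response amplitude on stimulus level and extrapolating the supra-threshold growth function to zero amplitude. A sketch with invented numbers, not data from this study:

```python
import numpy as np

# Hypothetical AEP amplitude-vs-level series at one test frequency
# (illustrative values: levels in dB SPL, amplitudes in microvolts)
levels = np.array([60.0, 70.0, 80.0, 90.0, 100.0])
amplitudes = np.array([0.02, 0.11, 0.19, 0.32, 0.41])

# Fit the supra-threshold points (above the flat noise floor) with a
# line and extrapolate to zero amplitude to estimate threshold
slope, intercept = np.polyfit(levels[1:], amplitudes[1:], 1)
threshold_db = -intercept / slope
```

For these invented values the extrapolated threshold is about 60 dB SPL; repeating the procedure across frequencies yields the audiogram-style threshold curves compared across methods and animals above.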