About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A three-channel model of human binaural sound localization

Dingle, Rachel Neville 23 March 2012
The most widely accepted model of mammalian binaural sound localization postulates two neural/perceptual channels with hemifield tuning and overlapping medial borders; the extent to which the two channels are co-activated by a source is the neural "code" for the source's azimuthal location. This model does not take into account physiological data on the existence of a population of cells with spatial receptive fields centered on the azimuthal midline. The following work tested the hypothesis that the mammalian binaural sound localization apparatus includes a third, midline-tuned channel. Ten experiments used a selective adaptation paradigm in human listeners to probe for the existence of a midline channel. Psychometric functions were obtained for lateral position based on interaural time difference (ITD) or interaural level difference (ILD), both before and after adaptation with high-frequency (2800 and 4200 Hz) or low-frequency (260 and 570 Hz) tones. Listeners experienced highly lateralized adaptor stimuli with different frequencies at each ear (asymmetrical adaptation), highly lateralized adaptor stimuli of the same frequency at each ear (symmetrical adaptation), and single-frequency adaptation at the midline (central adaptation). At both high and low frequencies, in both the ITD and ILD domains, location judgements after asymmetrical adaptation shifted away from the fatigued side. These shifts occurred across each adapted hemifield and extended slightly over the midline, consistent with the two-channel model. The two-channel model would predict no effect of symmetrical or central adaptation, because fatiguing both lateral channels equally would not change their relative activation by a given source. In practice, symmetrical adaptation shifted location judgements towards the midline, as would be expected if adaptation of the lateral channels resulted in a greater relative contribution of a third, midline channel. Likewise, central adaptation tended to shift perceived location towards the sides. The evidence for the midline channel was strong for high and low frequencies localized by ILD, and was present for low frequencies, but not high frequencies, localized by ITD.
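
As a concrete illustration of the two cues probed in this work, the following minimal Python/NumPy sketch generates a stereo tone carrying a specified interaural time difference and interaural level difference. The 570 Hz carrier matches one of the low-frequency tones above, but the ITD and ILD values and the function itself are illustrative assumptions, not the thesis's stimulus code.

```python
import numpy as np

def binaural_tone(freq_hz, itd_s, ild_db, dur_s=0.5, fs=44100):
    """Stereo tone: the right channel is delayed by itd_s seconds (ITD)
    and attenuated by ild_db decibels (ILD) relative to the left channel."""
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * freq_hz * t)
    right = np.sin(2 * np.pi * freq_hz * (t - itd_s))  # interaural time difference
    right *= 10 ** (-ild_db / 20)                       # interaural level difference
    return np.column_stack([left, right])

# Example: 570 Hz tone lateralized toward the left ear by a 300 us ITD and a 6 dB ILD.
stim = binaural_tone(570.0, itd_s=300e-6, ild_db=6.0)
```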
2

Repetitiv ljuddesign och dess inverkan på spelare : En undersökning av hur en repetitiv ljuddesign påverkar spelupplevelsen gentemot en varierande / Repetitive sound design and its impact on players : A study on how a repetitive sound design affects the gaming experience versus a varied

Stockselius, Christoffer January 2018
The purpose of the study was to examine repetitive sound design and the impact it has on players and their gaming experience. The study also examines whether repetitive sound design changes the participants' playing behaviour and whether certain types of sound are perceived as more repetitive than others. The background reviews a number of studies, theories, and research on repetitive sound design and other relevant areas; these were grouped into three areas relevant to the study's research question: how humans listen, sound linked to behavioural change, and repetition versus immersion. The prototype created for the research question consisted of a game in two versions, where version A had a varied sound design while version B instead had a repetitive sound design. Participants played both the varied and the repetitive version of the game so that they experienced both and could then describe the difference between them in a qualitative interview. While the participants played, their playing behaviour was also analysed through recordings of the play sessions. Overall, the results showed that repetitive sound design had a relatively negative impact on the participants' gaming experience but had less impact on their playing behaviour.
3

High-Frequency Energy in Singing and Speech

Monson, Brian Bruce January 2011
While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in the singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production.

In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments in which listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners also performed gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. Together, these experiments revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
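
One simple way to quantify the energy above 5 kHz discussed here is as a fraction of total spectral power; the sketch below is a generic NumPy/SciPy illustration (the cutoff, window length, and synthetic test signal are assumptions, not the analysis pipeline used in the thesis).

```python
import numpy as np
from scipy.signal import welch

def high_frequency_energy_ratio(x, fs, cutoff_hz=5000.0):
    """Fraction of total spectral power above cutoff_hz, from a Welch PSD estimate."""
    f, psd = welch(x, fs=fs, nperseg=2048)
    high = np.trapz(psd[f >= cutoff_hz], f[f >= cutoff_hz])
    return high / np.trapz(psd, f)

# Synthetic example: a 1 kHz tone with a weak 8 kHz component, sampled at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 8000 * t)
print(high_frequency_energy_ratio(x, fs))
```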
4

Effects of Aging and Corticofugal Modulation on Startle Behavior and Auditory Physiology

Marisa A Dowling (6689462) 10 June 2019
Frequency-modulated (FM) sweeps play a key role in species-specific communication. Evidence from previous studies has shown that central auditory processing varies with the language spoken, which points to experience-driven pitch encoding. Other studies have also shown that this pitch encoding declines with aging. Using both iterated rippled noise (IRN) and frequency-modulated amplitude modulation (FM/AM) methods to create complex pitch sweeps that mimic speech allows pitch processing to be assessed. Neuromodulation using pharmacogenetics allows targeted inhibition of a specific neural pathway. Based on previous studies, the pathway from primary auditory cortex to inferior colliculus (A1/IC) is hypothesized to be important in pitch encoding. However, there is a lack of evidence on exactly how pitch information is encoded in the auditory system and how aging impacts that processing. To address these issues, age-related changes in pitch encoding, and the maintenance of pitch encoding under neuromodulation, were characterized in rats using behavioral and electrophysiological methods. Behavioral discrimination between pitch-sweep directions and pitch-sweep creation methods, measured by modulation of the acoustic startle response, showed reduced discrimination in aging and in A1/IC-inhibited rats. Electrophysiological changes were assessed using envelope-following responses (EFRs) and suggested decreased initial frequency locking in aging and an overall decrease in frequency locking with A1/IC pathway inhibition. Comparison of behavioral and electrophysiological responses to IRN and FM/AM stimuli shows that the decrease in age-related processing, as well as in A1/IC pathway processing, is larger for behavioral pitch-sweep discrimination than for the reduction in EFRs.
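
Iterated rippled noise is conventionally built by repeatedly delaying a noise and adding it back to itself, which induces a pitch near the reciprocal of the delay; the sketch below shows that static delay-and-add construction (a simplified illustration with assumed parameters; the stimuli described above additionally sweep the pitch over time).

```python
import numpy as np

def iterated_rippled_noise(dur_s, delay_s, n_iter, gain=1.0, fs=44100, seed=0):
    """IRN via repeated delay-and-add: a pitch emerges near 1 / delay_s Hz."""
    rng = np.random.default_rng(seed)
    n = int(dur_s * fs)
    d = int(round(delay_s * fs))
    y = rng.standard_normal(n)
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), y[:-d]])  # delay by d samples
        y = y + gain * delayed                           # add back to the running signal
    return y / np.max(np.abs(y))  # normalize to avoid clipping

# Example: 8 iterations with a 4 ms delay give a pitch near 250 Hz.
irn = iterated_rippled_noise(dur_s=1.0, delay_s=0.004, n_iter=8)
```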
5

Chemogenetic Inhibition Of The Inferior Colliculus: Effects On Electrophysiology And Behavior

Nanami L. Miyazaki (5930753) 03 January 2019
Age-related hearing loss (ARHL), or presbycusis, has become a prevalent problem as the aging population has grown over the past century. Hearing aids, cochlear implants, and auditory brainstem implants have demonstrated efficacy, but performance varies among patients; a fuller understanding of the complex circuitry of the auditory system would therefore help improve current technology and support the development of alternative treatments. In the current study, chemogenetics (DREADDs) was used to inhibit neuronal activity in the pathway between the medial geniculate body and the inferior colliculus. The effects of chemogenetic inhibition were assessed with electrophysiological measures, including auditory evoked potential recordings and single-unit recordings, as well as behavioral measures using the acoustic startle response and prepulse inhibition paradigm.
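
Prepulse inhibition is conventionally summarized as the percentage reduction in startle magnitude when a prepulse precedes the startle stimulus; a minimal sketch of that calculation is shown below (the trial values are hypothetical, for illustration only).

```python
import numpy as np

def percent_ppi(startle_alone, startle_with_prepulse):
    """Percent prepulse inhibition: reduction in mean startle when a prepulse is present."""
    return 100.0 * (1.0 - np.mean(startle_with_prepulse) / np.mean(startle_alone))

# Hypothetical startle magnitudes (arbitrary units) across trials:
alone = [2.1, 1.8, 2.4, 2.0]
with_prepulse = [1.1, 0.9, 1.3, 1.0]
print(f"PPI = {percent_ppi(alone, with_prepulse):.1f}%")
```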
6

A Multi-Channel EEG Mini-Cap for Recording Auditory Brainstem Responses in Chinchillas

Hannah M Ginsberg (9757334) 14 December 2020
According to the World Health Organization, disabling hearing loss affects nearly 466 million people worldwide. Sensorineural hearing loss (SNHL), which is characterized by damage to the inner ear (e.g., cochlear hair cells) and/or to the neural pathways connecting the inner ear and brain, accounts for 90% of all disabling hearing loss. One important clinical measure of SNHL is an auditory evoked potential called the auditory brainstem response (ABR). The ABR is a non-invasive measure of synchronous neural activity across the peripheral auditory pathway (auditory nerve to the midbrain), comprising a series of waves occurring within the first 10 milliseconds after stimulus onset. In humans, ABRs are often recorded using a high-density EEG electrode cap (e.g., with 32 channels). In our lab, a long-term goal is to establish and characterize reliable and efficient non-invasive measures of hearing loss in our pre-clinical chinchilla models of SNHL that can be directly related to human clinical measures. Bridging the gap between chinchilla and human data collection by using analogous measures is therefore imperative.

For this project, a 32-channel EEG electrode mini-cap for recording ABRs in chinchillas was studied. Firstly, the feasibility of this new method for recording ABRs was demonstrated. Secondly, the sources of bias and variability relevant to the mini-cap were investigated; in this investigation, the ability of the mini-cap to produce highly reliable, repeatable, reproducible, and valid ABRs was verified. Finally, the benefits of this new method, in comparison to our current approach using three subdermal electrodes, were characterized. ABR responses were comparable across channels in both magnitude and morphology when referenced to a tiptrode in the ipsilateral ear canal. Consequently, averaging across several channels reduced overall noise and required fewer repetitions (in comparison to the subdermal method) to obtain a reliable response. Other methodological benefits of the mini-cap included closer alignment with human ABR data collection, more efficient data collection, and the capability for more in-depth analyses, such as source localization (e.g., of cortical responses). Future work will include collecting ABRs with the EEG mini-cap before and after noise exposure, as well as exploring the potential to leverage different channels to isolate brainstem and midbrain contributions to evoked responses from simultaneous recordings.
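
The noise benefit described above comes from averaging stimulus-locked epochs over repetitions and then over channels (noise falls roughly with the square root of the number of averages); the sketch below is a generic illustration, with array shapes and names assumed rather than taken from the lab's actual pipeline.

```python
import numpy as np

def average_abr(epochs):
    """Average ABR epochs of shape (n_channels, n_repetitions, n_samples):
    first over repetitions (per-channel ABR), then over channels (grand average)."""
    per_channel = epochs.mean(axis=1)         # (n_channels, n_samples)
    grand_average = per_channel.mean(axis=0)  # (n_samples,)
    return per_channel, grand_average

# Toy example: 32 channels, 500 repetitions, 10 ms epochs at 25 kHz sampling.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((32, 500, 250))
per_channel, grand_average = average_abr(epochs)
```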
7

Rock Music: The Sounds of Flintknapping

Smith, Heather Noelle 13 April 2020
No description available.
8

MULTIPLE PATHWAYS TO SUPRATHRESHOLD SPEECH IN NOISE DEFICIT IN HUMAN LISTENERS

Homeira Islam Kafi (18510747) 08 May 2024
<p dir="ltr">Threshold audiometry, which measures the audibility of sounds in quiet, is currently the foundation of clinical hearing evaluation and patient management. Yet, despite using clinically prescribed state-of-the-art hearing aids that can restore audibility in quiet, patients with sensorineural hearing loss (SNHL) experience difficulty understanding speech in noisy backgrounds (e.g. cocktail party-like situations). This is likely because the amplification provided by modern hearing aids while restoring audibility in quiet, cannot compensate for the degradation in neural coding of speech in noise resulting from a range of non-linear changes in cochlear function that occur due to hearing damage. Furthermore, in addition to robust neural coding, the efficacy of cognitive processes such as selective attention also influences speech understanding outcomes. While much is known about how audibility affects speech understanding outcomes, little is known about suprathreshold deficits in SNHL. Unfortunately, direct measurements of the physiological changes in human inner ears are not possible due to ethical constraints. Here, I use noninvasive tools to characterize the effects of two less-familiar forms of SNHL: cochlear synaptopathy (CS; chapter 2) and distorted tonotopy (DT; chapters 3 and 4). Results from our experiments in Chapter 2 showed that age-related CS degrades envelope coding even in the absence of audiometric hearing loss and that these effects can be quantified using non-invasive electroencephalography (EEG)-based envelope-following response (EFRs) metrics. To date, DT has been only studied in laboratory-controlled animal models. In chapters 3 and 4, I combined psychophysical tuning curves, EFRs, and speech-in-noise measurements to characterize the effects of DT. Our results suggest that low-frequency noise produces a strong masking effect on the coding of speech in individuals with SNHL and that an index of DT (tip-to-tail ratio) obtained from psychophysical tuning curves can account for a significant portion of the large individual variability in listening outcomes among hearing-aid users, over and beyond audibility. Lastly, I propose a machine-learning framework to study the effect of attentional control on speech-in-noise outcomes (chapter 5). Specifically, I introduced a machine-learning model to assess how attentional control influences speech-in-noise understanding, using EEG to predict listening performance outcomes based on prestimulus neural activity. This design allows for examining the influence of top-down executive function on listening outcomes separately from the peripheral effects of SNHL. The results from our study suggest prestimulus EEG can predict subsequent listening outcomes and the changes in alpha rhythm may be used as a neural correlate of attention.</p>
9

TEMPORAL DYNAMICS OF PSYCHOACOUSTIC AND PHYSIOLOGICAL MEASURES OF COCHLEAR GAIN REDUCTION

William Bryan Salloom (12463590) 27 April 2022
Humans are able to hear and detect small changes in sound across a wide dynamic range despite the limited dynamic ranges of individual auditory nerve fibers. One mechanism that may adjust the dynamic range is the medial olivocochlear reflex (MOCR), a bilateral sound-activated system that decreases the amplification of sound by the outer hair cells in the cochlea. Much of the previous physiological MOCR research has used long broadband noise elicitors. In behavioral measures of gain reduction, a fairly short elicitor has been found to be maximally effective for an on-frequency, tonal elicitor; however, the effect of broadband noise elicitor duration on behavioral tasks is unknown. Additionally, MOCR effects measured using otoacoustic emissions (OAEs) have not consistently shown a positive correlation with behavioral gain-reduction tasks, a finding that seems counterintuitive if both measurements share a common generation mechanism. The current study measured the effects of ipsilateral broadband noise elicitor duration on psychoacoustic gain reduction (Chapter 2) and transient-evoked OAEs (TEOAEs) (Chapter 3) estimated from a forward-masking paradigm. Changes in the TEOAE were measured in terms of magnitude and phase. When phase was accounted for in the TEOAEs, the time constants were approximately equal to the psychoacoustic time constants and were relatively short (~80 ms). When only changes in TEOAE magnitude were measured, and phase was omitted, the average time constants were longer (~172 ms). Overall, the psychoacoustic and physiological data were consistent with the time course of gain reduction by the MOCR. However, when the magnitudes from these data were directly compared in a linear mixed-effects model (Chapter 4), no positive predictive relationship was found, and in some cases there was a significant negative association between the physiological and psychoacoustic measures of gain reduction as a function of elicitor duration. The multitude of factors involved in this relationship is discussed, as are the implications of dynamic range adjustment in everyday listening conditions (noisy backgrounds) for both normal-hearing and hearing-impaired listeners (Chapter 5).
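
The time constants above come from fitting a saturating exponential to the measured effect as a function of elicitor duration; a generic curve-fitting sketch is shown below (the durations and effect sizes are hypothetical placeholders, not the study's data).

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_buildup(duration_ms, amplitude, tau_ms):
    """Saturating exponential: effect grows with elicitor duration, time constant tau_ms."""
    return amplitude * (1.0 - np.exp(-duration_ms / tau_ms))

# Hypothetical gain-reduction estimates (dB) at several elicitor durations (ms):
durations = np.array([12.5, 25.0, 50.0, 100.0, 200.0, 400.0])
effect = np.array([1.0, 1.9, 3.1, 4.0, 4.6, 4.8])
(amplitude, tau_ms), _ = curve_fit(exponential_buildup, durations, effect, p0=[5.0, 80.0])
print(f"Fitted time constant: {tau_ms:.0f} ms")
```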
10

Development and Application of Tools for the Characterization of the Optogenetics Stimulation of the Cochlea

Duque Afonso, Carlos Javier 29 August 2019
No description available.
