11.
Spectral analysis and resolving spatial ambiguities in human sound localization. Jin, Craig. January 2001.
Doctor of Philosophy. This dissertation provides an overview of my research over the last five years into the spectral analysis involved in human sound localization. The work involved conducting psychophysical tests of human auditory localization performance and then applying analytical techniques to analyze and explain the data. It is a fundamental thesis of this work that human auditory localization response directions are primarily driven by the auditory localization cues associated with the acoustic filtering properties of the external auditory periphery, i.e., the head, torso, shoulder, neck, and external ears. The work comprises three parts.

In the first part, I compared the auditory localization performance of a human subject and a time-delay neural network model under three sound conditions: broadband, high-pass, and low-pass. A "black-box" modeling paradigm was applied. The modeling results indicated that training the network to localize sounds of varying center frequency and bandwidth could degrade its localization performance in a manner showing some similarity to human auditory localization performance.

Because the human data collected during the network modeling showed striking localization errors for bandlimited sound stimuli, the second part of this work focused on human sound localization of bandpass-filtered noise stimuli. Localization data were collected from five subjects for seven sound conditions: 300 Hz to 5 kHz, 300 Hz to 7 kHz, 300 Hz to 10 kHz, 300 Hz to 14 kHz, 3 kHz to 8 kHz, 4 kHz to 9 kHz, and 7 kHz to 14 kHz. The localization results were analyzed using the method of cue similarity indices developed by Middlebrooks (1992). The data indicated that the energy level in relatively wide frequency bands could be driving the localization response directions, as in Butler's covert peak area model (see Butler and Musicant, 1993). This raised the question of whether the energy levels in these frequency bands are analyzed by the human auditory localization system on a monaural or an interaural basis.

In the third part, an experiment was conducted using virtual auditory space sound stimuli in which the monaural spectral cues for auditory localization were disrupted but the interaural spectral difference cue was preserved. The results showed that the human auditory localization system relies primarily on a monaural analysis of spectral shape information for its discrimination of directions on the cone of confusion. Together, the three parts lead to the suggestion that a spectral contrast model based on overlapping frequency bands of varying bandwidth, and perhaps multiple frequency scales, can provide a reasonable algorithm for explaining much of the current psychophysical and neurophysiological data on human auditory localization.
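As an illustration of this style of analysis, the sketch below shows a minimal template-matching computation in the spirit of Middlebrooks's (1992) cue similarity indices: a received ear spectrum is correlated against direction-dependent transfer function (DTF) magnitude templates. The array shapes and the correlation-based index are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def cue_similarity_indices(ear_spectrum_db, dtf_templates_db):
    """Compare a received ear spectrum against direction-dependent DTF
    templates and return one similarity index per direction.

    ear_spectrum_db  : (n_freqs,) magnitude spectrum at the eardrum, in dB
    dtf_templates_db : (n_directions, n_freqs) DTF magnitude templates, in dB
    """
    # Remove the mean of each spectrum so the index reflects spectral
    # *shape* rather than overall level.
    ear = ear_spectrum_db - ear_spectrum_db.mean()
    templates = dtf_templates_db - dtf_templates_db.mean(axis=1, keepdims=True)

    # Pearson correlation between the ear spectrum and each template.
    num = (templates * ear).sum(axis=1)
    den = np.linalg.norm(templates, axis=1) * np.linalg.norm(ear)
    return num / den

# The direction whose template best matches the ear spectrum is the
# predicted localization response:
# best_direction = np.argmax(cue_similarity_indices(spectrum, templates))
```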
12.
Spatial sound and sound localization on a horizontal surface for use with interactive surface (tabletop) computers. Lam, Jonathan. 01 August 2012.
Tabletop computers (also known as surface computers, smart tables, and interactive surface computers) have been growing in popularity for the last decade and are poised to make inroads into the consumer market, opening up a new market for the games industry. However, before tabletop computers become widely accepted, open problems with respect to audio interaction must be addressed, including: "What loudspeaker constellations are appropriate for tabletop computers?", "How does our perception of spatial sound change with these different loudspeaker configurations?", and "What panning methods should be used to make maximal use of the spatial localization abilities of the user(s)?" Using a custom-built tabletop computer setup, the work presented in this thesis investigated these three questions via a series of experiments. The results of these experiments indicated that accurately localizing a virtual sound source on a horizontal surface is a difficult and error-prone task for all of the methods that were used. UOIT.
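The abstract does not name the panning methods that were evaluated, but constant-power amplitude panning is one standard method for placing a virtual source between a loudspeaker pair. The sketch below is an illustrative example of that general technique, not the thesis's implementation.

```python
import math

def constant_power_pan(pan):
    """Constant-power amplitude panning between two loudspeakers.

    pan: -1.0 (fully left) .. +1.0 (fully right)
    Returns (gain_left, gain_right). The gains satisfy gL^2 + gR^2 = 1,
    which keeps the total radiated power roughly constant as the
    virtual source moves across the pair.
    """
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

# Example: a source panned halfway toward the right loudspeaker.
gain_l, gain_r = constant_power_pan(0.5)
```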
14.
The Synaptic Mechanisms Underlying Binaural Interactions in Rat Auditory Cortex. Kyweriga, Michael. 29 September 2014.
The interaural level difference (ILD) is a sound localization cue first computed in the lateral superior olive (LSO) by comparing the sound levels at the two ears. In the auditory cortex, one class of neurons is excited by contralateral but not ipsilateral monaural sounds; these "EO" neurons prefer ILDs at which contralateral sounds are louder than ipsilateral sounds. Another class, the "PB" neurons, is unresponsive to monaural sounds but responds predominantly to binaural ILDs at which both ears receive simultaneous sounds of roughly equal level (0 dB ILD).
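For reference, the ILD itself is a simple quantity to compute from the two ear signals. The sketch below assumes RMS-based level estimates and a left-positive sign convention; both are illustrative choices, not details from the dissertation.

```python
import numpy as np

def ild_db(left, right):
    """Interaural level difference in dB between two ear signals.

    Positive values mean the left-ear signal is louder (an assumed
    convention). An ILD near 0 dB corresponds to a source near the
    midline, the condition the PB neurons described above prefer.
    """
    rms_l = np.sqrt(np.mean(np.square(left)))
    rms_r = np.sqrt(np.mean(np.square(right)))
    return 20.0 * np.log10(rms_l / rms_r)
```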
Behavioral studies show that ILD sensitivity is invariant to increasing sound levels. However, in the LSO, ILD response functions shift towards the excitatory ear as sound level increases, indicating level dependence. Thus, changes in firing rate can indicate a change in sound location, a change in sound level, or both. This suggests a transformation in level sensitivity between the LSO and the perception of sound sources, yet the site of this transformation remains unknown. I performed recordings in the auditory cortex of the rat to test whether neurons were invariant to overall sound level. I found that with increasing sound levels, ILD responses were level-dependent, suggesting that level invariance of ILD sensitivity is not present in the rat auditory cortex.
In general, neurons follow one of two processing strategies. The tuning of cortical cells typically follows the "inheritance strategy," in which the spiking output of the cell matches its excitatory synaptic input. However, cortical tuning can also be modified by inhibition in the "local processing strategy," in which neurons are prevented from spiking at non-preferred stimuli by inhibition that overwhelms excitation. The tuning strategy of cortical neurons to ILD remains unknown. I performed whole-cell recordings in the anesthetized rat and, within the same neurons, compared the spiking output to ILDs with the underlying synaptic inputs. I found that the PB neurons showed evidence of the local processing strategy, a novel role for cortical inhibition, whereas the EO neurons used the inheritance strategy. This result suggests that an auditory cortical circuit computes sensitivity to midline ILDs.
This dissertation includes previously published and unpublished co-authored material.
15.
Investigating Compensatory Mechanisms for Sound Localization: Visual Cue Integration and the Precedence Effect. January 2015.
Sound localization can be difficult in a reverberant environment. Fortunately, listeners can use various perceptual compensatory mechanisms to increase the reliability of sound localization when the physical evidence is ambiguous. For example, the directional information of echoes can be perceptually suppressed by the direct sound to achieve a single, fused auditory event, in a process called the precedence effect (Litovsky et al., 1999). Visual cues also influence sound localization through a phenomenon known as the ventriloquist effect, classically demonstrated by a puppeteer who speaks without visible lip movements while moving the mouth of a puppet synchronously with the speech (de Gelder and Bertelson, 2003). If the ventriloquist is successful, the sound is "captured" by vision and perceived as originating at the location of the puppet. This thesis investigates the influence of vision on the spatial localization of audio-visual stimuli. Two types of stereophonic phantom sound sources, created by modulating either the inter-stimulus time interval (ISI) or the level difference between two loudspeakers, were used as auditory stimuli. Participants seated in a sound-attenuated room indicated the perceived locations of these stimuli under free-field conditions. The results showed that the light cues influenced auditory spatial perception to a greater extent for the ISI stimuli than for the level-difference stimuli. A binaural signal analysis further revealed that the greater visual bias for the ISI phantom sound sources was correlated with the increasingly ambiguous binaural cues of the ISI signals. This finding suggests that when sound localization cues are unreliable, perceptual decisions become increasingly biased towards vision for finding a sound source. These results support the cue saliency theory underlying cross-modal bias and extend it to stereophonic phantom sound sources. Master's Thesis, Bioengineering, 2015.
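The perceived direction of a level-difference phantom source is classically modeled by the stereophonic sine law. The sketch below assumes a symmetric loudspeaker pair at plus or minus 30 degrees and a left-positive angle convention; it illustrates the textbook model, not the binaural signal analysis used in the thesis.

```python
import math

def phantom_source_angle(gain_left, gain_right, speaker_half_angle_deg=30.0):
    """Predict the perceived azimuth of a level-difference phantom source
    using the classic stereophonic sine law:

        sin(theta_phantom) / sin(theta_speaker) = (gL - gR) / (gL + gR)

    speaker_half_angle_deg: half the angle subtended by the loudspeaker
    pair. Positive output angles point toward the left loudspeaker
    under this (assumed) sign convention.
    """
    ratio = (gain_left - gain_right) / (gain_left + gain_right)
    s = ratio * math.sin(math.radians(speaker_half_angle_deg))
    return math.degrees(math.asin(s))

# Equal gains place the phantom source at 0 degrees, midway between
# the loudspeakers.
angle = phantom_source_angle(1.0, 1.0)
```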
16.
Timing cues for azimuthal sound source localization. Benichoux, Victor. 25 November 2013.
Azimuth sound localization in many animals relies on the processing of differences in the time of arrival of low-frequency sounds at the two ears: the interaural time differences (ITDs). In some species, this cue has been observed to depend on the spectrum of the signal emitted by the source. Yet this variation is often discarded, as humans and animals are assumed to be insensitive to it. The purpose of this thesis is to assess this dependency using acoustical techniques, and to explore the consequences of this additional complexity for the neurophysiology and psychophysics of sound localization.

In the vicinity of a rigid sphere, the sound field is diffracted, leading to frequency-dependent wave propagation regimes. Therefore, when the head is modeled as a rigid sphere, the ITD for a given position is a frequency-dependent quantity. I show that this is indeed reflected in human ITDs by studying acoustical recordings from a large number of human and animal subjects. I then explain the effect of this variation at two scales. Locally in frequency, the ITD introduces different envelope and fine-structure delays in the signals reaching the ears. More globally, the ITD of low-frequency sounds is generally larger than that of high-frequency sounds coming from the same position. In a second part, I introduce and discuss current views on the binaural ITD-sensitive system in mammals. I show that the heterogeneous responses of such cells are well predicted when it is assumed that they are tuned to frequency-dependent ITDs, and I discuss how those cells can be made to be tuned to a particular position in space regardless of the frequency content of the stimulus. Overall, I argue that the available data in mammals are consistent with the hypothesis that cells are tuned to a single position in space. Finally, I explore the impact of the frequency dependence of ITD on human behavior using psychoacoustical techniques. Subjects were asked to match the lateral positions of sounds presented with different frequency content. The results suggest that humans perceive sounds with different frequency content at the same position provided that they have different ITDs, as predicted from the acoustical data, and the extent to which this occurs is well predicted by a spherical model of the head. Combining approaches from different fields, I show that the binaural system is remarkably adapted to the cues available in its environment. This localization strategy used by animals can be a great inspiration for the design of robotic systems.
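The two propagation regimes can be made concrete with Kuhn's (1977) rigid-sphere approximations, in which the low-frequency ITD approaches (3a/c) sin(theta) and the high-frequency (Woodworth) ITD approaches (2a/c) sin(theta). The sketch below illustrates this textbook model; the head radius is a common average-adult assumption, not a value from the thesis.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def spherical_head_itd(azimuth_deg, head_radius=0.0875, low_freq=True):
    """Frequency-dependent ITD of a rigid-sphere head model.

    Kuhn's (1977) diffraction analysis gives two asymptotic regimes:
        low frequencies:  ITD ~ (3 a / c) * sin(azimuth)
        high frequencies: ITD ~ (2 a / c) * sin(azimuth)  (Woodworth)
    so the same source position yields a larger ITD at low frequencies,
    as described in the abstract above.
    """
    factor = 3.0 if low_freq else 2.0
    return factor * head_radius / SPEED_OF_SOUND * math.sin(math.radians(azimuth_deg))

# A source at 45 degrees azimuth: the low-frequency ITD is ~1.5x larger.
itd_low = spherical_head_itd(45.0, low_freq=True)    # ~540 microseconds
itd_high = spherical_head_itd(45.0, low_freq=False)  # ~360 microseconds
```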
17.
Neural Correlates of Directional Hearing following Noise-induced Hearing Loss in the Inferior Colliculus of Dutch-Belted Rabbits. Haragopal, Hariprakash. 22 September 2020.
No description available.
18.
On the Behavioral Dynamics of Human Sound Localization: Two Experiments Concerning Active Localization. Riehm, Christopher D., M.A. 22 October 2020.
No description available.
19.
Acoustic Localization Employing Polar Directivity Patterns of Bidirectional Microphones Enabling Minimum Aperture Microphone Arrays. Varada, Vijay K. January 2010.
No description available.
20.
A Comprehensive Comparative Hearing Aid Study: Evaluating the Neuro-Compensator Relative to Wide Dynamic Range Compression. Bruce, Jeff.
This Master's thesis presents results from two clinical hearing aid studies. Wide dynamic range compression (WDRC), an amplification algorithm widely used in the hearing aid industry, is compared against a novel hearing aid called the Neuro-Compensator (NC), which employs a neurally based amplification algorithm built on a computational model of the auditory periphery. The NC strategy preprocesses the incoming auditory signal so that, when the processed signal is presented to a damaged cochlea, the auditory nerve output resembles that of a healthy cochlea responding to the original signal. The NC and WDRC hearing aid technologies are compared across multiple auditory domains: objective measures of speech intelligibility in quiet and in noise, music perception, and sound localization, along with subjective measures of sound quality. It was hypothesized that the NC would restore more normal auditory abilities across these domains, owing to its strategy of restoring more normal auditory nerve output. Results from the two studies quantified the domains in which the NC was superior to WDRC, and vice versa. Master of Science (MSc).
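For context, the WDRC strategy that the NC is compared against can be summarized by a static input/output gain rule: fixed linear gain below a compression knee, and reduced gain growth above it. The knee, ratio, and gain values in this sketch are illustrative defaults, not the fitting parameters used in the study.

```python
def wdrc_gain_db(input_level_db, knee_db=50.0, ratio=2.0, gain_below_knee_db=20.0):
    """Static input/output rule of a simple wide dynamic range compressor.

    Below the compression knee the aid applies fixed linear gain; above
    it, each dB of input yields only 1/ratio dB of output growth. All
    parameter defaults here are illustrative assumptions.
    """
    if input_level_db <= knee_db:
        return gain_below_knee_db
    # Gain shrinks above the knee so output grows at 1/ratio dB per dB.
    return gain_below_knee_db - (input_level_db - knee_db) * (1.0 - 1.0 / ratio)

# Soft sounds receive the full 20 dB of gain; loud sounds receive less.
soft_out = 40.0 + wdrc_gain_db(40.0)   # 60 dB output
loud_out = 80.0 + wdrc_gain_db(80.0)   # 85 dB output (compressed)
```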