21

Examining distributed change-detection processes through concurrent measurement of subcortical and cortical components of the auditory-evoked potential

Slugocki, Christopher January 2018 (has links)
Study of the mammalian auditory system suggests that processes once thought exclusive to cortical structures also operate subcortically. Recently, this observation has extended to the detection of acoustic change. This thesis uses methods designed for the concurrent capture of auditory-evoked potential (AEP) components attributed to different subcortical and cortical sources. Using such an approach, Chapter 2 shows that 2-month-old infants respond to infrequent changes in sound source location with neural activity implicating both subcortically- and cortically-driven mechanisms of change-detection. Chapter 3 describes the development of a new stimulation protocol and presents normative data from adult listeners showing that the morphologies of several well-known subcortical and cortical AEP components are related. Finally, Chapter 4 uses the new methods developed in Chapter 3 to demonstrate that stimulus regularity not only affects neural activity at both subcortical and cortical structures, but that the activity localized to these structures is linked. Together, the studies presented in this thesis emphasize the potential for existing technologies to study the interaction of subcortical and cortical processing in human listeners. Moreover, the results of Chapters 2 through 4 lend support to models wherein change-detection is considered a distributed, and perhaps fundamental, attribute of the auditory hierarchy. / Thesis / Doctor of Philosophy (PhD)
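Concurrent capture of subcortical and cortical AEP components generally exploits their different timescales: brainstem responses phase-lock to fast stimulus features, while cortical deflections unfold over hundreds of milliseconds. The sketch below illustrates only that general separation, as a hedged example rather than the protocol developed in this thesis; the sampling rate, filter bands, epoch lengths, and synthetic data are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 8000  # Hz, assumed EEG sampling rate

def band(sig, lo, hi):
    """Zero-phase band-pass between lo and hi (Hz)."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, sig)

def average_epochs(eeg, onsets, n_samples):
    """Average stimulus-locked epochs starting at each onset sample."""
    epochs = np.stack([eeg[o:o + n_samples] for o in onsets
                       if o + n_samples <= len(eeg)])
    return epochs.mean(axis=0)

# Synthetic single-channel EEG and stimulus onsets (illustrative only).
rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 60)                    # one minute of "EEG"
onsets = np.arange(0, len(eeg) - FS, int(0.5 * FS))   # one stimulus every 500 ms

# Fast, phase-locked activity (brainstem-like) vs. slow cortical deflections.
subcortical_like = average_epochs(band(eeg, 80, 1500), onsets, int(0.05 * FS))
cortical_like = average_epochs(band(eeg, 1, 30), onsets, int(0.5 * FS))
```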
22

Spatial Audio for Bat Biosonar

Lee, Hyeon 24 August 2020 (has links)
Research investigating the behavioral and physiological responses of bats to echoes typically analyzes acoustic signals from microphones and/or microphone arrays, using the time difference of arrival (TDOA) between array elements or microphones to locate flying bats (azimuth and elevation). This has provided insight into transmission adaptations with respect to target distance, clutter, and interference. Microphones recording transmitted signals and echoes near a stationary bat provide sound pressure as a function of time but no directional information. This dissertation introduces spatial audio techniques to bat biosonar studies as a complement to current TDOA-based acoustical methods. It proposes two feasible methods, both based on spatial audio techniques, that track bats in flight and pinpoint the directions of echoes received by a bat. A spatial audio (soundfield) microphone array is introduced to measure sounds in the sonar frequency range (20-80 kHz) of the big brown bat (Eptesicus fuscus). The custom-built ultrasonic tetrahedral soundfield microphone consists of four capacitive microphones calibrated to matched magnitude and phase responses using a transfer-function approach. Ambisonics, a signal-processing technique used in three-dimensional (3D) audio applications, is used for the basic processing and reproduction of the signals measured by the soundfield microphone. Ambisonics synthesizes and decomposes a signal together with its directional properties, exploiting the relationship between spherical harmonics and source direction. In the first proposed method, a spatial audio decoding technique called HARPEx (High Angular Resolution Planewave Expansion) is used to build a system providing azimuth and elevation estimates. HARPEx can estimate the directions of arrival (DOA) of up to two simultaneous sources because it decomposes a signal into two dominant plane waves. Experiments showed that the HARPEx-based estimation system provides accurate DOA estimates for static and moving sources. It also reconstructed a smooth flight path of a bat by accurately estimating its direction at each pulse measurement in time. The performance of the system was further assessed through statistical analyses of simulations. A signal model generated microphone-capsule responses to a virtual source emitting an LFM signal (3 ms, two harmonics: 40-22 kHz and 80-44 kHz) at an angle of 30°. The medians and RMSEs (root-mean-square errors) of 10,000 simulations for each case represent the accuracy and precision of the estimates, respectively. The results show that a smaller d (the distance between a capsule and the soundfield microphone center) and/or a higher SNR (signal-to-noise ratio) is required to achieve higher estimator performance. The Cramer-Rao lower bound (CRLB) of the estimator is also computed for various d and SNR conditions. Because the CRLB is derived for TDOA-based methods, it does not account for the effects of different incident angles at the capsules or of the inter-capsule signal delays introduced by a non-zero d, which makes it an unsuitable tool for assessing this estimator's performance. In the second proposed method, the matched-filter technique is used instead of HARPEx to build another estimation system.
The signal-processing algorithm based on Ambisonics and the matched-filter approach reproduces a measured signal in various directions and computes matched-filter responses of the reproduced signals as time series. The matched-filter output indicates the target(s) through the highest filter response. This is a sonar-like estimation system that provides target information (range, direction, and velocity) using sonar fundamentals. Experiments using a loudspeaker (emitter) and an artificial or natural target (either stationary or moving) show that the system provides accurate estimates of the target's direction and range. Simulations imitating a situation in which a bat emits a pulse and receives an echo from a target at 30° were also performed, with the echo sound level determined from the sonar equation. The system processed the virtual bat pulse and echo and accurately estimated the direction, range, and velocity of the target. The simulation results also suggest an echo level above -3 dB for accurate and precise estimates (below 15% RMSE for all parameters). In summary, this work proposes two methods that use spatial audio techniques to track bats in flight and/or pinpoint the directions of targets. The proposed methods provide accurate estimates of the direction, range, and/or velocity of a bat based on its pulses, or of a target based on echoes, demonstrating that they can serve as key tools for reconstructing bat biosonar. They can be used as independent tools or as complements to TDOA-based methods in bat echolocation studies, and they may also prove useful for improving man-made sonar technology. / Doctor of Philosophy / While bats are among the most intriguing creatures to the general public, they are also a popular subject of study in various disciplines. Their extraordinary ability to navigate and forage amid clutter using echolocation has attracted the attention of many scientists and engineers. Research investigating bats typically analyzes acoustic signals from microphones and/or microphone arrays. Using the time difference of arrival (TDOA) between array elements or microphones is probably the most popular method for locating flying bats (azimuth and elevation). Microphone responses to transmitted signals and echoes near a bat provide sound pressure but no directional information. This dissertation proposes a complement to current TDOA methods that delivers directional information by introducing spatial audio techniques. It presents two feasible methods, both based on spatial audio techniques, that can track bats in flight and pinpoint the directions of echoes received by a bat. An ultrasonic tetrahedral soundfield microphone is introduced as a measurement tool for sounds in the sonar frequency range (20-80 kHz) of the big brown bat (Eptesicus fuscus). Ambisonics, a signal-processing technique used in three-dimensional (3D) audio applications, is used for the basic processing of the signals measured by the soundfield microphone; it also reproduces a measured signal together with its directional properties. In the first method, a spatial audio decoding technique called HARPEx (High Angular Resolution Planewave Expansion) was used to build a system providing azimuth and elevation estimates. HARPEx can estimate the directions of arrival (DOA) of up to two simultaneous sound sources.
Experiments showed that the HARPEx-based estimation system provides accurate DOA estimates for static and moving sources. The performance of the system was also assessed through statistical analyses of simulations: the medians and RMSEs (root-mean-square errors) of 10,000 simulations for each case represent the accuracy and precision of the estimates, respectively. The results show that a shorter distance between a capsule and the soundfield microphone center and/or a higher SNR (signal-to-noise ratio) is required to achieve higher performance. In the second method, the matched-filter technique is used to build another estimation system, a sonar-like system that provides target information (range, direction, and velocity) using matched-filter responses and sonar fundamentals. Experiments using a loudspeaker (emitter) and an artificial or natural target (either stationary or moving) show that the system provides accurate estimates of the target's direction and range. Simulations imitating a situation in which a bat emits a pulse and receives an echo from a target at 30° were also performed; the system processed the virtual bat pulse and echo and accurately estimated the direction, range, and velocity of the target. The proposed methods provide accurate estimates of the direction, range, and/or velocity of a bat based on its pulses, or of a target based on echoes, demonstrating that they can serve as key tools for reconstructing bat biosonar. They can be used as independent tools or as complements to TDOA-based methods in bat echolocation studies, and they may also prove useful for improving sonar technology.
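As a concrete illustration of the matched-filter idea described in this abstract (a minimal sketch only, not the dissertation's implementation), the snippet below emits a linear FM pulse, receives a delayed, attenuated, noisy echo, and estimates target range from the lag of the peak cross-correlation. The pulse parameters, sampling rate, and sound speed are illustrative assumptions.

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 250_000          # Hz, assumed sampling rate
C = 343.0             # m/s, approximate speed of sound in air

# Emitted pulse: 3 ms linear FM sweep, 80 kHz down to 44 kHz (illustrative).
t = np.arange(0, 0.003, 1 / FS)
pulse = chirp(t, f0=80_000, f1=44_000, t1=t[-1], method="linear")

# Simulated echo: true range 2.5 m -> two-way delay, plus attenuation and noise.
true_range = 2.5
delay = int(round(2 * true_range / C * FS))
rx = np.zeros(delay + len(pulse) + 2000)
rx[delay:delay + len(pulse)] += 0.2 * pulse
rx += 0.02 * np.random.default_rng(0).standard_normal(len(rx))

# Matched filter: cross-correlate the received signal with the emitted pulse;
# the lag of the peak gives the round-trip delay, hence the range.
mf = correlate(rx, pulse, mode="valid")
est_delay = np.argmax(np.abs(mf))
est_range = est_delay * C / (2 * FS)
print(f"estimated range: {est_range:.2f} m (true: {true_range} m)")
```

The same correlation peak, tracked pulse by pulse, is what lets a sonar-like system follow a moving target's range over time.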
23

Timing cues for azimuthal sound source localization

Benichoux, Victor 25 November 2013 (has links) (PDF)
Azimuthal sound localization in many animals relies on processing differences in the time of arrival of low-frequency sounds at the two ears: the interaural time differences (ITDs). In some species it has been observed that this cue depends on the spectrum of the signal emitted by the source, yet this variation is often discarded because humans and animals are assumed to be insensitive to it. The purpose of this thesis is to assess this dependency using acoustical techniques and to explore the consequences of this additional complexity for the neurophysiology and psychophysics of sound localization. In the vicinity of a rigid sphere, the sound field is diffracted, leading to frequency-dependent wave-propagation regimes. Therefore, when the head is modeled as a rigid sphere, the ITD for a given position is a frequency-dependent quantity. I show that this is indeed reflected in human ITDs by studying acoustical recordings from a large number of human and animal subjects. Furthermore, I explain the effect of this variation at two scales: locally in frequency, the ITD introduces different envelope and fine-structure delays in the signals reaching the ears; across frequencies, the ITD for low-frequency sounds is generally larger than for high-frequency sounds coming from the same position. In a second part, I introduce and discuss current views on the binaural ITD-sensitive system in mammals. I show that the heterogeneous responses of such cells are well predicted when they are assumed to be tuned to frequency-dependent ITDs. Furthermore, I discuss how these cells can be tuned to a particular position in space regardless of the frequency content of the stimulus. Overall, I argue that current data in mammals are consistent with the hypothesis that cells are tuned to a single position in space. Finally, I explore the impact of the frequency dependence of ITD on human behavior using psychoacoustical techniques. Subjects were asked to match the lateral position of sounds presented with different frequency content. The results suggest that humans perceive sounds with different frequency contents at the same position provided that they have different ITDs, as predicted from the acoustical data, and the extent to which this occurs is well predicted by a spherical model of the head. Combining approaches from different fields, I show that the binaural system is remarkably well adapted to the cues available in its environment. This processing strategy used by animals can inspire the design of robotic systems.
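To make the frequency dependence concrete, here is a minimal sketch of the rigid-sphere approximation mentioned in the abstract. The head radius and the closed-form low/high-frequency limits are standard textbook approximations for a rigid sphere, used here only for illustration, not values taken from the thesis.

```python
import numpy as np

A = 0.0875   # m, assumed head radius
C = 343.0    # m/s, speed of sound

def itd_rigid_sphere(azimuth_deg):
    """Approximate ITDs for a rigid-sphere head at one azimuth.

    Low-frequency limit:  ITD ~ 3 (a/c) sin(theta)   (diffraction regime)
    High-frequency limit: ITD ~ 2 (a/c) sin(theta)   (ray / creeping-wave regime)
    The low-frequency ITD is about 1.5x larger for the same position.
    """
    theta = np.radians(azimuth_deg)
    itd_low = 3 * A / C * np.sin(theta)
    itd_high = 2 * A / C * np.sin(theta)
    return itd_low, itd_high

for az in (15, 45, 90):
    lo, hi = itd_rigid_sphere(az)
    print(f"azimuth {az:3d} deg: low-freq ITD = {lo*1e6:5.0f} us, "
          f"high-freq ITD = {hi*1e6:5.0f} us")
```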
24

Localisation sonore chez les aveugles : l'influence de l'âge de survenue de la cécité / Sound localization in the blind: the influence of the age at onset of blindness

Voss, Patrice January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
25

Perda auditiva unilateral: benefício da amplificação na ordenação e resolução temporal e localização sonora / Unilateral hearing loss: the benefit of amplification for temporal ordering, temporal resolution, and sound localization

Santos, Marina De Marchi dos 26 February 2016 (has links)
Unilateral hearing loss (UHL) is characterized by decreased hearing in one ear. Individuals with this type of hearing loss may show impairment of the auditory abilities of sound localization and temporal processing, including temporal ordering and temporal resolution. The objective of this study was to assess the auditory abilities of temporal ordering, temporal resolution, and sound localization before and after the fitting of a hearing aid (HA). Twenty-two subjects, aged 18 to 60 years and diagnosed with sensorineural or mixed UHL of mild to severe degree, were evaluated. The study was divided into two stages: pre- and post-fitting of the HA. In both stages, subjects underwent an interview, the Questionnaire for Disabilities Associated with Impaired Auditory Localization, the simplified assessment of auditory processing (ASPA), and the Random Gap Detection Test (RGDT). The study found statistically significant differences in the ASPA (except in the memory test for non-verbal sounds in sequence, TMSnV), in the RGDT, and in the Questionnaire for Disabilities Associated with Impaired Auditory Localization. The study concluded that, with effective HA use, individuals with UHL showed improvement in the auditory abilities of sound localization, temporal ordering, and temporal resolution.
26

Binaural mechanism revealed with in vivo whole cell patch clamp recordings in the inferior colliculus

Li, Na, 1980 Oct. 2- 02 February 2011 (has links)
Many cells in the inferior colliculus (IC) are excited by contralateral and inhibited by ipsilateral stimulation and are thought to be important for sound localization. These excitatory-inhibitory (EI) cells comprise a diverse group, even though they exhibit a common binaural response property. Previous extracellular studies proposed specific excitatory and/or inhibitory events that should be evoked by each ear and thereby generate each of the EI discharge properties. Those proposals were inferences based on the well-established response features of neurons in lower nuclei, the projections of those nuclei, their excitatory or inhibitory neurochemistry, and the changes in response features that occurred when inhibition was blocked. Here we recorded the inputs, the postsynaptic potentials, and the discharges evoked by monaural and binaural signals in EI cells with in vivo whole-cell recordings from the IC of awake bats. We also computed the excitatory and inhibitory synaptic conductances from the recorded sound-evoked responses. First, we showed that in a minority of EI cells the binaural property was either inherited from a lower binaural nucleus or created in the IC via inhibitory projections driven by the ipsilateral ear, features consistent with those observed in extracellular studies. Second, we showed that in a majority of EI cells ipsilateral signals evoked subthreshold EPSPs that behaved paradoxically, in that EPSP amplitudes increased with intensity even though binaural signals with the same ipsilateral intensities generated progressively greater spike suppression. These ipsilateral EPSPs were unexpected, since they could not have been detected with extracellular recordings, and they suggest that the circuitry underlying EI cells is more complex than previously proposed. We also propose a functional significance for ipsilaterally evoked EPSPs in responses to moving sound sources or multiple sounds. Third, by computing synaptic conductances, we showed that the circuitry of EI cells is even more complicated than the PSPs alone suggest, and we evaluated how the binaural property is produced by the contralateral and ipsilateral synaptic events.
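The conductance computation mentioned in the abstract is commonly performed by fitting the evoked synaptic current at several holding potentials to a two-conductance model, I(t) = g_e(t)(V - E_e) + g_i(t)(V - E_i). The sketch below is a generic least-squares version of that decomposition, not the authors' code; the reversal potentials and the synthetic data are illustrative assumptions.

```python
import numpy as np

E_E, E_I = 0.0, -70.0   # mV, assumed excitatory / inhibitory reversal potentials

def decompose_conductances(currents, holding_potentials):
    """Estimate g_e(t) and g_i(t) from evoked currents at several holding potentials.

    currents: array (n_potentials, n_timepoints), leak-subtracted synaptic currents
    holding_potentials: array (n_potentials,), in mV
    At each time point, solve  I = g_e (V - E_e) + g_i (V - E_i)  by least squares.
    """
    V = np.asarray(holding_potentials, dtype=float)
    A = np.column_stack([V - E_E, V - E_I])           # (n_potentials, 2) design matrix
    g, *_ = np.linalg.lstsq(A, currents, rcond=None)  # solution shape (2, n_timepoints)
    return g[0], g[1]                                 # g_e(t), g_i(t)

# Synthetic check: known conductance waveforms, three holding potentials.
t = np.linspace(0, 0.05, 500)
g_e_true = 5.0 * np.exp(-t / 0.005) * (t > 0.005)     # nS, arbitrary shapes
g_i_true = 8.0 * np.exp(-t / 0.010) * (t > 0.010)
V_hold = np.array([-80.0, -60.0, -40.0])
I = np.array([g_e_true * (v - E_E) + g_i_true * (v - E_I) for v in V_hold])

g_e_est, g_i_est = decompose_conductances(I, V_hold)
print(np.allclose(g_e_est, g_e_true), np.allclose(g_i_est, g_i_true))
```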
29

How does the number of early reflections in virtual reality environments affect sound localization performance?

Kierkegaard, Tomas January 2024 (has links)
Virtual reality is a medium in which both visual and aural fidelity strive to be comparable to real life. At the same time, effective sound localization is an important part of many virtual game environments, and previous studies of virtual acoustic environments suggest that acoustic fidelity may be at odds with effective sound localization. This study therefore examines how one of the more computationally demanding acoustic processes, the rendering of early reflections, affects sound localization performance in virtual reality, comparing three conditions with varying numbers of early reflections in a virtual reality environment. The results showed no statistically significant differences in azimuth errors, elevation errors, or response times.
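Early reflections in interactive audio engines are commonly generated with the image-source method; the sketch below shows a first-order version for a rectangular (shoebox) room, computing the delay and distance attenuation of each wall reflection. It is a generic illustration under stated assumptions, not the renderer used in this study.

```python
import numpy as np

C = 343.0  # m/s, speed of sound (assumed)

def first_order_reflections(src, listener, room_dims, absorption=0.3):
    """First-order image-source reflections in a shoebox room.

    Returns a list of (delay_s, gain) pairs, one per wall, using 1/r spreading
    and a single broadband absorption coefficient (an illustrative simplification).
    """
    src, listener = np.asarray(src, float), np.asarray(listener, float)
    reflections = []
    for axis in range(3):
        for wall in (0.0, room_dims[axis]):
            image = src.copy()
            image[axis] = 2.0 * wall - src[axis]       # mirror source across the wall
            r = np.linalg.norm(image - listener)
            delay = r / C
            gain = (1.0 - absorption) / max(r, 1e-6)   # absorption + spherical spreading
            reflections.append((delay, gain))
    return reflections

# Example: the 6 first-order reflections for a 5 x 4 x 3 m room.
for delay, gain in first_order_reflections(src=(1.0, 2.0, 1.5),
                                           listener=(4.0, 1.0, 1.7),
                                           room_dims=(5.0, 4.0, 3.0)):
    print(f"delay = {delay*1000:5.2f} ms, gain = {gain:.3f}")
```

Each additional reflection order multiplies the number of image sources to render, which is why the number of early reflections is a natural knob for trading acoustic fidelity against computational cost.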
30

Prédiction objective de l'effet des systèmes tactiques de communication et protection sur les performances de localisation sonore / Objective prediction of the effect of tactical communication and protective systems on sound localization performance

Joubaud, Thomas 15 September 2017 (has links)
In many civilian or military situations, hearing protection is of major importance. The listener's acoustical situational awareness must, however, also be preserved. Tactical Communication and Protective Systems (TCAPS) are hearing protection devices that sufficiently protect the listener's ears from hazardous sounds while preserving speech intelligibility, thus allowing low-level speech communication. However, previous studies have demonstrated that TCAPS still degrade the listener's situational awareness, in particular the ability to locate sound sources. On the horizontal plane, this is mainly explained by the degradation of the acoustical cues that normally prevent the listener from making front-back confusions. In the present PhD work, a behavioral sound localization experiment is conducted with six TCAPS: two passive earplugs, two active earplugs, and two active earmuffs. None of the protectors restores open-ear performance, but the experiment ranks the TCAPS by type: passive earplugs lead to better performance than active earplugs, and active earmuffs induce the worst performance. As part of TCAPS development and assessment, a method that predicts the protector-induced degradation of sound localization from electroacoustic measurements would be more suitable than time-consuming behavioral experiments. In this context, two methods based on Head-Related Transfer Functions (HRTFs) measured on an artificial head are investigated: a template-matching model and a three-layer neural network. Both are optimized to fit human sound localization performance in the open-ear condition. The methods are then applied to the HRTFs measured with the six TCAPS, yielding position-dependent localization probabilities. Compared with the behavioral results, the neural network predicts realistic performance with earplugs but overestimates errors with earmuffs. The template-matching model predicts human performance well; however, the likelihood of its probability distributions given the behavioral observations remains lower than that of the neural network. Finally, both methods developed in this study are independent of the artificial head used and can be applied to assess not only TCAPS prototypes but also hearing aids.
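A template-matching localization model of the kind investigated here typically compares the cues extracted from a test HRTF with a stored open-ear template for every candidate direction and converts the similarity scores into localization probabilities. The sketch below is a minimal, generic version of that idea; the cue choice, similarity metric, and softmax temperature are assumptions for illustration, not the thesis's model.

```python
import numpy as np

def localization_probabilities(test_cues, template_cues, temperature=0.1):
    """Template-matching direction estimate.

    test_cues:      (n_features,) cue vector measured for the unknown direction
                    (e.g. log-magnitude spectra of both ears plus a broadband ITD)
    template_cues:  (n_directions, n_features) open-ear reference templates
    Returns a probability over candidate directions via a softmax of the
    negative RMS cue difference (temperature controls decision sharpness).
    """
    diff = template_cues - test_cues[None, :]
    score = -np.sqrt(np.mean(diff ** 2, axis=1))   # higher = better match
    z = (score - score.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

# Toy example: 72 candidate azimuths, 64 spectral features per direction.
rng = np.random.default_rng(1)
templates = rng.standard_normal((72, 64))
true_dir = 30
measured = templates[true_dir] + 0.3 * rng.standard_normal(64)  # degraded cues

p = localization_probabilities(measured, templates)
print("most likely direction index:", int(np.argmax(p)))
```

Feeding protector-measured cues through such a model spreads the probability mass toward confusable directions (for example front-back mirror positions), which is how position-dependent localization errors can be predicted from electroacoustic measurements alone.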
