  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The role of hearing sensitivity above 8 kHz in auditory localization.

Gray, Sarah Elizabeth January 2014 (has links)
The ability to identify where sound is coming from is required for everyday listening tasks such as identifying the direction from which a phone is ringing or locating who is calling your name in a social situation. While this localization ability has been found to be reduced in listeners with a hearing loss in the typically measured frequency range of 250 Hz to 8 kHz, less is known about listeners whose hearing loss is mainly limited to the extended high frequencies of 8 to 14 kHz, particularly when abilities are tested with speech stimuli. The purpose of the current study was to determine whether listeners with a hearing impairment at these higher frequencies performed less accurately in a number of localization tasks. Twenty-three participants with normal hearing (thresholds not exceeding 20 dB HL from 250 Hz to 14 kHz) and 23 participants with normal hearing up to and including 3 kHz and at least a moderate hearing loss in the extended high frequencies (thresholds reaching at least 55 dB HL at any frequency from 8 to 14 kHz) localized noise and speech stimuli presented at 75 dBA in a free-field situation. Thirteen speakers were used in four different arrangements: the frontal horizontal plane, lateral horizontal plane, frontal vertical plane and lateral vertical plane. The noise stimuli consisted of noise band-pass filtered between 300 Hz and 16 kHz, and between 300 Hz and 8 kHz. The speech stimuli were individual words with either strong or weak amounts of high-frequency content above 8 kHz, band-pass filtered using the same cut-off frequencies as the noise stimuli. No significant main effect differences were found between the localization ability of the two hearing groups in any of the four experiments.
However, within-experiment analysis revealed that, in the lateral vertical plane, the normal-hearing group localized significantly better than the hearing-loss group for both the strong and weak speech stimuli. Significant differences were also found across experiments, with both groups localizing most accurately in the frontal horizontal plane and least accurately in the frontal vertical plane. All participants localized significantly better with the wider bandwidth of 300 Hz to 16 kHz, and with both types of speech stimuli compared to the noise stimuli, although post hoc analysis found that these differences were not consistent across all speaker locations.
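Localization accuracy in free-field studies like this one is commonly scored as unsigned angular error, with front-back confusions (responses mirrored about the interaural axis) counted separately. A minimal sketch of such scoring, assuming an azimuth convention of 0° = front and 90° = right; the function names and convention are illustrative, not taken from the thesis:

```python
def angular_error(target_deg, response_deg):
    """Smallest absolute angle (degrees) between target and response azimuths."""
    diff = (response_deg - target_deg) % 360.0
    return min(diff, 360.0 - diff)

def is_front_back_confusion(target_deg, response_deg):
    """True if the response lies closer to the target's mirror image
    across the interaural axis (left-right line through the ears)
    than to the target itself."""
    mirrored = (180.0 - target_deg) % 360.0  # front-back mirror of the target
    return angular_error(mirrored, response_deg) < angular_error(target_deg, response_deg)
```

With this convention, a response of 160° to a target at 30° counts as a front-back confusion (the mirror of 30° is 150°), while a response of 40° is a plain 10° error.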
2

A comparative study of auditory localization

Beecher, Michael Donovan January 1970 (has links)
Thesis (Ph.D.)--Boston University / PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you. / Neuroanatomical work has shown that the auditory system is different in different mammals, with primates and bats representing two extremes in this regard. It has been suggested that these differences are related to auditory localization. The present work examined auditory localization in several representative mammalian species: a squirrel monkey, bat (Phyllostomus hastatus), albino rat and cat. A semi-naturalistic localization situation was used. The animal was placed in a wire cage located in a sound-deadened room. Two loudspeakers were located one on either side of the cage. Two response levers were located in the front wall of the cage, flanking a liquid food dispenser. When tone bursts were presented from one of the loudspeakers, a response on the "correct" lever resulted in the delivery of a small amount of food to the animal. The left-hand lever was correct when the tone bursts came from the left-hand loudspeaker, and the right-hand lever was correct when they came from the right-hand loudspeaker. The percentage of correct responses on both levers was the measure of performance on the discrimination under a given set of conditions. [TRUNCATED] / 2031-01-01
3

An In-Field Experiment on the Effects of Hearing Protection/Enhancement Devices and Military Vehicle Noise on Auditory Localization of a Gunshot in Azimuth

Talcott, Kristen Alanna 15 November 2011 (has links)
Noise-induced hearing loss and tinnitus are the two most prevalent service-connected disabilities for veterans receiving compensation (Department of Veterans Affairs, 2010). While it is possible to protect against noise-induced hearing damage with hearing protection devices (HPDs) and hearing protection enhancement devices (HPEDs), military personnel resist using HPDs/HPEDs that compromise their situational awareness, including the ability to localize enemy gunfire. Manufacturers of a new generation of "pass-through" level-dependent HPEDs claim these devices preserve normal or near-normal hearing. A research study was conducted to evaluate localization of the report of a suprathreshold gunshot (from blank ammunition) with one passive HPED (3M's Single-Ended Combat Arms earplug) and three electronic level-dependent HPEDs (Peltor's Com-Tac II electronic earmuffs and Etymotic's EB 1 and EB 15 High-Fidelity Electronic BlastPLG earplugs), in comparison to the open ear, in an in-field test environment with ambient outdoor noise and in 82 dBA diesel truck noise, with nine normal-hearing and four hearing-impaired participants. Statistical analysis showed worse localization accuracy and response time with the Com-Tac II earmuffs than with the other tested HPEDs. Performance with all HPEDs was worse than with the open ear, except on right-left confusions, for which the Com-Tac II stood alone as worst, and in response time, for which the EB 1 earplug was equivalent to the open ear. There was no significant main effect of noise level, and generally no significant effect of hearing ability, although participants with impaired hearing had more right-left confusions than those with normal hearing. Subjective ratings related to localization generally corroborated objective localization performance.
Three follow-up experiments were performed: (1) an assessment of the effect of microphone position on localization with the EB 15, which showed a limited advantage when the microphone was positioned near the opening of the ear canal compared to when it faced outwards; (2) an assessment of Etymotic's QuickSIN test as a predictor of localization performance, which showed little correlation with localization performance; and (3) an assessment of the acoustic properties of the experiment site, which was inconclusive with regard to the direction of dominant sound energy from gunshots at each of the shooter positions. / Ph. D.
4

Development and Human Factors Evaluation of a Portable Auditory Localization Acclimation Training System

Thompson, Brandon Scott 19 June 2020 (has links)
Auditory situation awareness (ASA) is essential for safety and survivability in military operations, where many of the hazards are not immediately visible. Unfortunately, the hearing protection devices (HPDs) required to operate in these environments can impede auditory localization performance. Promisingly, recent studies have demonstrated the plasticity of the human auditory system by showing that training can improve auditory localization ability while wearing HPDs, including military Tactical Communications and Protective Systems (TCAPS). As a result, the U.S. military identified the need for a portable system capable of imparting auditory localization acquisition skills at levels similar to those demonstrated in laboratory environments. The purpose of this investigation was to develop and validate a Portable Auditory Localization Acclimation Training (PALAT) system equipped with an improved training protocol against a proven laboratory-grade system referred to as the DRILCOM system, and subsequently to evaluate the transfer-of-training benefit in a field environment. In Phase I, a systems decision process was used to develop a prototype PALAT system consisting of an expandable frame housing 32 loudspeakers operated by a user-controlled tablet computer, capable of reproducing acoustically accurate localization cues similar to those of the DRILCOM system. Phase II used a within-subjects human factors experiment to validate whether the PALAT system could impart auditory localization training benefits similar to the DRILCOM system's. Results showed no significant difference between the two localization training systems at each stage of training or in training rates for the open ear and with two TCAPS devices. The PALAT system also demonstrated the ability to detect differences in localization accuracy between listening conditions in the same manner as the DRILCOM system.
Participants perceived no difference in localization training benefit but significantly preferred the PALAT system user interface, which was specifically designed with improved usability features to meet the requirements of a user-operable system. The Phase III investigation evaluated the transfer of the training benefit imparted by the PALAT system, which used a broadband stimulus, to a field environment using a gunshot stimulus. Training under the open ear and in-the-ear TCAPS conditions resulted in significant differences between the trained and untrained groups from in-office pretest to in-field posttest. / Doctor of Philosophy / Auditory situation awareness (ASA) is essential for safety and survivability in military operations, where many of the hazards are not immediately visible. Unfortunately, the hearing protection devices (HPDs) required to operate in these environments can impede sound localization performance. Promisingly, recent studies have demonstrated the ability of the human auditory system to learn by showing that training can improve sound localization ability while wearing HPDs. As a result, the U.S. military identified the need for a portable system capable of improving sound localization performance at levels similar to those demonstrated in laboratory environments. The purpose of this investigation was to develop and validate a Portable Auditory Localization Acclimation Training (PALAT) system equipped with an improved training protocol against a proven laboratory-grade system referred to as the DRILCOM system, and subsequently to evaluate the transfer-of-training benefit in a field environment. In Phase I, a systems decision process was used to develop a prototype PALAT system consisting of an expandable frame housing 32 loudspeakers operated by a user-controlled tablet computer, capable of reproducing sounds similar to those of the DRILCOM system.
Phase II used a within-subjects human factors experiment to validate whether the PALAT system could impart sound localization training benefits similar to the DRILCOM system's. Results showed no significant difference between the two localization training systems at each stage of training or in training rates for the open ear and with two HPDs. The PALAT system also demonstrated the ability to detect differences in localization accuracy between listening conditions in the same manner as the DRILCOM system. Participants perceived no difference in localization training benefit but significantly preferred the PALAT system user interface, which was specifically designed with improved usability features to meet the requirements of a user-operable system. The Phase III investigation evaluated the transfer of the training benefit imparted by the PALAT system, which used a broadband stimulus, to a field environment using a gunshot stimulus. Training under the open ear and in-the-ear TCAPS conditions resulted in significant differences between the trained and untrained groups from in-office pretest to in-field posttest.
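The trained-versus-untrained comparison described above amounts to a difference-in-differences on pretest/posttest localization scores. A minimal sketch of that comparison; the numbers below are hypothetical and do not come from the thesis:

```python
def mean(xs):
    return sum(xs) / len(xs)

def transfer_effect(trained_pre, trained_post, untrained_pre, untrained_post):
    """Difference-in-differences estimate of training transfer: how much more
    the trained group's localization error dropped from pretest to posttest
    than the untrained group's (negative = trained group improved more)."""
    trained_change = mean(trained_post) - mean(trained_pre)
    untrained_change = mean(untrained_post) - mean(untrained_pre)
    return trained_change - untrained_change

# Hypothetical absolute-error scores in degrees (lower is better).
trained_pre = [22.0, 25.0, 21.0]
trained_post = [12.0, 14.0, 10.0]
untrained_pre = [23.0, 24.0, 22.0]
untrained_post = [21.0, 22.0, 20.0]

print(transfer_effect(trained_pre, trained_post, untrained_pre, untrained_post))
```

In a real analysis, such change scores would of course be accompanied by a significance test across participants rather than read off raw means.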
5

Localization of Auditory Spatial Targets in Sighted and Blind Subjects

Nuckols, Richard 11 December 2013 (has links)
This research was designed to investigate the fundamental ways in which blind people use audible cues to attend to their surroundings. Knowledge of how blind people respond to external spatial stimuli is expected to assist in the development of better tools for helping people with visual disabilities navigate their environment. There was also interest in determining how blind people compare to sighted people in auditory localization tasks. The ability of sighted individuals, blindfolded individuals, and blind individuals to localize spatial auditory targets was assessed. An acoustic display board allowed the researcher to provide multiple sound presentations to the subjects. The subjects' responses in localization tasks were measured using a combination of kinematic head-tracking and eye-tracking hardware. Data were collected and analyzed to determine the ability of the groups to localize spatial auditory targets. Significant differences were found among the three groups in spatial localization error and temporal patterns.
6

Comparison of effectiveness in using 3D-audio and visual aids in identifying objects in a three-dimensional environment / Effektivitetsjämförelse för 3D-ljud och visuella hjälpmedel vid identifikation av objekt i en tre-dimensionell miljö

Åbom, Karl January 2014 (has links)
Context: Modern commercial computer games use a number of different stimuli to assist players in locating key objects in the presented Virtual Environment (VE). These stimuli range from visual to auditory, and are employed in VEs depending on several factors such as gameplay design and aesthetics. Objectives: This study compares three different localization aids in order to evaluate their effectiveness in VEs. Method: An experiment was carried out in which test players were tasked with using audio signals, visual input, and a combination of both to correctly identify objects in a virtual scene. Results: The results show how long test players spent on tests using the different stimuli. Upon analyzing the data, it was found that the audio stimulus was the slowest localization aid, and that the visual stimulus and the combination of visual and auditory stimuli were tied for the fastest. Conclusions: The study concludes that there is a significant difference in efficiency among different localization aids and VEs of varied visual complexity, provided that the test player is familiar with each stimulus. / 3D audio and visual aids are common in modern computer games. This thesis details a study of the effectiveness of 3D audio and visual aids in three-dimensional environments. The study uses an experimental design in which test players sit in a computer-game-like setup and use visual and auditory aids to identify objects in these environments. The study confirms that there is a significant difference in effectiveness between different visual and auditory aids in three-dimensional environments.
7

Auditory motion: perception and cortical response

Sarrou, Mikaella 10 April 2019 (has links)
Summary: The localization of sound sources in humans is based on the binaural cues, interaural time and level differences (ITDs, ILDs), and the spectral cues (Blauert 1997). The ITDs relate to the timing of sound arrival at the two ears. For example, a sound located at the right side will arrive at the right ear earlier than at the left ear. The ILDs refer to the difference in sound pressure level between the two ears. In the example above, if the sound located at the right has a short wavelength, it will arrive at the right ear with higher sound pressure than at the left ear, because a sound with a short wavelength cannot bypass the head: the head creates an obstacle that diffracts the waves, so the ear closer to the sound source receives the sound with higher sound pressure. Because each binaural cue is associated with the wavelength of a sound, Rayleigh (1907) proposed the 'duplex theory' of sound source localization, suggesting that on the azimuth the ITDs are the main localization cue for low-frequency sounds and the ILDs are the main localization cue for high-frequency sounds. The spectral cues are based on the shape of the pinna's folds; they are very useful for sound source localization in elevation, but they also help in azimuthal localization (Schnupp et al. 2012). The contribution of the spectral cues to azimuthal localization arises from the fact that, due to the symmetrical position of the ears on the head, the binaural cues vary symmetrically as a function of spatial location (King et al. 2001). Whereas the ITDs have a very symmetrical distribution, the ILDs become more symmetrical the higher the sound frequency is.
This way, there are certain locations within the left-frontal and left-posterior hemifields, as well as the right-frontal and right-posterior hemifields, that share the same binaural cues, which makes the binaural cues ambiguous, so the auditory system cannot depend solely on them for sound source localization. To resolve this ambiguity, our auditory system uses the spectral cues, which help to disambiguate front-back confusion (King et al. 2001, Schnupp et al. 2012). The role of these cues in localizing sounds in our environment is well established, but their role in acoustic motion localization is not yet clear. This is the topic of the current thesis. The auditory localization cues are processed at the subcortical and cortical levels. The ITDs and ILDs are processed by different neurons along the auditory pathway (Schnupp et al. 2012). Their parallel processing stages seem to converge at the inferior colliculus, as evidence from cat experiments shows (Chase and Young 2005). But in humans, an electroencephalographic (EEG) study measuring the mismatch negativity (MMN; Schröger 1996) and a magnetoencephalography (MEG) study (Salminen et al. 2005) showed that these cues are not integrated. One of the models of the spatial representation of sound sources is Jeffress's place code (1948). This model suggests that each location of the azimuthal space is encoded differently, hence the name 'place code'. Evidence in support of this model comes from studies on the cat (Yin and Chan 1990). However, arguments against this model come from studies in gerbils, whose subcortical neurons respond maximally to locations that are outside the physiological range given the size of their heads (Pecka et al. 2008). An alternative model of auditory spatial encoding is the hemifield code (von Békésy 1960). This model proposes that subcortical neurons are separated into two populations, one tuned to the left hemifield and another tuned to the right.
Thus, the receptive field of the neurons is wide, and the estimation of the sound source location is derived from the balance of activity between these two populations. Evidence from human studies supports this model. Salminen and colleagues (2009) employed an adaptation paradigm during MEG recording. They presented sets of adaptor and probe stimuli that either had the same or different spatial locations. Their results showed that the response to the probe was reduced more when the adaptor was located at the far-left location than when the adaptor and probe shared the exact same location. Also, an EEG study on auditory motion showed that sounds that move from central to lateral locations elicit higher amplitudes than sounds that move in the opposite direction (Magezi and Krumbholz 2010). The authors concluded that these results reflect the movement of the sound source towards the location of maximal neuronal activity (also in Salminen et al. 2012). The ability to detect moving objects is well embedded in our nature. It equips predators and prey with the skills to survive, and in everyday life it enables us to interact with our environment. For example, the task of safely crossing a street (without traffic signs) is based on the encoding of visual and auditory features of moving vehicles. In the visual modality, the capability of the system to encode motion is based on motion-specific neurons (Mather 2011). In the auditory modality, the debate over whether such sensors exist is still ongoing. One theory of how the auditory system encodes motion is the 'snapshot' theory (Chandler and Grantham 1991, Grantham 1986). In a series of experiments, Grantham (1986) showed that auditory perception was not affected by features of motion such as velocity, but was more sensitive to distance as a spatial cue. Thus, he suggested that the encoding of auditory motion is based on the mechanisms that encode stationary sounds.
In other words, when a sound is moving, it activates the neurons that correspond to the points located along the trajectory of that sound, but in a serial manner. This way, the perception of auditory motion is based on 'snapshots' instead of processing motion as a complete feature. This mechanism of auditory motion processing is consistent with Jeffress's place code (1948). Animal studies on monkeys (Ahissar et al. 1992) and owls (Wagner et al. 1994) showed that neurons responded similarly to moving and stationary sounds. Evidence against this theory comes from a recent behavioural study that introduced velocity changes within acoustic motion and showed that participants were able to detect them (Locke et al. 2016). The authors concluded that if the 'snapshot' theory were true, these detections of velocity change would not occur. Another theory of auditory motion has emerged that posits motion-specific mechanisms in the brain (Warren et al. 2002, Ducommun et al. 2004, Poirier et al. 2017). A human study using functional magnetic resonance imaging (fMRI) and positron-emission tomography (PET) showed evidence of a motion-specific cortical network that includes the planum temporale and the parietotemporal operculum (Warren et al. 2002). The authors suggested that these areas are part of a posterior processing stream responsible for the analysis of auditory moving objects. Moreover, a recent primate fMRI study provided evidence of motion specificity in the activity of the posterior belt and parabelt regions of the primary auditory cortex (Poirier et al. 2017). The authors contrasted cortical responses to auditory motion with stationary and spectrotemporal sounds and found that the aforementioned cortical areas were activated only by moving sounds. All in all, the neuronal mechanism underlying auditory motion perception has been only vaguely described.
However, a growing body of evidence shows that specialized motion areas and mechanisms exist in the cortex. To study how exactly these mechanisms function, it is important to know which aspects of the stimulus paradigm affect the response. Study 1. In this study, I focused on eliciting the cortical motion-onset response (MOR) in the free field. This specific response is measured with EEG and is elicited when sound motion follows a stationary sound without any temporal gap between them. The stationary part serves as an adapting sound, and the onset of motion provides a release from adaptation, which gives rise to the MOR. One focus was to investigate the effect on the MOR when the initial part moves in space instead of being stationary. A secondary focus was the effect of stimulus frequency on the MOR. I hypothesized that, due to the adaptation provided by the initial stimulus part, the motion response would be smaller after moving than after stationary adaptation. I also expected that the effects of frequency would follow the literature: since the motion response is a late response, the amplitude would be smaller after high-frequency than after low-frequency stimulus presentation. The results showed that the current paradigm did not elicit the MOR. Comparison of the current experimental settings with those used previously in the literature showed that the MOR is strongly dependent on the adaptation time provided by the first part of the stimuli. Study 2. In this study, the stimulus characteristics were adapted after the failure to elicit the response in the previous study. In addition, I employed an active instead of a passive paradigm, since data from the literature show that the motion response is strongly dependent on the allocation of attention to auditory motion. With these changes, the elicitation of the MOR was successful.
This study examined the modulation of the MOR by the frequency range of the sound stimuli. Higher motion-response amplitude was expected after the presentation of stimuli with a high-frequency spectrum. I also studied the effects of hemifield of presentation and direction of motion on the MOR. The results showed that the early part of the motion response (cN1) was modulated by the frequency range of the sounds, with stronger amplitudes elicited by stimuli with a high frequency range. Study 3. This study focused on further analysis of the data collected in the previous study, here on the effects of the stimulus paradigm on the MOR. I hypothesized that after the adaptation provided by an initial moving part, lower amplitude would be observed in comparison to stimuli with an initial stationary part. These responses were also analysed for effects of stimulus frequency. The results showed that the stimulus paradigm with the initial moving part elicited a response that resembles the MOR but has lower amplitude. In addition, the effects of stimulus frequency evident from the previous analysis apply here as well, with high-frequency stimuli eliciting higher MOR amplitude than low-frequency stimuli. Study 4. This study examined further effects of stimulus characteristics on the MOR. Since the latency of the MOR in the previous study was somewhat later than usually reported in the literature, the focus here was to test the effects of motion velocity and adaptation duration on the MOR. The results showed that faster velocity elicited higher amplitudes in the peak-to-peak comparison. Separate analysis of the MOR components showed that this effect was based on higher cN1 amplitude. A separate analysis of the electrodes over the left and right hemispheres showed that the peak-to-peak amplitude was stronger at the electrodes over the right hemisphere.
Lastly, the strong adaptation created by the long duration of the initial stationary part provided abundant evidence of auditory motion processing, which led to the separation of the cP2 into its constituent parts. Study 5. This behavioural study focused on the effect of motion adaptation in the rear field on the perception of motion in the frontal field. The presentation of adaptors and probes within the left-frontal and left-rear fields targeted locations that share the same ITDs and ILDs; the disambiguation of auditory motion localization then depends on how these interaural cues interact with the spectral cues. A moving probe was presented in the left hemifield, following an adaptor that spanned either the same trajectory or a trajectory located in the opposite field (frontal/rear). Participants had to indicate the direction of the probe. The results showed that performance was worse when the adaptor and probe shared the same binaural cues, even if they were in different hemifields and their directions were opposite. However, the magnitude of the adaptation effect was smaller when the pair was in different hemifields, showing that motion-direction detection depends on the integration of interaural and spectral cues.
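The ITD described in the summary above can be sketched numerically with the classic spherical-head (Woodworth) approximation, ITD = (r/c)(θ + sin θ), where r is the head radius, c the speed of sound, and θ the azimuth. This is a textbook illustration of the binaural cue, not a model used in the thesis; the head-radius value is an assumed average:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def woodworth_itd(azimuth_deg, head_radius=0.0875):
    """Approximate interaural time difference (seconds) for a source on the
    azimuth, using the spherical-head (Woodworth) model:
    ITD = (r / c) * (theta + sin(theta)), theta in radians from straight ahead."""
    theta = math.radians(azimuth_deg)
    return (head_radius / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source at 90 degrees (directly to the right) yields the maximal ITD,
# about 0.66 ms for an average adult head; a frontal source yields zero.
print(round(woodworth_itd(90.0) * 1000, 2), "ms")  # prints roughly 0.66 ms
```

The symmetry the summary mentions is visible here too: the model gives the same ITD magnitude for a source at 60° in front as for its mirror location behind the interaural axis, which is exactly the front-back ambiguity that spectral cues resolve.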
8

Evaluation of an Auditory Localization Training System for Use in Portable Configurations: Variables, Metrics, and Protocol

Cave, Kara Meghan 22 January 2020 (has links)
Hearing protection can mitigate the harmful effects of noise, but for Service Members these devices can also obscure auditory situation awareness cues. Tactical Communication and Protective Systems (TCAPS) can restore critical cues through electronic circuitry with varying effects on localization. Evidenced by past research, sound localization accuracy can improve with training. The investigator hypothesized that training with a broadband stimulus and reducing the number of presentations would result in training transfer. Additionally, training transfer would occur with implementation of more user-engaged training strategies. The purpose of the experiments described in this study was to develop an optimized auditory azimuth-training protocol for use in a field-validated portable training system sensitive to differences among different TCAPS. A series of indoor experiments aimed to shorten and optimize a pre-existing auditory localization training protocol. Sixty-four normal-hearing participants underwent localization training. The goal of training optimization included the following objectives: 1) evaluate the effects of reducing stimulus presentations; 2) evaluate the effects of training with a broadband stimulus (but testing on untrained military-relevant stimuli); and 3) evaluate performance differences according to training strategies. Twenty-four (12 trained and 12 untrained) normal-hearing listeners participated in the field-validation experiment. The experiment evaluated localization training transfer from the indoor portable system to live-fire blanks in field. While training conducted on the portable system was predicted to transfer to the field, differences emerged between an in-the-ear and over-the-ear TCAPS. Three of four untrained stimuli showed evidence of training transfer. Shortening the training protocol also resulted in training transfer, but manipulating training strategies did not. 
A comparison of changes in localization scores from the indoor pretest to the field posttest demonstrated significant differences among listening conditions. Training improved accuracy and response time for the open ear and one of two TCAPS. Posttest differences between the two TCAPS were not statistically significant. Despite training, localization with TCAPS never matched the open ear. The portable apparatus employed in this study offers a means to evaluate the effects of TCAPS on localization. Equipped with a known effect on localization, TCAPS users can render informed decisions on the benefits or risk associated with certain devices. / Doctor of Philosophy / Hearing protection can mitigate the harmful effects of noise, but for Service Members these devices can obscure auditory situation awareness cues. Certain powered hearing protection can restore critical cues through electronic circuitry with varying effects on localization. Evidenced by past research, sound localization accuracy can improve with training. The investigator hypothesized that training with a broadband stimulus and reducing the number of presentations would result in auditory learning. Additionally, implementing more user-engaged training strategies would demonstrate more auditory learning. The purpose of the experiments described in this study was to develop an optimized auditory azimuth-training protocol for use in a field-validated training system sensitive to differences among active hearing protection. A series of indoor experiments aimed to shorten and optimize a pre-existing auditory localization training protocol. Sixty-four normal-hearing participants underwent localization training. 
Training optimization had the following objectives: 1) evaluate the effects of reducing stimulus presentations; 2) evaluate the effects of training with a broadband stimulus (but testing on untrained military-relevant stimuli); and 3) evaluate differences in localization performance according to training strategies. In the field-validation study, 12 trained and 12 untrained normal-hearing listeners participated. The experiment evaluated the transfer of localization learning from the indoor portable training system to live-fire blanks in the field. Training conducted on the portable system was predicted to transfer to the field, but differences were expected to emerge between an in-the-ear and an over-the-ear TCAPS. Three of four untrained stimuli showed evidence of localization learning. Shortening the protocol also resulted in localization learning, but manipulating training strategies did not. A comparison of changes in localization scores from the indoor pretest to the field posttest demonstrated significant differences among listening conditions. Training improved performance for the open ear and one of two active hearing protectors. Posttest differences between the two devices were not significant. Despite training, performance with hearing protection never equaled the open ear. The portable apparatus employed in this study offers a means to evaluate the effects of hearing protection on localization. Knowing the effects of a hearing protector on localization apprises users of the benefits and risks associated with that device.
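The pretest/posttest comparisons above rest on a localization accuracy score computed over azimuth responses. As a minimal sketch (the function names and scoring details are illustrative, not taken from the dissertation), the key subtlety is wrapping angular error correctly across the 0°/360° boundary:

```python
def azimuth_error(target_deg, response_deg):
    """Smallest signed angular difference between a target and a
    response azimuth, wrapped to the range (-180, 180] degrees."""
    diff = (response_deg - target_deg) % 360.0
    if diff > 180.0:
        diff -= 360.0
    return diff

def mean_absolute_error(targets, responses):
    """Mean absolute azimuth error across trials, in degrees."""
    errors = [abs(azimuth_error(t, r)) for t, r in zip(targets, responses)]
    return sum(errors) / len(errors)
```

Without the wraparound, a response at 350° to a target at 10° would score as a 340° error instead of the correct 20°, badly inflating pretest-to-posttest change scores.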
9

Auditory localisation : contributions of sound location and semantic spatial cues

Yao, Norikazu January 2007 (has links)
In open skill sports and other tasks, decision-making can be as important as physical performance. Whereas many studies have investigated visual perception, there is little research on auditory perception as one aspect of decision making. Auditory localisation studies have almost exclusively focussed on underlying processes, such as interaural time difference and interaural level difference. It is not known, however, whether semantic spatial information contained in the sound is actually used, and whether it assists pure auditory localisation. The aim of this study was to investigate the effect of spatial semantic information on auditory localisation. In Experiment One, this was explored by measuring whole body orientation to the words "Left", "Right", "Back", "Front" and "Yes", as well as a tone, each presented from left, right, front and back locations. Experiment Two explored the effect of the four spatial semantic words presented either from their matching locations, or from a position rotated 20 degrees anticlockwise. In both experiments there were two conditions, with subjects required to face the position indicated by either the sound location or the meaning of the word. Movements of the head were recorded in three dimensions with a Polhemus Fastrak system and analysed with a custom program. Ten young adult volunteers participated in each experiment. Reaction time, movement time, initial rotation direction, rotation direction at peak velocity, and the accuracy of the final position were the dependent measures. The results confirmed previous reports of confusions between front and back locations, that is, errors about the interaural axis. Unlike previous studies, many more back-to-front than front-to-back errors were made. 
The experiments provided some evidence for a spatial Stroop interference effect, that is, an effect on performance of conflicting information carried by the irrelevant dimension of the stimulus, but only for reaction time and initial movement direction, and only in the Word condition. The results are interpreted using a model of the processes needed to respond to the stimulus and produce an orienting movement. They suggest an asymmetric interference effect in which auditory localisation can interfere with localisation based on the semantic content of words, but not the reverse. In addition, final accuracy was unaffected by any interference, suggesting that these effects are restricted to the initial stages of response selection.
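The front-back confusions reported above have a simple geometric definition: the response lies on the wrong side of the interaural axis but is roughly mirror-symmetric to the target. A minimal sketch of how such trials might be classified (the names and the 0° = front, 90° = right azimuth convention are assumptions for illustration, not taken from the thesis):

```python
def mirror_front_back(azimuth_deg):
    """Reflect an azimuth about the interaural axis
    (0 deg = front, 90 deg = right, 180 deg = back)."""
    return (180.0 - azimuth_deg) % 360.0

def angular_distance(a_deg, b_deg):
    """Unsigned angular distance on the circle, 0..180 degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def is_front_back_confusion(target_deg, response_deg):
    """True when the response is closer to the front-back mirror
    of the target than to the target itself."""
    to_target = angular_distance(target_deg, response_deg)
    to_mirror = angular_distance(mirror_front_back(target_deg), response_deg)
    return to_mirror < to_target
```

Under this convention a response near 10° to a target at 170° counts as a back-to-front error, the direction of confusion the study found to dominate.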
10

On the origin of the extracellular potential in the nucleus laminaris of the barn owl

Kuokkanen, Paula 24 August 2012 (has links)
Barn owls are skilled night hunters and locate their prey primarily by hearing. Their auditory localization in the horizontal plane is based on interaural time differences, which are remapped into a place code in the brainstem by the network formed by nucleus magnocellularis (NM) and nucleus laminaris (NL). In NL, an extracellular potential (EP), the neurophonic potential (NP), can be recorded. It has a remarkable temporal precision of under 10 microseconds and mirrors the tone used for stimulation up to frequencies of 9 kHz. How can such precision be generated, and what can be learned about the origin of the potential in this neural structure? To address these questions, I study NPs recorded in vivo, which should in the future allow the link between neural activity and the EP to be better understood. Hundreds of neural current sources, all active coherently at a high firing rate, are needed to generate such an NP. The number and current strength of the neurons in NL are not sufficient to generate the NP. The majority of the sources consists of the signals that form the input to NL: the currents at the nodes of Ranvier along the axons from NM, and the synaptic currents onto the dendrites of NL neurons. Furthermore, NPs recorded in response to monaural stimulation can be summed linearly to reliably predict the response to binaural stimulation. Slight deviations from the prediction could be explained by individual neurons very close to the electrode contributing nonlinearly to the NP. In contrast to other neural structures studied so far, including homologous brain regions, the barn owl's NP reflects input rather than output signals. This structural difference could explain why the barn owl's brain achieves higher precision than that of other animals. 
/ The barn owl is a good night hunter and localizes its prey mainly with its auditory system. Auditory localization in the horizontal plane, based on interaural time differences, depends on the auditory brainstem circuit consisting of nucleus magnocellularis (NM) and nucleus laminaris (NL). An extracellular field potential (EFP), named the neurophonic, can be recorded in NL. It has a very high temporal precision of below 10 microseconds and replays the stimulating sound up to 9 kHz. In this thesis I study how an EFP with such precision can be generated, and what we can learn from these recordings about the system and about the origin of the neurophonic in NL. The answers will also help connect neural activity to the EFP more generally. Firstly, hundreds of sources, all firing at a high rate and in a highly phase-locked manner, are needed to generate the neurophonic in NL. The number of neurons in NL and the magnitude of their output currents are not sufficient on their own to give rise to the neurophonic. The majority of the neural sources convey the input from NM to NL, i.e., the currents from the nodes of Ranvier in the afferent axons from NM and the synaptic currents onto the dendrites of the NL neurons. Furthermore, the neurophonics in response to monaural stimulation sum linearly and accurately predict the neurophonics in response to binaural stimulation. This implies that the non-linear response of the NL neurons usually cannot be detected in the neurophonic, although a single NL neuron may make a minor contribution when it lies in the immediate vicinity of the electrode. All in all, the neurophonic in the barn owl's NL seems to reflect the inputs to the nucleus, whereas the output is usually what is well represented in the EFP; even in the homologous nuclei of the chick and of mammals, the neurophonic is thought to reflect the output rather than the input. This exceptional feature of the barn owl may be necessary for the high precision achieved in its NL.
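The linear-summation finding above has a direct computational form: the binaural neurophonic is predicted by the sample-wise sum of the two monaural responses, and the quality of the prediction can be quantified by correlating it with the measured binaural trace. A minimal sketch under those assumptions (function names are illustrative, not from the thesis):

```python
import math

def predict_binaural(left_monaural, right_monaural):
    """Predict the binaural neurophonic as the sample-wise sum of
    the two monaural responses (the linear-summation hypothesis)."""
    return [l + r for l, r in zip(left_monaural, right_monaural)]

def pearson_r(x, y):
    """Pearson correlation, used here to quantify how well the
    linear prediction matches a measured binaural response."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)
```

A correlation near 1 between `predict_binaural(...)` and the measured binaural trace is what the linearity result describes; systematic residuals would point to the nonlinear single-neuron contributions mentioned above.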
