1

A neural model of head direction calibration during spatial navigation: learned integration of visual, vestibular, and motor cues

Fortenberry, Bret. January 2012.
Thesis (Ph.D.)--Boston University, 2012

PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.

Effective navigation depends upon reliable estimates of head direction (HD). Visual, vestibular, and outflow motor signals combine for this purpose in a brain system that includes the dorsal tegmental nucleus, lateral mammillary nuclei (LMN), anterior dorsal thalamic nucleus (ADN), and postsubiculum (PoS). Learning is needed to combine such different cues and to provide reliable estimates of HD. A neural model is developed to explain how these three types of signals combine adaptively within the above brain regions to generate a consistent and reliable HD estimate, in both light and darkness. The model starts by establishing HD cells, each tuned to a preferred head direction: the firing rate is maximal at the preferred direction and decreases as the head turns away from it. In the brain, HD cells fire in anticipation of a head rotation. This anticipation is measured by the anticipatory time interval (ATI), which is greater in early processing stages of the HD system than at later stages. The ATI is greatest in the LMN at -70 ms, is reduced in the ADN to -25 ms, and is absent in the last HD stage, the PoS. In the model, these HD estimates are controlled at the corresponding processing stages by combinations of vestibular and motor signals as they become adaptively calibrated to produce a correct HD estimate. The model also simulates how visual cues anchor HD estimates through adaptive learning when a cue is in the animal's field of view. Such learning gains control over cell firing within minutes. As in the data, distal visual cues are more effective than proximal cues for anchoring the preferred direction. A novel cue introduced in either a novel or a familiar environment is learned and gains control over a cell's preferred direction within minutes. Turning out the lights or removing all familiar cues does not change the cells' firing activity, but drift may accumulate in the cell's preferred direction.
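The tuning described above (firing maximal at the preferred direction, falling off as the head turns away) can be sketched with a von Mises-style tuning curve, a common choice in HD-cell modeling. This is a minimal illustration, not the dissertation's actual equations; the function name `hd_tuning` and the parameter values `r_max` and `kappa` are illustrative assumptions.

```python
import numpy as np

def hd_tuning(theta, theta_pref, r_max=40.0, kappa=4.0):
    """Von Mises-style tuning curve for a head direction cell:
    the firing rate peaks at the preferred direction theta_pref
    and decreases as the head turns away from it.
    r_max (peak rate, Hz) and kappa (tuning sharpness) are
    illustrative values, not fitted parameters."""
    return r_max * np.exp(kappa * (np.cos(theta - theta_pref) - 1.0))

theta = np.linspace(-np.pi, np.pi, 361)   # head directions, radians
rates = hd_tuning(theta, theta_pref=0.0)  # a cell tuned to 0 rad
peak = theta[np.argmax(rates)]            # peak sits at the preferred direction
```

Sharper tuning (larger `kappa`) narrows the bump around the preferred direction, which is the shape typically reported for HD cells in LMN, ADN, and PoS.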
2

Auditory spatial adaptation: generalization and underlying mechanisms

Lin, I-Fan. January 2009.
Listeners can rapidly adjust how they localize auditory stimuli when consistently trained with spatially discrepant visual feedback. However, relatively little is known about which auditory processing stages are altered by adaptation or about the mechanisms that cause the observed perceptual and behavioral changes. Experiments were conducted to test how spatial adaptation generalizes to novel frequencies and the degree to which perceptual recalibration and cognitive adjustment contribute to spatial adaptation. A neural network model was developed to help explain and predict behavioral results. Adaptation was found to generalize across frequency when both training and reference stimuli were dominated by interaural time differences (ITDs), but not when the training stimuli were dominated by interaural level differences (ILDs) and the reference stimuli were dominated by ITDs. These results suggest that spatial adaptation occurs after ITDs are integrated across frequency, but before ITDs and ILDs are integrated. Both perceptual and cognitive changes were found to contribute to short-term auditory adaptation. However, their relative contributions depended on the form of the rearrangement of auditory space. For both a magnification and a rotation of auditory space, at least some of the adaptation comes from perceptual recalibration. However, for a magnification of auditory space, cognitive adjustment contributed less to the observed adaptation than for a rotation. A hierarchical, supervised-learning model of short-term spatial perceptual recalibration was developed. Discrepancies between the perceived and correct locations drive learning by adjusting how auditory inputs map to exocentric locations so as to reduce error. Learning affects locations near the input location through a spatial kernel of limited extent.
Model results fit the observed evolution of localization errors and account for individual differences by adjusting only three model parameters: the internal sensory noise, the width of the spatial learning kernel, and the threshold for detecting an error. Results demonstrate how training helps listeners calibrate spatial auditory perception. This work can help inform the design of hearing aids and hearing-protection devices to ensure that listeners receive sufficient information to localize sounds accurately, despite distortions of auditory cues caused by these devices.
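The learning rule described above (feedback-driven error correction applied locally through a spatial kernel, governed by sensory noise, kernel width, and an error threshold) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the dissertation's actual model: the function `recalibrate`, the Gaussian form of the kernel, and all parameter values are assumptions introduced for the example.

```python
import numpy as np

def recalibrate(mapping, grid, input_loc, feedback_loc,
                noise_sd=2.0, kernel_width=10.0, error_threshold=1.0,
                lr=0.2, rng=None):
    """One trial of error-driven spatial recalibration (illustrative sketch).

    `mapping` holds the current correction (degrees) applied at each
    location in `grid`. The three named parameters stand in for the
    model's free parameters: internal sensory noise, the width of the
    spatial learning kernel, and the error-detection threshold."""
    rng = rng if rng is not None else np.random.default_rng()
    sensed = input_loc + rng.normal(0.0, noise_sd)       # noisy auditory estimate
    perceived = sensed + np.interp(sensed, grid, mapping)
    error = feedback_loc - perceived                     # visual feedback vs. percept
    if abs(error) > error_threshold:                     # only clear errors drive learning
        kernel = np.exp(-0.5 * ((grid - sensed) / kernel_width) ** 2)
        mapping = mapping + lr * error * kernel          # local update near the input
    return mapping, perceived

# Usage: adapt to a 15-degree rotation of auditory space at one location.
grid = np.linspace(-60.0, 60.0, 121)
mapping = np.zeros_like(grid)
rng = np.random.default_rng(0)
for _ in range(300):
    mapping, _ = recalibrate(mapping, grid, input_loc=0.0,
                             feedback_loc=15.0, rng=rng)
correction_at_zero = np.interp(0.0, grid, mapping)
```

Because the kernel has limited spatial extent, locations far from the trained input are barely affected, which is how a model of this form produces generalization that falls off with distance from the training location.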
