1

A neural model of head direction calibration during spatial navigation: learned integration of visual, vestibular, and motor cues

Fortenberry, Bret (January 2012)
Thesis (Ph.D.)--Boston University, 2012.

PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.

Effective navigation depends upon reliable estimates of head direction (HD). Visual, vestibular, and outflow motor signals combine for this purpose in a brain system that includes the dorsal tegmental nucleus, the lateral mammillary nuclei (LMN), the anterior dorsal thalamic nucleus (ADN), and the postsubiculum (PoS). Learning is needed to combine such different cues and to provide reliable estimates of HD. A neural model is developed to explain how these three types of signals combine adaptively within the above brain regions to generate a consistent and reliable HD estimate, in both light and darkness. The model first establishes HD cells, each tuned to a preferred head direction: the firing rate is maximal at the preferred direction and decreases as the head turns away from it. In the brain, HD cells fire in anticipation of a head rotation. This anticipation is measured by the anticipated time interval (ATI), which is greater at early processing stages of the HD system than at later stages: the ATI is greatest in the LMN (approximately 70 ms), smaller in the ADN (approximately 25 ms), and absent at the last HD stage, the PoS. In the model, these HD estimates are controlled at the corresponding processing stages by combinations of vestibular and motor signals as they become adaptively calibrated to produce a correct HD estimate. The model also simulates how visual cues anchor HD estimates through adaptive learning when the cue is in the animal's field of view. Such learning gains control over cell firing within minutes. As in the data, distal visual cues are more effective than proximal cues for anchoring the preferred direction. A novel cue introduced in either a novel or a familiar environment is learned and gains control over a cell's preferred direction within minutes. Turning out the lights or removing all familiar cues does not change the cells' firing activity, but drift may accumulate in their preferred directions.
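The abstract's description of HD-cell tuning and of heading estimates driven by vestibular/motor (angular-velocity) signals can be pictured with a small toy sketch. The Python snippet below is only an illustration under assumed parameters (cell count, tuning sharpness, drift rate); it is not the thesis's model.

```python
# Illustrative sketch (not the thesis's model): a ring of head-direction (HD)
# cells with von Mises-like tuning, updated by integrating an angular-velocity
# (vestibular/motor) signal. All parameter values are assumptions.
import numpy as np

N_CELLS = 60                                    # preferred directions on a ring
preferred = np.linspace(0, 2 * np.pi, N_CELLS, endpoint=False)
KAPPA = 4.0                                     # tuning sharpness (assumed)

def hd_tuning(head_direction: float) -> np.ndarray:
    """Firing rates: maximal at each cell's preferred direction,
    falling off as the head turns away from it."""
    return np.exp(KAPPA * (np.cos(head_direction - preferred) - 1.0))

def integrate_heading(hd0: float, angular_velocity: np.ndarray,
                      dt: float = 0.001, drift: float = 0.0) -> np.ndarray:
    """Path-integrate heading from angular velocity; in darkness an
    uncorrected `drift` term accumulates, as the abstract describes."""
    hd = hd0 + np.cumsum((angular_velocity + drift) * dt)
    return np.mod(hd, 2 * np.pi)

# Example: a constant 90 deg/s turn for one second, then read out the peak cell.
omega = np.full(1000, np.deg2rad(90.0))
trajectory = integrate_heading(0.0, omega)
rates = hd_tuning(trajectory[-1])
decoded = preferred[np.argmax(rates)]
print(f"true heading ~ {np.rad2deg(trajectory[-1]):.1f} deg, "
      f"decoded ~ {np.rad2deg(decoded):.1f} deg")
```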
2

Auditory spatial adaptation: generalization and underlying mechanisms

Lin, I-Fan (January 2009)
Listeners can rapidly adjust how they localize auditory stimuli when consistently trained with spatially discrepant visual feedback. However, relatively little is known about which auditory processing stages are altered by adaptation or about the mechanisms that cause the observed perceptual and behavioral changes. Experiments were conducted to test how spatial adaptation generalizes to novel frequencies and the degree to which perceptual recalibration and cognitive adjustment contribute to spatial adaptation. A neural network model was developed to help explain and predict the behavioral results. Adaptation was found to generalize across frequency when both training and reference stimuli were dominated by interaural time differences (ITDs), but not when the training stimuli were dominated by interaural level differences (ILDs) and the reference stimuli were dominated by ITDs. These results suggest that spatial adaptation occurs after ITDs are integrated across frequency, but before ITDs and ILDs are integrated. Both perceptual and cognitive changes were found to contribute to short-term auditory adaptation. However, their relative contributions depended on the form of the rearrangement of auditory space. For both a magnification and a rotation of auditory space, at least some of the adaptation comes from perceptual recalibration; however, for a magnification of auditory space, cognitive adjustment contributed less to the observed adaptation than for a rotation. A hierarchical, supervised-learning model of short-term spatial perceptual recalibration was developed. Discrepancies between the perceived and correct locations drive learning by adjusting how auditory inputs map to exocentric locations so as to reduce error. Learning affects locations near the input location through a spatial kernel with limited extent. Model results fit the observed evolution of localization errors and account for individual differences by adjusting only three model parameters: the internal sensory noise, the width of the spatial learning kernel, and the threshold for detecting an error. These results demonstrate how training helps listeners calibrate spatial auditory perception. This work can help inform the design of hearing aids and hearing-protection devices to ensure that listeners receive sufficient information to localize sounds accurately, despite distortions of auditory cues caused by these devices.
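The three-parameter recalibration model summarized above (sensory noise, spatial kernel width, error-detection threshold) lends itself to a compact sketch. The following Python code is a hypothetical illustration of that supervised, kernel-weighted error-correction idea; all parameter names and values are assumptions, not taken from the thesis.

```python
# Illustrative sketch (assumptions, not the thesis's implementation):
# perceived-vs-correct location errors adjust a learned correction map, a
# Gaussian spatial kernel spreads the update to nearby locations, and errors
# below a threshold produce no learning.
import numpy as np

rng = np.random.default_rng(0)

locations = np.linspace(-90.0, 90.0, 181)       # azimuth grid in degrees
correction = np.zeros_like(locations)           # learned remapping (deg)

SENSORY_NOISE = 3.0     # internal sensory noise, std in degrees (assumed)
KERNEL_WIDTH = 15.0     # spatial learning kernel width in degrees (assumed)
ERROR_THRESHOLD = 2.0   # minimum detectable error in degrees (assumed)
LEARNING_RATE = 0.2

def trial(true_azimuth: float, visual_feedback: float) -> None:
    """One training trial: localize with noise, compare to visual feedback,
    and spread the update over nearby locations via the kernel."""
    idx = np.argmin(np.abs(locations - true_azimuth))
    perceived = true_azimuth + correction[idx] + rng.normal(0, SENSORY_NOISE)
    error = visual_feedback - perceived
    if abs(error) < ERROR_THRESHOLD:
        return                                   # error too small to detect
    kernel = np.exp(-0.5 * ((locations - true_azimuth) / KERNEL_WIDTH) ** 2)
    correction[:] += LEARNING_RATE * error * kernel

# Example: train with feedback rotated +10 deg relative to the true source.
for _ in range(200):
    az = rng.uniform(-60, 60)
    trial(az, az + 10.0)
print(f"learned shift near 0 deg: {correction[90]:.1f} deg (target ~ +10)")
```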
3

Learning and recognizing patterns of visual motion, color and form

Cunningham, Robert Kevin (January 1998)
Animal vision systems make use of information about an object's motion, color, and form to detect and identify predators, prey, and mates. Neurobiological evidence from the macaque monkey indicates that visual processing is separated into two streams: the magnocellular, primarily for motion, and the parvocellular, primarily for color and form. Two computational systems are developed using key functional properties of the two postulated physiological streams. Each produces invariant representations that act as input to separate copies of a new learning and recognition architecture, Gaussian ARTMAP with covariance terms (GAC). Finally, perceptual experiments are conducted to explore the ability of the human form/color system to detect and recognize targets in photo-realistic imagery. GAC, the component common to both computational systems, retains the on-line learning capabilities of previous ARTMAP architectures, but uses categories that have a location and orientation in the dimensions of the feature space. This architecture is shown to have lower error rates than Fuzzy ARTMAP and Gaussian ARTMAP for all data sets examined, and is used to cluster motion and spectral parameters. For the motion system, local velocity measures of image features are obtained by the method of Convected Activation Profiles. This method is extended and shown to accurately estimate the velocity normal to rotating and translating lines, or of line ends, points, and curves. These local measures are grouped into neighborhoods, and the collection of motions within a neighborhood is described using orientation-invariant deformation parameters. Multiple parameters obtained by examining maneuvering objects are clustered, and motions that are characteristic of specific objects are identified. For the form and color system, multi-spectral measurements are made invariant to some fluctuations of local luminance and atmospheric transmissivity by within-band and across-band shunting networks. The resulting color-processed spectral patterns are clustered to enhance the performance of a machine target detection algorithm. Psychophysicists have examined human target detection capabilities primarily via scenes of polygonal targets and distractors on uniform backgrounds. Techniques are developed and experiments are performed to assess human performance of visual search for a complex object in a cluttered scene.
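The within-band and across-band shunting normalization mentioned above has a simple steady-state form that can be sketched directly. The snippet below is an assumed, minimal illustration of divisive shunting across spectral bands; it is not the thesis's GAC architecture or its full color-processing pipeline, and the constants are placeholders.

```python
# Illustrative sketch (not the thesis's system): steady-state shunting
# normalization across spectral bands, the kind of divisive interaction the
# abstract credits with discounting local luminance. Constants are assumed.
import numpy as np

def shunting_steady_state(bands: np.ndarray, A: float = 1.0,
                          B: float = 1.0) -> np.ndarray:
    """Steady state of dx_i/dt = -A*x_i + (B - x_i)*I_i - x_i * sum_{j!=i} I_j,
    which gives x_i = B*I_i / (A + sum_j I_j): a ratio code that is nearly
    invariant to scaling all bands by a common luminance factor."""
    total = bands.sum(axis=-1, keepdims=True)
    return B * bands / (A + total)

spectrum = np.array([12.0, 30.0, 8.0])           # hypothetical band intensities
print(shunting_steady_state(spectrum))
print(shunting_steady_state(4.0 * spectrum))     # brighter scene, similar ratios
```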
