1

The role of uncertainty and reward on eye movements in natural tasks

Sullivan, Brian Thomas, 18 July 2012
The human visual system is remarkable for the variety of functions it serves and the range of conditions under which it performs, from detecting small brightness changes to guiding complex movements. Because the human eye is foveated, humans continually make eye and body movements to acquire new visual information. The mechanisms that control this acquisition, and the associated sequencing of eye movements in natural circumstances, are not well understood. While the visual system has highly parallel inputs, the fovea must be moved serially: a decision process continually evaluates peripheral information and selects the next fixation target. Prior explanations of fixation selection have largely relied on computer-vision algorithms that find image regions of high salience, models that reduce the uncertainty or entropy of visual features, and heuristic models. However, these methods are poorly suited to natural circumstances in which humans are mobile and eye movements are closely coordinated to gather ongoing task information. Following the computational model of gaze scheduling proposed by Sprague and Ballard (2004), I argue that a systematic explanation of human gaze behavior in complex natural tasks must represent task goals, a reward structure over those goals, and the uncertainty about progress toward them. Given these variables, one can formulate a decision computation that chooses fixation targets by the expected value of uncertainty-weighted reward. I present two studies of human gaze behavior in a simulated driving task that provide evidence of the visual system's sensitivity to uncertainty and reward. In these experiments, observers tended to monitor an information source more closely when it had a high level of uncertainty, but only when it was also associated with high reward.
Given this behavioral finding, I then present a set of simple candidate models that attempt to explain how humans schedule the acquisition of information over time. These simple models prove inadequate to describe coordinated information acquisition in driving. I then present an extended version of the gaze scheduling model adapted to our particular driving task. This formulation yields ordinal predictions about how humans use reward and uncertainty to control eye movements and is generally consistent with observed human behavior. I conclude by reviewing the main results and discussing the merits of the computational models used, future behavioral experiments that would more directly test the gaze scheduling model, and revisions to future implementations of the model to better capture human gaze behavior.
2

Computational Models of Perceptual Space: From Simple Features to Complex Shapes

Pramod, R T, January 2014
Dissimilarity plays an important role in object recognition, but measuring perceptual dissimilarity between objects is non-trivial because it is not equivalent to pixel dissimilarity (for example, two white-noise images appear very similar even though they differ at every corresponding pixel). Visual search, however, allows us to measure perceptual dissimilarity reliably: when the target object is dissimilar to the distracters, search is easy, and it is difficult otherwise. Even though we can measure perceptual dissimilarity between objects, we still understand neither the underlying mechanisms nor the visual features involved in computing it. In this thesis I explore perceptual dissimilarity in two studies: by examining known simple features and how they combine, and by using computational models to understand or discover complex features. In the first study, we asked how the dissimilarity between two simple objects with known features can be predicted from the dissimilarities between the individual features. Specifically, we investigated how search for targets differing from the distracters in multiple features (intensity, length, orientation) relates to searches for targets differing in each feature alone. We found that multiple-feature dissimilarities could be predicted as a linear combination of individual-feature dissimilarities. We also demonstrated, for the first time, that the aspect ratio of an object emerges as a novel feature in visual search. This work has been published in the Journal of Vision (Pramod & Arun, 2014). Having established in the first study that simple features combine linearly, we devised a second study to investigate dissimilarities between complex shapes.
Since shape is one of the most salient and complex features in object representation, we chose silhouettes of animals and abstract objects to explore the nature of dissimilarity computations. We ran visual search on pairs of these silhouettes in humans to estimate perceptual dissimilarity, and then used various computational models of shape representation (such as Fourier descriptors, curvature scale space, and the HMAX model) to see how well they predict the observed dissimilarities. Many of these models predicted the perceptual dissimilarities for a large number of object pairs; however, we also observed many cases where they failed. The manuscript describing this study is in preparation.
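The first study's central claim — that multiple-feature dissimilarities are a linear combination of individual-feature dissimilarities — amounts to fitting per-feature weights by least squares. The sketch below illustrates that fitting procedure on synthetic data; the dissimilarity values and weights are invented for the example and are not the published measurements or coefficients.

```python
import numpy as np

# Hypothetical sketch: predict combined-search dissimilarity as a linear
# combination of single-feature dissimilarities. Data are synthetic.

# Each row is one target-distracter pair:
# (intensity dissimilarity, length dissimilarity, orientation dissimilarity).
X = np.array([
    [0.2, 0.0, 0.0],
    [0.0, 0.5, 0.0],
    [0.0, 0.0, 0.3],
    [0.2, 0.5, 0.0],
    [0.2, 0.0, 0.3],
    [0.0, 0.5, 0.3],
])

# Illustrative "true" per-feature weights used to generate the observed
# dissimilarities (in the experiments these would come from search data).
w_true = np.array([1.0, 0.8, 0.6])
y = X @ w_true

# Least-squares estimate of the per-feature weights from the observations:
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ w
```

Because the synthetic data are exactly linear, the fit recovers the weights; with real search data the interesting question, as the abstract notes, is how much of the observed dissimilarity such a linear model can explain.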
