
The role of uncertainty and reward on eye movements in natural tasks

The human visual system is remarkable for the variety of functions it serves and the range of conditions under which it operates, from detecting small changes in brightness to guiding complex movements. Because the human eye is foveated, humans continually make eye and body movements to acquire new visual information. The mechanisms that control this acquisition, and the associated sequencing of eye movements under natural circumstances, are not well understood.
While the visual system receives highly parallel input, the fovea must be moved serially. A decision process therefore operates continually, evaluating peripheral information and selecting the next fixation target. Prior explanations of fixation selection have largely relied on computer vision algorithms that identify image regions of high salience, on models that reduce the uncertainty or entropy of visual features, and on heuristic models.
However, these methods are poorly suited to natural circumstances in which humans are mobile and eye movements are closely coordinated to gather ongoing task information. Following a computational model of gaze scheduling proposed by Sprague and Ballard (2004), I argue that a systematic explanation of human gaze behavior in complex natural tasks must represent task goals, a reward structure for those goals, and the uncertainty about progress toward those goals. If these variables are represented, it is possible to formulate a decision computation for choosing fixation targets based on the expected value of uncertainty-weighted reward.
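As a concrete illustration of this kind of decision computation, the sketch below scores each task-relevant information source by the product of its reward and its current uncertainty and directs gaze to the highest-scoring source. It is a minimal toy example, not the gaze scheduling model developed in the thesis; the module names, reward values, and uncertainty growth rates are hypothetical.

```python
# Minimal illustrative sketch of uncertainty-weighted reward for gaze selection.
# Each task module has a reward (cost of acting on a poor estimate) and an
# uncertainty that grows while the module is not fixated. Gaze is directed to
# the module whose update has the highest expected value, approximated here as
# reward * current uncertainty. All numbers below are hypothetical.

modules = {
    # name: (reward, uncertainty growth per time step)
    "follow_lead_car": (10.0, 0.5),
    "keep_in_lane":    (5.0,  1.0),
    "monitor_speed":   (1.0,  0.8),
}

uncertainty = {name: 0.0 for name in modules}

for t in range(10):
    # Expected value of updating each module: uncertainty-weighted reward.
    expected_value = {
        name: reward * uncertainty[name]
        for name, (reward, _) in modules.items()
    }
    target = max(expected_value, key=expected_value.get)

    # Fixating a module resets its uncertainty; unattended modules grow.
    for name, (_, growth) in modules.items():
        uncertainty[name] = 0.0 if name == target else uncertainty[name] + growth

    print(f"t={t}: fixate {target}")
```

In this toy scheme a high-reward source is revisited more often, and a high-uncertainty source is revisited sooner, which is the qualitative pattern the decision computation is meant to capture.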
I present two studies of human gaze behavior in a simulated driving task that provide evidence of the human visual system’s sensitivity to uncertainty and reward. In these experiments, observers tended to monitor an information source more closely when it had a high level of uncertainty, but only when that information was also associated with high reward. Given this behavioral finding, I then present a set of simple candidate models that attempt to explain how humans schedule the acquisition of information over time. These simple models prove inadequate to describe the process of coordinated information acquisition in driving. I therefore present an extended version of the gaze scheduling model adapted to the particular driving task used here. This formulation yields ordinal predictions about how humans use reward and uncertainty in the control of eye movements and is generally consistent with observed human behavior.
I conclude by reviewing the main results and discussing the merits of the computational models used, possible future behavioral experiments that would more directly test the gaze scheduling model, and revisions to future implementations of the model to better capture human gaze behavior.

Identifier: oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/ETD-UT-2012-05-4995
Date: 18 July 2012
Creators: Sullivan, Brian Thomas
Source Sets: University of Texas
Language: English
Detected Language: English
Type: thesis
Format: application/pdf
