Eye movements during a scene rotation task were measured in two experiments. Two desktop scenes (each consisting of three office objects on a square desktop) were presented consecutively, and participants judged whether the two scenes were the same or different. On same trials, the two scenes were either identical or one was a rotated version of the other. On different trials, the scene frame was the same as on same trials, but either the locations or the orientations of some of the objects were changed. Eye movement measures were obtained as real-time indices of information processing. During the task, the eyes dwelled on an object region longer when the scene was rotated further (i.e., gaze duration increased), but only after the first 900 ms of scanning. This result accords with a model in which (a) initial encoding takes place before an alignment process is initiated and (b) alignment is piecemeal, proceeding on a gaze-by-gaze basis. As in previous scene rotation experiments, the slope of the mental rotation function differed between conditions: response latencies increased more strongly with rotation angle in the orientation-change condition than in the location-change condition, a difference observed mainly in gaze duration. In contrast, response times in the Y (vertical)-axis rotation conditions were longer than those in the X (horizontal)- and Z (line-of-sight)-axis rotation conditions, a difference that corresponds to an increase in the number (rather than the duration) of gazes in the Y-axis rotation conditions. Furthermore, when objects switched their locations, the changed object was fixated earlier than an unchanged object, suggesting that location changes are detected not only by foveal vision but also by parafoveal vision. In Experiment 2, the desktop was removed from the scene in half of the conditions. In these conditions, location-changed objects were no longer fixated earlier than unchanged objects, and the eyes needed to visit objects more often, indicating that the desktop frame facilitates the piecemeal alignment process. The results are discussed in terms of viewpoint-dependent models of object recognition.
Identifier | oai:union.ndltd.org:UMASS/oai:scholarworks.umass.edu:dissertations-3467 |
Date | 01 January 2001 |
Creators | Nakatani, Chie |
Publisher | ScholarWorks@UMass Amherst |
Source Sets | University of Massachusetts, Amherst |
Language | English |
Detected Language | English |
Type | text |
Source | Doctoral Dissertations Available from Proquest |