The perception of object motion during self-motion

When we stand still and do not move our eyes or head, the motion of an object in the world, or the absence thereof, is directly given by the motion or stillness of its retinal image. Self-motion through the world, however, complicates this retinal image. During self-motion, the whole retinal image undergoes coherent global motion, called optic flow. Self-motion therefore causes the retinal motion of an object moving in the world to be confounded with a motion component due to the self-motion itself. How, then, do we perceive the motion of an object in the world when we ourselves are also moving?
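
This confound can be written compactly; the vector notation below is my own illustrative formalization and does not appear in the abstract:

\[ \mathbf{r} = \mathbf{o} + \mathbf{s}, \]

where \(\mathbf{r}\) is the object's retinal motion, \(\mathbf{o}\) is the component due to the object's scene-relative motion, and \(\mathbf{s}\) is the optic flow component induced by self-motion, which depends on the observer's movement and on the object's 3D position in the scene.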

Although non-visual information about self-motion, such as that provided by efference copies of motor commands and by vestibular stimulation, might play a role in this ability, it has recently been shown that the brain possesses a purely visual mechanism underlying scene-relative object motion perception during self-motion. According to the flow parsing hypothesis developed by Rushton and Warren (2005; Warren & Rushton, 2007, 2009b), the brain uses its sensitivity to optic flow to detect and globally remove the retinal motion due to self-motion, and thereby recover the scene-relative motion of objects.
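
A minimal sketch of flow parsing as a gained global subtraction may help fix ideas. Everything here is an illustrative assumption of mine: the function names, the single-depth radial-expansion flow model, and the gain parameter (which anticipates the measurement paradigm described below) are not taken from the thesis.

```python
import numpy as np

def self_motion_flow(pos, foe, expansion_rate):
    """Estimated optic flow at retinal position `pos` caused by forward
    self-motion: radial expansion away from the focus of expansion (FOE).
    Assumes a single scene depth -- a deliberate simplification."""
    return expansion_rate * (np.asarray(pos, float) - np.asarray(foe, float))

def parse_flow(retinal_motion, pos, foe, expansion_rate, gain=1.0):
    """Recover scene-relative object motion by subtracting the estimated
    self-motion component, weighted by the flow parsing gain.
    gain = 1.0 would be geometrically complete subtraction; the thesis
    reports that measured gains fall short of this."""
    return np.asarray(retinal_motion, float) - gain * self_motion_flow(pos, foe, expansion_rate)

# A stationary object 2 deg right of the FOE during forward self-motion:
# its retinal motion is pure flow, so complete parsing yields zero motion.
foe = (0.0, 0.0)
flow_only = self_motion_flow((2.0, 0.0), foe, expansion_rate=0.5)
print(parse_flow(flow_only, (2.0, 0.0), foe, 0.5, gain=1.0))  # -> [0. 0.]
print(parse_flow(flow_only, (2.0, 0.0), foe, 0.5, gain=0.8))  # residual flow
```

With a gain below 1, a residual self-motion component remains in the recovered motion; quantifying that shortfall is exactly what the nulling paradigm described next is for.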

Research into this perceptual ability has so far been largely qualitative. In this thesis, I therefore develop a retinal motion nulling paradigm to measure the gain with which the flow parsing mechanism uses optic flow to remove the self-motion component from an object's retinal motion. I use this paradigm to investigate (1) how accurate scene-relative object motion perception during self-motion can be when based on visual information alone, (2) whether the flow parsing process depends on a percept of the direction of self-motion, and (3) the tuning of flow parsing, i.e., how it is modulated by changes in various stimulus aspects.
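
Using the notation introduced above (again mine, not the thesis's exact definition), the gain \(g\) measured by such a nulling paradigm can be read as the fraction of the self-motion component that is subtracted from the object's retinal motion:

\[ \mathbf{p} = \mathbf{r} - g\,\mathbf{s} = \mathbf{o} + (1 - g)\,\mathbf{s}, \]

so the perceived motion \(\mathbf{p}\) of a physically stationary object (\(\mathbf{o} = 0\)) is nulled only when \(g = 1\); a gain below 1 leaves a residual self-motion component in the percept.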

The results reveal that although adding monocular or binocular depth information to the display, so as to precisely specify the moving object's 3D position in the scene, improved the accuracy of flow parsing, the flow parsing gain never reached the level required by the scene geometry. Furthermore, the flow parsing gain was lower at higher eccentricities from the focus of expansion in the flow field, and it was strongly modulated by changes in the angle between the self-motion and object motion components of the moving object's retinal motion, by the speeds of these components, and by the density of the flow field. Lastly, flow parsing was not affected by illusory changes in the perceived direction of self-motion.

In conclusion, visual information alone is not sufficient for accurate perception of scene-relative object motion during self-motion. Furthermore, flow parsing takes the 3D position of the moving object in the scene into account and is not a uniform global subtraction process. The observed tuning characteristics differ from those of local perceived motion interactions, providing evidence that flow parsing is a process separate from these local motion interactions. Finally, flow parsing does not depend on a prior percept of self-motion direction and instead directly uses the input retinal motion to construct percepts of scene-relative object motion during self-motion.

Identifier: oai:union.ndltd.org:HKU/oai:hub.hku.hk:10722/196466
Date: January 2013
Creators: Niehorster, Diederick Christian
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Source Sets: Hong Kong University Theses
Language: English
Detected Language: English
Type: PG_Thesis
Degree: Doctor of Philosophy (Doctoral), Psychology
Version: published_or_final_version
Rights: Creative Commons: Attribution 3.0 Hong Kong License. The author retains all proprietary rights (such as patent rights) and the right to use in future works.
Relation: HKU Theses Online (HKUTO)
