
Overt Selection and Processing of Visual Stimuli

To see is to act. Most obviously, we continuously scan different regions of the world with movements of our eyes, head, and body. These overt movements are intrinsically linked to two other, more subtle, actions: (1) the prior process of deciding where to look; and (2) the prediction of the sensory consequences of overt movements. In this thesis I describe a series of experiments that investigate the mechanisms underlying the first process and evaluate the existence of the second.
The aiming of eye movements, or spatial visual selection, has traditionally been explained by either goal-oriented or stimulus-driven mechanisms. Our experiments address the tension in this dichotomy and present further evidence for two other, less commonly considered, types of mechanism: global orienting based on non-visual cues, and viewing biases that are independent of stimulus and task.
Firstly, we investigate whether stimulus-driven selection based on low-level features can operate independently of top-down constraints. If so, inhibiting areas higher in the hierarchy of visual processing and motor control should increase the influence of low-level feature saliency. The results presented in Chapter 2 show that inhibition of the posterior parietal cortex in humans, whether by a permanent lesion or by transient inhibition, results in similar effects: an increased selection of locations characterized by higher contrast in low-level features. These results thus support a selection system in which stimulus-driven decisions are usually masked by top-down processes but can nevertheless operate independently of them.
Secondly, we investigate how free-viewing selection can be guided by non-visual content. The work in Chapter 3 indicates that touch is not only an effective local spatial cue but, during free viewing, can also be a powerful global orienting signal. This effect always occurs in an external frame of reference: gaze is drawn to the side where the stimulation occurred in the external world, rather than being anchored to the side of the body that was stimulated.
Thirdly, we investigate whether selection can operate without reference to any sensory stimulus or goal. Results from the experiments presented in Chapters 2 to 5 demonstrate both normal and pathological biases during free viewing. First, patients with neglect syndrome show a strong bias to explore only the right side of images (Chapter 2). In contrast, healthy subjects present a strong leftward bias, but only during the early phase of exploration (Chapters 3 and 4). Finally, patients with Parkinson’s disease show a subtle overall rightward bias and no initial leftward bias (Chapter 5).
The results described so far indicate that visual selection operates through diverse mechanisms that are not restricted to the evaluation of visual inputs according to top-down constraints. Instead, selection can be guided solely by the stimulus, which can be multimodal in nature and can result in global rather than local orienting, and by strong biases that are independent of both stimuli and goals.
The second part of this thesis studies the possibility that eye movements give rise to predictions of the inputs they are about to bring into sight. To investigate this with electroencephalography (EEG), we first had to learn how to deal with the strong electrical artifacts produced by eye movements. Chapter 6 describes a taxonomy of such artifacts and the best ways to remove them. On this basis, we studied trans-saccadic predictions of visual content, presented in Chapter 7. The results were compatible with the production of error signals after a saccade-contingent change of a target stimulus. These error signals, coding the mismatch between trans-saccadic predictions and sensory inputs, depend on the reliability of the pre-saccadic input: the violation of predictions about veridical input (presented outside the blind spot) results in stronger error signals than when the pre-saccadic stimulus is only inferred (presented inside the blind spot). Thus, these results support the idea of active predictive coding, in which perception consists of the integration of predictions of future input with incoming sensory information.
In conclusion, to see is to act: We actively explore the visual environment. We actively select which area to explore based on various competing factors. And we make predictions about the sensory consequences of our actions.

Identifier: oai:union.ndltd.org:uni-osnabrueck.de/oai:repositorium.ub.uni-osnabrueck.de:urn:nbn:de:gbv:700-2016100515075
Date: 05 October 2016
Creators: Ossandón Dalgalarrando, José Pablo
Contributors: Prof. Dr. Peter König, Prof. Dr. Frank Jäkel, Dr. Tobias Heed
Source Sets: Universität Osnabrück
Language: English
Type: doc-type:doctoralThesis
Format: application/zip, application/pdf
Rights: Namensnennung-Nicht-kommerziell 3.0 Unported, http://creativecommons.org/licenses/by-nc/3.0/
