We humans are visual creatures, constantly extracting information from the world around us. The source of our ability to understand the visual world is an intricate arrangement of multiple areas in our brains: the visual system. It enables us to recognize our friends and family in diverse conditions, to focus our attention on important aspects of a scene, and to perform invariant object categorization on multiple levels of abstraction. Vision has been a focus of scientific interest for many decades, and yet our knowledge of the cortical mechanisms involved remains limited. Here I describe a series of experiments in which we investigated how the visual system robustly and efficiently extracts meaning from the environment. In particular, I focus on three aspects of object recognition: sampling the environment, visual invariance, and categorization and plasticity.
Starting with the selection of visual information, three eye-tracking experiments are described in which we investigate the interplay of overt visual attention and object recognition. We show that overt visual attention and object recognition exert a bidirectional influence on each other. Whereas initial patterns of overt visual attention causally affect the outcome of later recognition, briefly presented contextual information leads to substantial changes in attentional sampling behavior, which are best understood in terms of a shifting exploration-exploitation bias.
Following this, we turn to visual processing within the system and ask how invariant object recognition is accomplished despite large variation in retinal input. As an exemplary case, we focus on changes introduced by rotations in depth. Using a variety of techniques, ranging from fMRI to TMS and EEG, we show that viewpoint symmetry, i.e., selectivity for mirror-symmetric viewing angles, is a prevalent feature of visual processing across a wide range of higher-level visual regions. These findings jointly suggest that viewpoint symmetry constitutes a key computational step in achieving full viewpoint invariance.
On the next level of abstraction, we investigate how visual categories are represented at different levels of experience, from novice to expert. By combining training of novel visual categories with psychophysical measures, we demonstrate a change in the underlying type of category representation. We then combine the training paradigm with electrophysiological measurements. In line with our behavioral results, these data reveal a spatiotemporal shift in category selectivity: from late and frontal to early occipitotemporal activity. These results suggest that novel and recurring categories rely on partially separate cortical networks, allowing the brain to balance robust, fast recognition with considerable flexibility and plasticity.
The results of all experiments presented here are unified by the concept of a system that has evolved efficient mechanisms for robust performance in a large variety of conditions. Using dynamic sampling strategies, computational shortcuts, and a division of labor, the visual system is optimally equipped to support higher-level cognitive function in a complex and constantly changing environment.
Identifier | oai:union.ndltd.org:uni-osnabrueck.de/oai:repositorium.ub.uni-osnabrueck.de:urn:nbn:de:gbv:700-2015051213203 |
Date | 12 May 2015 |
Creators | Kietzmann, Tim Christian |
Contributors | Prof. Dr. Peter König, Prof. Dr. Frank Tong, Prof. Dr. Andreas K. Engel, Prof. Dr. Gordon Pipa |
Source Sets | Universität Osnabrück |
Language | English |
Detected Language | English |
Type | doc-type:doctoralThesis |
Format | application/pdf, application/zip |
Rights | Namensnennung-NichtKommerziell-KeineBearbeitung 3.0 Unported, http://creativecommons.org/licenses/by-nc-nd/3.0/ |