
Modeling Prosopagnosia

Prosopagnosia is a profound deficit in facial identification that is either acquired through brain damage or present from birth, i.e. congenital. Normally, faces and objects are processed in different parts of the inferotemporal cortex by distinct cortical systems for face and object recognition, an association of function and location. Accordingly, in acquired prosopagnosia locally restricted damage can lead to specific deficits in face recognition. In congenital prosopagnosia, however, faces and objects are likewise processed in spatially separated areas. The face recognition deficit in congenital prosopagnosia therefore cannot be explained by the association of function and location alone. Rather, this observation raises the question of why and how such an association evolves at all.
So far, no quantitative or computational model of congenital prosopagnosia has been proposed, and models of acquired prosopagnosia have focused on changes in information processing after inflicting some kind of "damage" on the system. To model congenital prosopagnosia, it is thus necessary to understand how face processing in congenital prosopagnosia differs from normal face processing, how differences in neuroanatomical development can give rise to differences in processing, and, not least, why facial identification requires a specialized cortical processing system in the first place.
In this work, a computational model of congenital prosopagnosia is derived from formal considerations, implemented in artificial neural network models of facial information encoding, and tested in experiments with prosopagnosic subjects. The main hypothesis is that the deficit in congenital prosopagnosia is caused by a failure to obtain adequate descriptions of individual faces: a predisposition towards reduced structural connectivity in visual cortical areas enforces descriptions of visual stimuli that lack the detail necessary to distinguish a specific exemplar from its population, i.e. to achieve successful identification.
Formally, recognition tasks can be divided into identification tasks (separating a single individual from its sampling population) and classification tasks (partitioning the full object space into distinct classes). It is shown that high dimensionality in the sensory representation facilitates individuation ("blessing of dimensionality") but complicates the estimation of object-class representations ("curse of dimensionality"). The dimensionality of representations is then studied explicitly in a neural network model of facial encoding. Whereas optimal encoding entails a "holistic" (high-dimensional) representation, a constraint on the network connectivity induces a decomposition of faces into localized, "featural" (low-dimensional) parts. In an experimental validation, the perceptual deficit in congenital prosopagnosia was limited to holistic face manipulations and did not extend to featural manipulations. Finally, an extensive and detailed investigation of face and object recognition in congenital prosopagnosia enabled a better behavioral characterization and the identification of subtypes of the deficit.
In contrast to previous models of prosopagnosia, the developmental aspect of congenital prosopagnosia is here incorporated explicitly into the model; quantitative arguments for a deficit that is task-specific (identification), but not necessarily domain-specific (faces), are provided for synthetic as well as real data (face images); and the model is validated empirically in experiments with prosopagnosic subjects.

Identifier: oai:union.ndltd.org:DRESDEN/oai:qucosa.de:bsz:15-qucosa-39600
Date: 20 September 2010
Creators: Stollhoff, Rainer
Contributors: Universität Leipzig, Fakultät für Mathematik und Informatik, Prof. Dr. Jürgen Jost, Prof. Dr. Christoph von der Malsburg
Publisher: Universitätsbibliothek Leipzig
Source Sets: Hochschulschriftenserver (HSSS) der SLUB Dresden
Language: English
Detected Language: English
Type: doc-type:doctoralThesis
Format: application/pdf
