1 |
Robust face recognition based on three dimensional data. Huang, Di, 09 September 2011 (has links) (PDF)
The face is one of the best biometrics for person identification and verification applications, because it is natural, non-intrusive, and socially well accepted. Unfortunately, all human faces are similar to each other and hence offer low distinctiveness compared with other biometrics, e.g., fingerprints and irises. Furthermore, when employing facial texture images, intra-class variations due to factors as diverse as illumination and pose changes are usually greater than inter-class ones, making 2D face recognition far from reliable in real-world conditions. Recently, 3D face data have been extensively investigated by the research community to deal with the unsolved issues in 2D face recognition, i.e., illumination and pose changes. This Ph.D. thesis is dedicated to robust face recognition based on three-dimensional data, covering 3D shape-only face recognition, textured 3D face recognition, and asymmetric 3D-2D face recognition.

In 3D shape-only face recognition, since 3D face data, such as facial point clouds and facial scans, are theoretically insensitive to lighting variations and generally allow easy pose correction using an ICP-based registration step, the key problem lies in how to represent 3D facial surfaces accurately and achieve matching that is robust to facial expression changes. In this thesis, we design an effective and efficient approach to 3D shape-only face recognition. For facial description, we propose a novel geometric representation based on extended Local Binary Pattern (eLBP) depth maps, which comprehensively describes local geometry changes of 3D facial surfaces; a SIFT-based local matching process, further improved by facial component and configuration constraints, is then proposed to associate keypoints between corresponding facial representations of different facial scans belonging to the same subject. Evaluated on the FRGC v2.0 and Gavab databases, the proposed approach proves its effectiveness. Furthermore, thanks to the use of local matching, it does not require registration for nearly frontal facial scans and only needs a coarse alignment for those with severe pose variations, in contrast to most related work, which relies on a time-consuming fine registration step.

Considering that most current 3D imaging systems deliver 3D face models along with their aligned texture counterpart, a major trend in the literature is to adopt both the 3D shape and 2D texture modalities, arguing that the joint use of both cues generally provides more accurate and robust performance than either modality alone. Two important factors in this setting are the facial representation of both types of data and the fusion of results. In this thesis, we propose a biological vision-based facial representation, named Oriented Gradient Maps (OGMs), which can be applied to both facial range and texture images. The OGMs simulate the response of complex neurons to gradient information within a given neighborhood and have the properties of being highly distinctive and robust to affine illumination and geometric transformations. The previously proposed matching process is then adopted to calculate similarity measurements between probe and gallery faces. Because the biological vision-based facial representation produces an OGM for each quantized orientation of facial range and texture images, we finally use a score-level fusion strategy that optimizes weights by a genetic algorithm in a learning process.
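To make the OGM idea above concrete, the following is a minimal sketch of computing per-orientation gradient maps from a range or texture image; it assumes NumPy and SciPy, and the number of quantized orientations, the hard bin assignment, and the Gaussian smoothing that stands in for the complex-neuron pooling are illustrative assumptions rather than the exact formulation used in the thesis.

```python
import numpy as np
from scipy import ndimage

def oriented_gradient_maps(image, n_orientations=8, sigma=2.0):
    """Sketch of per-orientation gradient maps for a range or texture image.

    Each map keeps the gradient magnitude at pixels whose gradient orientation
    falls into one of `n_orientations` bins, then pools it with a Gaussian, a
    crude stand-in for the complex-neuron response over a neighborhood.
    """
    image = image.astype(np.float64)
    gy = ndimage.sobel(image, axis=0)          # vertical gradient
    gx = ndimage.sobel(image, axis=1)          # horizontal gradient
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # in [0, 2*pi)

    bin_width = 2 * np.pi / n_orientations
    maps = []
    for k in range(n_orientations):
        # hard-assign each pixel's magnitude to its orientation bin
        in_bin = (orientation >= k * bin_width) & (orientation < (k + 1) * bin_width)
        raw_map = np.where(in_bin, magnitude, 0.0)
        # Gaussian pooling over the local neighborhood
        maps.append(ndimage.gaussian_filter(raw_map, sigma=sigma))
    return maps   # one smoothed map per quantized orientation

# Example: a synthetic 100x100 "range image" with a ramp plus noise
depth = np.linspace(0, 1, 100)[None, :] * np.ones((100, 1)) \
        + 0.01 * np.random.default_rng(0).random((100, 100))
ogms = oriented_gradient_maps(depth)
print(len(ogms), ogms[0].shape)   # 8 maps, each 100x100
```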
The experimental results achieved on the FRGC v2.0 and 3DTEC datasets demonstrate the effectiveness of the proposed biological vision-based facial description and the optimized weighted sum fusion. [...]
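As a rough illustration of that weighted-sum fusion stage, the sketch below searches for per-matcher weights with a tiny genetic algorithm; the toy score matrices, the fitness definition (rank-1 accuracy on training data), and all GA settings are assumptions for illustration, not the actual learning protocol of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank1_accuracy(weights, scores, labels):
    """Fitness: rank-1 accuracy of the weighted-sum fusion.

    scores: (n_probes, n_gallery, n_matchers) similarity scores,
    labels: (n_probes,) index of the correct gallery subject.
    """
    fused = scores @ weights                      # (n_probes, n_gallery)
    return float(np.mean(np.argmax(fused, axis=1) == labels))

def normalize(w):
    w = np.clip(w, 1e-6, None)
    return w / w.sum()

def ga_optimize_weights(scores, labels, pop_size=30, generations=50, mut_sigma=0.1):
    """Minimal genetic algorithm over normalized weight vectors."""
    n_matchers = scores.shape[2]
    population = [normalize(rng.random(n_matchers)) for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([rank1_accuracy(w, scores, labels) for w in population])
        order = np.argsort(fitness)[::-1]
        parents = [population[i] for i in order[:pop_size // 2]]    # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), size=2, replace=False)
            alpha = rng.random()
            child = alpha * parents[a] + (1 - alpha) * parents[b]   # blend crossover
            child = child + rng.normal(0.0, mut_sigma, n_matchers)  # Gaussian mutation
            children.append(normalize(child))
        population = parents + children
    fitness = np.array([rank1_accuracy(w, scores, labels) for w in population])
    return population[int(np.argmax(fitness))]

# Toy data: 3 matchers, 50 probes, 20 gallery subjects; matcher 0 is made the most reliable.
labels = rng.integers(0, 20, size=50)
scores = rng.random((50, 20, 3))
scores[np.arange(50), labels, 0] += 0.8
best_w = ga_optimize_weights(scores, labels)
print("learned weights:", np.round(best_w, 3))
```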
|
2 |
Robust face recognition based on three dimensional data / La reconnaissance faciale robuste utilisant les données trois dimensions. Huang, Di, 09 September 2011 (has links)
Face recognition is one of the best biometric modalities for applications related to person identification or authentication. Indeed, it is the modality used by humans; it is non-intrusive and socially well accepted. Unfortunately, human faces are similar to one another and therefore offer low distinctiveness compared with other biometric modalities, such as fingerprints and the iris. Moreover, when dealing with facial texture images, intra-class variations, due to factors as diverse as changes in lighting conditions and in pose, are generally greater than inter-class variations, which makes 2D face recognition unreliable in real-world conditions. Recently, 3D representations of faces have been widely studied by the scientific community to overcome the unresolved problems of 2D face recognition, notably those caused by changes in illumination and pose. This thesis is devoted to robust face recognition using 3D facial data, covering 3D face recognition, textured 3D face recognition, and asymmetric 3D-2D face recognition.

3D face recognition, which uses 3D geometric information represented as a point cloud or a range image, is theoretically unaffected by changes in illumination conditions and can easily correct pose changes by applying a rigid registration approach such as ICP. The main challenge lies in representing the 3D facial surface accurately and in matching robustly under facial expression changes. In this thesis, we design an effective and efficient approach to 3D face recognition. For facial description, we propose a geometric representation based on extended Local Binary Pattern (eLBP) maps, which precisely describe the variations of the local geometry of the 3D facial surface, while a SIFT-based local matching step, combined with facial component information and configuration constraints, associates keypoints of the same individual across the different representations of his or her face. Evaluated on the FRGC v2.0 and Gavab DB databases, the proposed approach proves its effectiveness. Moreover, unlike most approaches that require a precise and costly alignment step, our approach, thanks to the use of local matching, does not require enrollment under precisely frontal pose conditions and makes do with only a coarse alignment.

Considering that most current 3D imaging systems allow the simultaneous capture of 3D face models and of their texture, a major trend in the scientific literature is to adopt both the 3D modality and the 2D texture modality. It is argued that the joint use of these two types of information generally leads to more accurate and more robust results than those obtained with either of them separately. Nevertheless, the two key factors for success are the bimodal representation of the face and the fusion of the results obtained from each modality.
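For concreteness, here is a minimal sketch of the basic LBP operator underlying the eLBP depth-map description mentioned above; the 8-neighbour sampling, the border handling, and the omission of the extra layers that the extended variant adds to encode the size of the depth differences are simplifications assumed for illustration.

```python
import numpy as np

def basic_lbp_depth(depth):
    """Basic 8-neighbour LBP codes for the inner region of a depth map.

    Each pixel is compared with its 8 neighbours; a neighbour with a depth
    value at least as large as the centre contributes a 1 bit. The extended
    LBP (eLBP) used in the thesis additionally encodes the magnitude of the
    depth differences in extra layers, which is omitted in this sketch.
    """
    d = depth.astype(np.float64)
    centre = d[1:-1, 1:-1]
    # offsets of the 8 neighbours, enumerated clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = d[1 + dy:d.shape[0] - 1 + dy, 1 + dx:d.shape[1] - 1 + dx]
        codes |= (neighbour >= centre).astype(np.int32) << bit
    return codes.astype(np.uint8)   # one 8-bit code per inner pixel

# Example on a small synthetic depth patch
patch = np.random.default_rng(1).random((6, 6))
print(basic_lbp_depth(patch))
```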
In this thesis, we propose a biologically inspired facial representation, called Oriented Gradient Maps (OGMs), which can be applied to both the 3D modality and the 2D texture modality. The OGMs simulate the response of complex neurons to gradient information within a given neighborhood and have the property of being highly distinctive and robust to affine illumination and geometric transformations. The previously proposed matching process is then adopted to calculate similarity measurements between probe and gallery faces. Because this biological vision-based facial representation produces an OGM for each quantized orientation of facial range and texture images, we finally use a score-level fusion strategy that optimizes the weights by a genetic algorithm in a learning process. The experimental results achieved on the FRGC v2.0 and 3DTEC datasets demonstrate the effectiveness of the proposed biological vision-based facial description and the optimized weighted sum fusion. [...]
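The local matching step referred to above can be pictured along the following lines. This sketch uses OpenCV's SIFT detector and a brute-force matcher with Lowe's ratio test on two representation images, and leaves out the facial component and configuration constraints; the library choice, the parameters, and these simplifications are assumptions for illustration rather than the exact pipeline of the thesis.

```python
import cv2
import numpy as np

def match_representations(img_a, img_b, ratio=0.75):
    """Match SIFT keypoints between two facial representation images.

    img_a, img_b: 8-bit single-channel images (e.g., an eLBP layer or an OGM
    rescaled to [0, 255]). Returns the number of matches surviving Lowe's
    ratio test, usable as a crude similarity measurement. The component and
    configuration constraints described in the thesis are not implemented here.
    """
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # keep a match only if its distance is clearly smaller than the runner-up's
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# Toy usage with two random patches (real inputs would be OGMs or eLBP maps)
rng = np.random.default_rng(2)
a = (rng.random((128, 128)) * 255).astype(np.uint8)
b = (rng.random((128, 128)) * 255).astype(np.uint8)
print("matches:", match_representations(a, b))
```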
|