1

3D face recognition using multicomponent feature extraction from the nasal region and its environs

Gao, Jiangning January 2016 (has links)
This thesis is dedicated to extracting expression-robust features for 3D face recognition. The use of 3D imaging enables the extraction of discriminative features that can significantly improve recognition performance, due to the availability of facial surface information such as depth, surface normals and curvature. Expression-robust analysis using information from both depth and surface normals is investigated by dividing the main facial region into patches of different scales. The nasal region and adjoining parts of the cheeks are utilized because they are more consistent across different expressions and are hard to deliberately occlude. In addition, in comparison with other parts of the face, these regions have a high potential to produce discriminative features for recognition and to overcome pose variations. An overview and classification methodology of the widely used 3D face databases are first introduced to provide an appropriate reference for 3D face database selection. Using the FRGC and Bosphorus databases, a low-complexity pattern rejector for expression-robust 3D face recognition is proposed by matching curves on the nasal region and its environs, which results in a low-dimensional feature set of only 60 points. To extract discriminative features more locally, a novel multi-scale and multi-component local shape descriptor is further proposed, which achieves more competitive performance under both identification and verification scenarios. In contrast with much of the existing work on 3D face recognition, which considers captures obtained with laser scanners or structured light, this thesis also investigates applications to 3D captures reconstructed from lower-cost photometric stereo imaging systems, which have applications in real-world situations. To this end, the performance of the expression-robust face recognition algorithms developed for captures from laser scanners is further evaluated on the Photoface database, which contains naturalistic expression variations. To improve the recognition performance for all types of 3D captures, a universal landmarking algorithm is proposed that makes use of different components of the surface normals. Using facial profile signatures and thresholded surface normal maps, facial roll and yaw rotations are calibrated and five main landmarks are robustly detected on the well-aligned 3D nasal region. The landmarking results show that the detected landmarks demonstrate high within-class consistency and achieve good recognition performance under different expressions. This is also the first landmarking work developed specifically for 3D captures reconstructed from photometric stereo imaging systems.
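The abstract above describes features drawn from the depth map and surface normals of patches around the nasal region. As a rough, hypothetical illustration only (not the author's implementation), the Python sketch below estimates per-pixel surface normals of a range image by finite differences and crops a patch around an assumed nose-tip location; the array depth, the patch size and the nose-tip coordinates are all placeholders.

import numpy as np

def surface_normals(depth):
    """Estimate unit surface normals of a range image z = depth[row, col]
    using central finite differences (a common, simple approximation)."""
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    # The (unnormalised) normal of the surface (x, y, z(x, y)) is (-dz/dx, -dz/dy, 1).
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)])
    return normals / np.clip(np.linalg.norm(normals, axis=2, keepdims=True), 1e-8, None)

def nasal_patch(depth, nose_tip, half=40):
    """Crop a square depth patch centred on a (row, col) nose-tip estimate;
    the patch size is an arbitrary choice for illustration."""
    r, c = nose_tip
    return depth[max(r - half, 0):r + half, max(c - half, 0):c + half]

# Hypothetical usage with a synthetic range image standing in for a real capture.
depth = np.random.rand(256, 256)
normals = surface_normals(depth)                 # (256, 256, 3) unit normal vectors
patch = nasal_patch(depth, nose_tip=(128, 128))  # depth values around the assumed nose tip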
2

Using the 3D shape of the nose for biometric authentication

Emambakhsh, Mehryar January 2014 (has links)
This thesis is dedicated to exploring the potential of the 3D shape of the nasal region for face recognition. In comparison to other parts of the face, the nose has a number of distinctive features that make it attractive for recognition purposes. It is relatively stable over different facial expressions, easy to detect because of its salient convexity, and difficult to cover up intentionally without attracting suspicion. In addition, compared to other facial parts, such as the forehead, chin, mouth and eyes, the nose is not vulnerable to unintentional occlusions caused by scarves or hair. Prior to undertaking a thorough analysis of the discriminative features of the 3D nasal region, an overview of denoising algorithms and their impact on 3D face recognition algorithms is first provided. This analysis, which is one of the first to address this issue, evaluates the performance of 3D holistic algorithms when various denoising methods are applied. One important outcome of this evaluation is the determination of the optimal denoising parameters in terms of overall 3D face recognition performance. A novel algorithm is also proposed to learn the statistics of the noise generated by 3D laser scanners and then simulate it over the face point clouds. Using this process, the robustness of the denoising and 3D face recognition algorithms over various noise powers can be quantitatively evaluated. A new algorithm is proposed to find the nose tip in samples with various expressions and self-occlusions. Furthermore, novel applications of the nose region to align faces in 3D are provided through two pose correction methods. The algorithms are very consistent and robust against different expressions as well as partial and self-occlusions. The nose's discriminative strength for 3D face recognition is analysed using two approaches. The first creates its feature sets by applying nasal curves to the depth map. The second utilises a novel feature space based on histograms of the normal vectors of the responses of Gabor wavelets applied to the nasal region. To create the feature spaces, various triangular and spherical patches and nasal curves are employed, giving very high class separability. A genetic algorithm (GA) based feature selector is then used to make the feature space more robust against facial expressions. The basis of both algorithms is a highly consistent and accurate nasal region landmarking, which is quantitatively evaluated and compared with previous work. The recognition ranks provide the highest identification performance ever reported for the 3D nasal region. The results are not only higher than those of previous 3D nose recognition algorithms, but also better than, or very close to, recent results for whole 3D face recognition. The algorithms have been evaluated on three widely used 3D face datasets: FRGC, Bosphorus and UMB-DB.
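One of the feature spaces described above is built from histograms of the normal vectors of Gabor wavelet responses over the nasal region. The sketch below is a simplified stand-in for that idea, not the thesis's algorithm: it convolves a nasal depth patch with a few hand-built Gabor kernels, treats each response as a surface, and histograms the azimuth angles of its normals. The kernel size, wavelength, number of orientations and bin count are illustrative guesses.

import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, sigma=4.0, wavelength=8.0, size=21):
    """Real part of a Gabor kernel at orientation theta (parameters are illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def normal_angle_histogram(surface, bins=18):
    """Histogram of the azimuth angles of the normals of `surface`, treated as a height map."""
    dz_dy, dz_dx = np.gradient(surface)
    azimuth = np.arctan2(-dz_dy, -dz_dx)
    hist, _ = np.histogram(azimuth, bins=bins, range=(-np.pi, np.pi), density=True)
    return hist

def nasal_feature_vector(patch, orientations=4):
    """Concatenate normal-angle histograms of Gabor responses at several orientations."""
    feats = []
    for k in range(orientations):
        response = convolve(patch.astype(float), gabor_kernel(theta=k * np.pi / orientations))
        feats.append(normal_angle_histogram(response))
    return np.concatenate(feats)

# Hypothetical usage on a synthetic nasal depth patch.
feature = nasal_feature_vector(np.random.rand(80, 80))   # here a 4 x 18 = 72-dimensional descriptor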
3

Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools

Szeptycki, Przemyslaw 06 July 2011 (has links) (PDF)
This Ph.D. thesis work is dedicated to 3D facial surface analysis and processing, as well as to a newly proposed 3D face recognition modality based on mapping techniques. Facial surface processing and analysis is one of the most important steps in 3D face recognition algorithms. Automatic localization of anthropometric facial features also plays an important role in face localization, facial expression recognition, face registration, etc.; its automation is therefore a crucial step for 3D face processing algorithms. In this work we focus on precise and rotation-invariant landmark localization, where the landmarks are later used directly for face recognition. The landmarks are localized by combining local surface properties, expressed in terms of differential geometry tools, with a global generic facial model used for face validation. Since curvatures, which are differential geometry properties, are sensitive to surface noise, one of the main contributions of this thesis is a modification of the curvature calculation method. The modification incorporates the surface noise into the calculation and helps to control the smoothness of the curvatures. As a result, the main facial points can be localized reliably and precisely (100% nose tip localization at 8 mm precision) under the influence of rotations and surface noise. The modified curvature calculation method was also tested on face models of different resolutions, yielding stable curvature values. Finally, since curvature analysis produces many facial landmark candidates, whose validation is time consuming, a facial landmark localization method based on a learning technique is proposed. The learning technique rejects incorrect landmark candidates with high probability, thus accelerating landmark localization. Face recognition using 3D models is a relatively new subject, proposed to overcome the shortcomings of the 2D face recognition modality. However, 3D face recognition algorithms are generally more complicated. Additionally, since 3D face models describe facial surface geometry, they are more sensitive to facial expression changes. Our contribution is to reduce the dimensionality of the input data by mapping 3D facial models onto a 2D domain using non-rigid, conformal mapping techniques. Given 2D images that represent the facial models, all previously developed 2D face recognition algorithms can be used. In our work, conformal shape images of 3D facial surfaces were fed into 2D² PCA, achieving a rank-one recognition rate of more than 86% on the FRGC data set. The effectiveness of all the methods has been evaluated using the FRGC and Bosphorus datasets.
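The landmarking described above relies on curvatures computed from the facial surface with an explicit handle on noise. As a generic analogue, assuming a range-image representation rather than the thesis's exact formulation, the sketch below computes mean and Gaussian curvature of a depth map with Gaussian-derivative filters, where the smoothing scale sigma plays the role of the noise/smoothness parameter, and then keeps convex, elliptic points as nose-tip candidates; the thresholds and the sign convention are placeholders that depend on the orientation of the depth axis.

import numpy as np
from scipy.ndimage import gaussian_filter

def curvatures(depth, sigma=2.0):
    """Mean (H) and Gaussian (K) curvature of a range image z = depth[row, col].
    Derivatives are taken with Gaussian-derivative filters, so `sigma` acts as the
    smoothness/noise parameter (a generic analogue, not the thesis's exact method)."""
    z = depth.astype(float)
    zx  = gaussian_filter(z, sigma, order=(0, 1))   # dz/dx   (axis 1 = columns)
    zy  = gaussian_filter(z, sigma, order=(1, 0))   # dz/dy   (axis 0 = rows)
    zxx = gaussian_filter(z, sigma, order=(0, 2))
    zyy = gaussian_filter(z, sigma, order=(2, 0))
    zxy = gaussian_filter(z, sigma, order=(1, 1))
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return H, K

def nose_tip_candidates(H, K):
    """Keep elliptic, convex points as nose-tip candidates. The thresholds and the
    sign of H are placeholders; they depend on how the depth axis is oriented."""
    return np.argwhere((K > 1e-4) & (H < -1e-3))

# Hypothetical usage on a synthetic range image.
H, K = curvatures(np.random.rand(200, 200), sigma=3.0)
candidates = nose_tip_candidates(H, K)   # (row, col) indices of candidate points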
4

Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools / Traitement et analyse des modèles 2.5 de visage utilisant les outils de la géométrie différentielle pour la reconnaissance faciale basée sur l'appariement non rigide

Szeptycki, Przemyslaw 06 July 2011 (has links)
This Ph.D. thesis work is dedicated to 3D facial surface analysis and processing, as well as to a newly proposed 3D face recognition modality based on mapping techniques. Facial surface processing and analysis is one of the most important steps in 3D face recognition algorithms. Automatic localization of anthropometric facial features also plays an important role in face localization, facial expression recognition, face registration, etc.; its automation is therefore a crucial step for 3D face processing algorithms. In this work we focus on precise and rotation-invariant landmark localization, where the landmarks are later used directly for face recognition. The landmarks are localized by combining local surface properties, expressed in terms of differential geometry tools, with a global generic facial model used for face validation. Since curvatures, which are differential geometry properties, are sensitive to surface noise, one of the main contributions of this thesis is a modification of the curvature calculation method. The modification incorporates the surface noise into the calculation and helps to control the smoothness of the curvatures. As a result, the main facial points can be localized reliably and precisely (100% nose tip localization at 8 mm precision) under the influence of rotations and surface noise. The modified curvature calculation method was also tested on face models of different resolutions, yielding stable curvature values. Finally, since curvature analysis produces many facial landmark candidates, whose validation is time consuming, a facial landmark localization method based on a learning technique is proposed. The learning technique rejects incorrect landmark candidates with high probability, thus accelerating landmark localization. Face recognition using 3D models is a relatively new subject, proposed to overcome the shortcomings of the 2D face recognition modality. However, 3D face recognition algorithms are generally more complicated. Additionally, since 3D face models describe facial surface geometry, they are more sensitive to facial expression changes. Our contribution is to reduce the dimensionality of the input data by mapping 3D facial models onto a 2D domain using non-rigid, conformal mapping techniques. Given 2D images that represent the facial models, all previously developed 2D face recognition algorithms can be used. In our work, conformal shape images of 3D facial surfaces were fed into 2D² PCA, achieving a rank-one recognition rate of more than 86% on the FRGC data set. The effectiveness of all the methods has been evaluated using the FRGC and Bosphorus datasets.
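The recognition stage described above feeds 2D conformal shape images into 2D² PCA. The sketch below shows only the standard two-directional 2D PCA projection on a stack of such images, with nearest-neighbour matching by Frobenius distance; the image size, the numbers of retained row/column eigenvectors and the random data are placeholders, and the conformal mapping step itself is not shown.

import numpy as np

def two_directional_2dpca(images, d_rows=10, d_cols=10):
    """(2D)^2 PCA: learn column-space (Z) and row-space (X) projections from a
    stack of M shape images of shape (M, m, n)."""
    mean = images.mean(axis=0)
    centred = images - mean
    G = np.einsum('kij,kil->jl', centred, centred) / len(images)   # n x n row-direction scatter
    C = np.einsum('kij,klj->il', centred, centred) / len(images)   # m x m column-direction scatter
    _, X = np.linalg.eigh(G)                 # eigenvalues in ascending order
    _, Z = np.linalg.eigh(C)
    X = X[:, ::-1][:, :d_cols]               # top row-space eigenvectors  (n x d_cols)
    Z = Z[:, ::-1][:, :d_rows]               # top column-space eigenvectors (m x d_rows)
    return mean, Z, X

def project(image, mean, Z, X):
    """Compact feature matrix Z^T (A - mean) X of shape (d_rows, d_cols)."""
    return Z.T @ (image - mean) @ X

# Hypothetical usage: a gallery of conformal shape images and one probe.
gallery = np.random.rand(50, 64, 64)
mean, Z, X = two_directional_2dpca(gallery)
feats = np.stack([project(a, mean, Z, X) for a in gallery])
probe = project(np.random.rand(64, 64), mean, Z, X)
rank1 = np.argmin(np.linalg.norm(feats - probe, axis=(1, 2)))   # nearest gallery entry by Frobenius distance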
5

3D face analysis : landmarking, expression recognition and beyond

Zhao, Xi 13 September 2010 (has links) (PDF)
This Ph.D. thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Facial expression plays an important role in both verbal and non-verbal communication and in expressing emotions, so automatic facial expression recognition has various purposes and applications and, in particular, lies at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides prior knowledge of the location of face landmarks, which is required by many face analysis methods, such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, and finally to propose an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are as follows. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning of both the global variations in face landmark configuration and the local variations in texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum belief over all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric properties of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as on the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
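The BBN described above recognizes an expression or action unit by taking the state of the expression node with the maximum belief. The sketch below is a heavily simplified stand-in, not the thesis's network: each expression state gets a diagonal-Gaussian likelihood over a facial feature vector, and recognition returns the state with the largest posterior belief; the feature dimensionality and the labels are invented for illustration.

import numpy as np

class ExpressionBelief:
    """Toy belief computation over expression states: each state has a diagonal-Gaussian
    likelihood over a facial feature vector, and recognition returns the state with the
    maximum posterior belief. A heavily simplified stand-in for a Bayesian Belief Network."""

    def fit(self, features, labels):
        labels = np.asarray(labels)
        self.states = sorted(set(labels))
        self.means = np.stack([features[labels == s].mean(axis=0) for s in self.states])
        self.vars_ = np.stack([features[labels == s].var(axis=0) + 1e-6 for s in self.states])
        self.priors = np.array([(labels == s).mean() for s in self.states])

    def recognise(self, feature):
        log_lik = -0.5 * (((feature - self.means) ** 2) / self.vars_
                          + np.log(2 * np.pi * self.vars_)).sum(axis=1)
        beliefs = log_lik + np.log(self.priors)      # unnormalised log-posterior per state
        return self.states[int(np.argmax(beliefs))]

# Hypothetical usage with random stand-in features.
rng = np.random.default_rng(0)
model = ExpressionBelief()
model.fit(rng.normal(size=(60, 16)), ['happy', 'sad', 'surprise'] * 20)
print(model.recognise(rng.normal(size=16)))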
6

3D face analysis : landmarking, expression recognition and beyond / Reconnaissance de l'expression du visage

Zhao, Xi 13 September 2010 (has links)
This Ph.D. thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Facial expression plays an important role in both verbal and non-verbal communication and in expressing emotions, so automatic facial expression recognition has various purposes and applications and, in particular, lies at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides prior knowledge of the location of face landmarks, which is required by many face analysis methods, such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, and finally to propose an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are as follows. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning of both the global variations in face landmark configuration and the local variations in texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum belief over all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric properties of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as on the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
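The SFAM described above is a morphable partial face model learned with Principal Component Analysis, from which new partial-face instances can be generated by varying the model parameters. The sketch below shows that generic recipe on hypothetical training vectors that concatenate landmark coordinates with local texture/geometry descriptors; the vector layout, the number of modes and the parameter values are illustrative only.

import numpy as np

def fit_partial_face_model(samples, n_modes=5):
    """PCA over training vectors that concatenate landmark coordinates with local
    texture/geometry descriptors (one row per face), in the spirit of a morphable
    partial face model; the layout of `samples` is purely illustrative."""
    mean = samples.mean(axis=0)
    U, S, Vt = np.linalg.svd(samples - mean, full_matrices=False)
    modes = Vt[:n_modes]                                  # principal variation modes
    stddev = S[:n_modes] / np.sqrt(len(samples) - 1)      # standard deviation along each mode
    return mean, modes, stddev

def generate_instance(mean, modes, stddev, params):
    """Synthesise a new partial-face instance by varying the model parameters,
    expressed in standard deviations along each mode."""
    return mean + (np.asarray(params) * stddev) @ modes

# Hypothetical usage: 40 training faces, each a 90-dimensional vector
# (e.g. 15 landmarks x 3D coordinates plus a 45-dimensional local-geometry block).
train = np.random.rand(40, 90)
mean, modes, stddev = fit_partial_face_model(train)
instance = generate_instance(mean, modes, stddev, params=[1.0, -0.5, 0.0, 0.0, 2.0])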
