11

Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools / Traitement et analyse des modèles 2.5D de visage utilisant les outils de la géométrie différentielle pour la reconnaissance faciale basée sur l'appariement non rigide

Szeptycki, Przemyslaw 06 July 2011 (has links)
This Ph.D. thesis work is dedicated to 3D facial surface analysis and processing, as well as to the newly proposed 3D face recognition modality based on mapping techniques. Facial surface processing and analysis is one of the most important steps in 3D face recognition algorithms.
Automatic localization of anthropometric facial features also plays an important role in face localization, facial expression recognition, face registration, etc.; it is therefore a crucial step in 3D face processing algorithms. In this work we focused on precise and rotation-invariant landmark localization, where the landmarks are later used directly for face recognition. The landmarks are localized by combining local surface properties, expressed in terms of differential geometry tools, with a global generic face model used for validation. Since curvatures, which are differential geometry properties, are sensitive to surface noise, one of the main contributions of this thesis is a modification of the curvature calculation method. The modification incorporates the surface noise into the calculation and helps to control the smoothness of the curvatures. As a result, the main facial points can be localized reliably and precisely (for example, 100% nose tip localization within an 8 mm error bound) under rotations and surface noise. The modified curvature calculation method was also tested at different face model resolutions, yielding stable curvature values. Finally, since curvature analysis produces many facial landmark candidates whose validation is time consuming, a learning-based facial landmark localization technique was proposed. The learning technique rejects incorrect landmark candidates early with high confidence, thus accelerating landmark localization. Face recognition using 3D models is a relatively new subject, proposed to overcome the shortcomings of the 2D face recognition modality. However, 3D face recognition algorithms are generally more complex. Additionally, since 3D face models describe facial surface geometry, they are more sensitive to facial expression changes than 2D texture images. Our contribution is to reduce the dimensionality of the input data by mapping 3D facial models onto the 2D domain using non-rigid, conformal mapping techniques. Given 2D images that represent the facial models, all previously developed 2D face recognition algorithms can be applied. In our work, conformal shape images of 3D facial surfaces were fed into the 2D² PCA algorithm, achieving a rank-one recognition rate of more than 86% on the FRGC data set. The effectiveness of all the methods has been evaluated on the FRGC and Bosphorus datasets.
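
To make the noise-aware curvature step concrete, here is a minimal sketch, not the thesis' exact formulation: it assumes the scan is given as a range image z(x, y), uses Gaussian derivatives from SciPy with the smoothing scale sigma standing in for the noise-dependent term, and takes the nose-tip candidate to be the strongest convex elliptical point.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def curvatures_from_depth(z, sigma=2.0):
    """Mean and Gaussian curvature of a range image z(x, y).

    `sigma` plays the role of the noise-dependent smoothing scale:
    noisier scans get a larger sigma so the curvature field stays stable.
    """
    # Smoothed partial derivatives of the Monge patch (x, y, z(x, y)).
    zx  = gaussian_filter(z, sigma, order=(0, 1))
    zy  = gaussian_filter(z, sigma, order=(1, 0))
    zxx = gaussian_filter(z, sigma, order=(0, 2))
    zyy = gaussian_filter(z, sigma, order=(2, 0))
    zxy = gaussian_filter(z, sigma, order=(1, 1))

    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2                      # Gaussian curvature
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy
         + (1 + zx**2) * zyy) / (2 * denom**1.5)             # mean curvature
    return K, H

def nose_tip_candidate(z, sigma=2.0):
    """Pick the strongest convex elliptical point (here K > 0 and H < 0;
    the actual sign convention depends on how the scanner orients the
    depth axis, so treat this test as illustrative)."""
    K, H = curvatures_from_depth(z, sigma)
    score = np.where((K > 0) & (H < 0), K, -np.inf)
    return np.unravel_index(np.argmax(score), z.shape)
```
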
12

Towards the development of an efficient integrated 3D face recognition system: enhanced face recognition based on techniques relating to curvature analysis, gender classification and facial expressions

Han, Xia January 2011 (has links)
The purpose of this research was to enhance methods towards the development of an efficient three-dimensional face recognition system. More specifically, one of our aims was to investigate how the curvature of diagonal profiles extracted from 3D facial geometry models can help the neutral face recognition process. Another aim was to use a gender classifier on the 3D facial geometry in order to reduce the search space of the database on which facial recognition is performed. As identified by the face recognition research community, 3D facial geometry with facial expression poses considerable challenges for face recognition. Thus, one aim of this study was to investigate the effects of the curvature-based method on face recognition under expression variations. Another aim was to develop techniques that can discriminate both expression-sensitive and expression-insensitive regions for face recognition based on non-neutral face geometry models. In the case of neutral face recognition, we developed a gender classification method using support vector machines based on measurements of the area and volume of selected regions of the face. This method reduces the search range of the database for a given image and hence reduces the computational time. Subsequently, in the characterisation of the face images, a minimum feature set of diagonal profiles, which we call T-shape profiles, containing diacritic information was determined and extracted to characterise face models. We then used a method based on computing the curvatures of selected facial regions to describe this feature set. In addition to neutral face recognition, to address data with facial expressions, the curvature-based T-shape profiles were first employed and investigated for this purpose. The feature sets of the expression-invariant and expression-variant regions were determined respectively and described by geodesic distances and Euclidean distances. Using regression models, the correlations between expression and neutral feature sets were identified. This enabled us to discriminate expression-variant features, yielding a gain in the face recognition rate. The results of the study indicate that the proposed curvature-based recognition, 3D gender classification of facial geometry and analysis of facial expressions are capable of performing face recognition using a minimum set of features, improving efficiency and computation time.
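
A rough sketch of the gender-based pruning idea follows; the area/volume feature layout, the placeholder training data and the scikit-learn pipeline are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical feature vectors: area and volume measurements of selected
# facial regions, one row per training scan (placeholder values).
X_train = np.random.rand(200, 6)
y_train = np.random.randint(0, 2, 200)    # 0 = female, 1 = male (placeholder)

gender_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
gender_clf.fit(X_train, y_train)

def candidate_gallery(probe_features, gallery_features, gallery_genders):
    """Keep only the gallery entries whose stored gender matches the gender
    predicted for the probe, shrinking the search space before the more
    expensive curvature-based matching step."""
    g = gender_clf.predict(probe_features.reshape(1, -1))[0]
    keep = [i for i, gg in enumerate(gallery_genders) if gg == g]
    return gallery_features[keep], keep
```
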
13

Using the 3D shape of the nose for biometric authentication

Emambakhsh, Mehryar January 2014 (has links)
This thesis is dedicated to exploring the potential of the 3D shape of the nasal region for face recognition. In comparison to other parts of the face, the nose has a number of distinctive features that make it attractive for recognition purposes. It is relatively stable across different facial expressions, easy to detect because of its salient convexity, and difficult to cover up intentionally without attracting suspicion. In addition, compared to other facial parts, such as the forehead, chin, mouth and eyes, the nose is not vulnerable to unintentional occlusions caused by scarves or hair. Prior to undertaking a thorough analysis of the discriminative features of the 3D nasal region, an overview of denoising algorithms and their impact on 3D face recognition algorithms is first provided. This analysis, one of the first to address the issue, evaluates the performance of holistic 3D algorithms when various denoising methods are applied. One important outcome of this evaluation is the determination of the optimal denoising parameters in terms of overall 3D face recognition performance. A novel algorithm is also proposed to learn the statistics of the noise generated by 3D laser scanners and then simulate it over face point clouds. Using this process, the robustness of denoising and 3D face recognition algorithms under various noise powers can be quantitatively evaluated. A new algorithm is proposed to find the nose tip across various expressions and self-occluded samples. Furthermore, novel applications of the nose region to align faces in 3D are provided through two pose correction methods. The algorithms are very consistent and robust against different expressions and against partial and self-occlusions. The nose's discriminative strength for 3D face recognition is analysed using two approaches. The first creates its feature sets by applying nasal curves to the depth map. The second utilises a novel feature space based on histograms of normal vectors of the response of Gabor wavelets applied to the nasal region. To create the feature spaces, various triangular and spherical patches and nasal curves are employed, giving very high class separability. A genetic algorithm (GA) based feature selector is then used to make the feature space more robust against facial expressions. The basis of both algorithms is a highly consistent and accurate nasal region landmarking, which is quantitatively evaluated and compared with previous work. The recognition ranks provide the highest identification performance reported for the 3D nasal region. The results are not only higher than those of previous 3D nose recognition algorithms, but also better than or very close to recent results for whole-face 3D recognition. The algorithms have been evaluated on three widely used 3D face datasets: FRGC, Bosphorus and UMB-DB.
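
The feature-space construction can be pictured with a much-simplified sketch: it computes per-pixel normals of a nasal depth patch and turns their orientations into histograms, whereas the thesis applies Gabor wavelets first and works over triangular and spherical patches. Everything below is an assumption-level illustration.

```python
import numpy as np

def unit_normals(z):
    """Per-pixel surface normals of a nasal depth patch z(x, y)."""
    zy, zx = np.gradient(z)                       # partial derivatives
    n = np.dstack([-zx, -zy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def normal_histogram_feature(z, bins=8):
    """Concatenated histograms of the normals' azimuth and elevation
    angles -- a stand-in for the thesis' histograms, which are computed
    on the Gabor-filtered nasal surface."""
    n = unit_normals(z)
    azimuth   = np.arctan2(n[..., 1], n[..., 0])
    elevation = np.arccos(np.clip(n[..., 2], -1.0, 1.0))
    h1, _ = np.histogram(azimuth,   bins=bins, range=(-np.pi, np.pi), density=True)
    h2, _ = np.histogram(elevation, bins=bins, range=(0.0, np.pi / 2), density=True)
    return np.concatenate([h1, h2])
```
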
14

3D Face Reconstruction Using Stereo Vision

Dikmen, Mehmet 01 September 2006 (has links) (PDF)
3D face modeling is currently a popular area in Computer Graphics and Computer Vision. Many techniques have been introduced for this purpose, such as using one or more cameras, 3D scanners, and many other systems of sophisticated hardware with related software. The main goal, however, is to find a good balance between visual realism and the cost of the system. In this thesis, reconstruction of a 3D human face from a pair of stereo cameras is studied. Unlike many other systems, facial feature points are obtained automatically from two photographs with the help of a dot pattern projected onto the subject's face. Using the projected pattern is seen to provide enough feature points to derive the 3D face roughly. These points are then used to fit a generic face mesh for a more realistic model. To cover this 3D model, a single texture image is generated from the initial stereo photographs.
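
The triangulation step at the heart of such a stereo pipeline can be sketched with OpenCV; the function below is a generic illustration assuming calibrated 3x4 projection matrices and already-matched dot centres, not the thesis' own code.

```python
import numpy as np
import cv2

def reconstruct_dots(P_left, P_right, pts_left, pts_right):
    """Triangulate matched dot-pattern points seen by two calibrated cameras.

    P_left, P_right : 3x4 camera projection matrices from stereo calibration
    pts_left/right  : Nx2 arrays of matched dot centres in pixel coordinates
    Returns an Nx3 array of 3D points, the rough facial point cloud that a
    generic face mesh can later be fitted to.
    """
    P_l = np.asarray(P_left, dtype=np.float32)
    P_r = np.asarray(P_right, dtype=np.float32)
    x_l = np.asarray(pts_left, dtype=np.float32).T    # 2 x N
    x_r = np.asarray(pts_right, dtype=np.float32).T   # 2 x N
    X_h = cv2.triangulatePoints(P_l, P_r, x_l, x_r)   # 4 x N homogeneous
    return (X_h[:3] / X_h[3]).T
```
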
15

Texture Mapping By Multi-image Blending For 3D Face Models

Bayar, Hakan 01 December 2007 (has links) (PDF)
Computer interfaces have shifted to 3D graphics environments because of the large number of applications, ranging from scientific visualisation to entertainment. To enhance the realism of 3D models, an established rendering technique, texture mapping, is used. In computer vision, one way to generate this texture is to combine extracted parts of multiple images of real objects, and this is the topic studied in this thesis. While the 3D face model is obtained using a 3D scanner, the texture to cover the model is constructed from multiple images. After marking control points on the images and on the 3D face model, a texture image covering the 3D face model is generated. Moreover, the effects of some features of OpenGL, a graphics library, on the texture-covered 3D face model are studied.
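
Per-texel blending of colour samples taken from several photographs can be sketched as a weighted average; the cosine-style weights and the helper name below are illustrative assumptions rather than the blending rule used in the thesis.

```python
import numpy as np

def blend_texel(colors, weights):
    """Blend the colour samples a single texel receives from several
    photographs.  `weights` could, for example, be the cosine between the
    surface normal and each camera's viewing direction (clamped to zero),
    so frontal views dominate and oblique views contribute little."""
    colors  = np.asarray(colors, dtype=np.float64)   # k x 3 RGB samples
    weights = np.clip(np.asarray(weights, dtype=np.float64), 0.0, None)
    if weights.sum() == 0:
        return colors.mean(axis=0)
    return (weights[:, None] * colors).sum(axis=0) / weights.sum()

# e.g. three photographs contribute to the same texel:
print(blend_texel([[200, 150, 120], [190, 140, 115], [120, 90, 80]],
                  weights=[0.9, 0.7, 0.1]))
```
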
16

Comparison Of 3D Facial Anchor Point Localization Methods

Yagcioglu, Mustafa 01 June 2008 (has links) (PDF)
Human identification systems are commonly used for security purposes. Most of them are based on ID cards. However, using an ID card for identification may not be safe enough, since people may have no protection against theft. Another solution to the identification problem is to use the iris or fingerprints. However, systems based on the iris or fingerprints require close interaction with the identification device. Identifying someone from a photograph, which can be called face recognition, overcomes these problems. Common face recognition systems are based on 2D image recognition, but the success rates of these methods depend strongly on the environment: variations in brightness and pose and complex backgrounds are the main problems for 2D image recognition systems. At this point, three-dimensional face recognition techniques gain importance. Although many methods have been developed for 3D face recognition, most of them assume that the face is not rotated and that there are no interfering elements (e.g. beard, moustache, hair, hat, or eyeglasses) on the face. However, identification needs to be performed in spite of these. A basic step for face recognition is the determination of anchor points (e.g. nose tip, inner eye points). In this study, the goal is to implement four previously proposed face recognition methods based on anchor point detection ("Multimodal Facial Feature Extraction for Automatic 3D Face Recognition", "Automatic Feature Extraction for Multiview 3D Face Recognition", "Multiple Nose Region Matching for 3D Face Recognition under Varying Facial Expression", and "3D face detection using curvature analysis"), to compare their success rates on rotated and occluded images, and finally to propose improvements on these methods.
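
One common curvature-analysis ingredient in such anchor-point methods is the shape index; the sketch below is a generic illustration (with placeholder thresholds and an unverified sign convention), not the implementation of any of the four compared methods.

```python
import numpy as np

def shape_index(H, K):
    """Shape index from mean (H) and Gaussian (K) curvature maps.

    S lies in [-1, 1]; caps/domes (nose-tip-like points) and cups/pits
    (inner-eye-corner-like points) sit near the two extremes.  Sign
    conventions differ between papers, so the thresholds below are
    placeholders rather than the values used by the compared methods.
    """
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    k1, k2 = H + disc, H - disc                 # principal curvatures, k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def candidate_anchor_masks(H, K, cap_thr=0.8, cup_thr=-0.8):
    """Boolean masks of points worth testing as anchor-point candidates."""
    s = shape_index(H, K)
    return {"nose_tip_like": s >= cap_thr, "eye_corner_like": s <= cup_thr}
```
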
17

3D Face Recognition With Local Shape Descriptors

Inan, Tolga 01 September 2011 (has links) (PDF)
This thesis presents two approaches to three-dimensional face recognition. In the first approach, a generic face model is fitted to the human face, and local shape descriptors are located at the nodes of the generic model mesh. The discriminative local shape descriptors among these nodes are selected and fed as input into the face recognition system. In the second approach, local shape descriptors are calculated at points uniformly distributed across the face; those that are discriminative for the recognition process are selected and used for three-dimensional face recognition. Both approaches are tested with the widely accepted FRGC v2.0 database and experiment protocol. The reported results are better than those of state-of-the-art systems. Recognition performance for neutral and non-neutral faces is also reported.
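
The selection of discriminative descriptors can be illustrated with a simple Fisher-score ranking; this is one plausible criterion chosen for the sketch, not necessarily the one used in the thesis.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score of each descriptor dimension: between-class variance
    over within-class variance, computed from training descriptors X
    (samples x dimensions) and identity labels y."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / np.maximum(den, 1e-12)

def select_descriptors(X, y, keep=100):
    """Indices of the `keep` most discriminative descriptor dimensions."""
    return np.argsort(fisher_scores(X, y))[::-1][:keep]
```
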
18

Modelling facial action units using partial differential equations

Ismail, Nur Baini Binti January 2015 (has links)
This thesis discusses a novel method for modelling facial action units. It presents a facial action unit model based on boundary value problems for accurate representation of human facial expressions in three dimensions. In particular, a solution of a fourth-order elliptic Partial Differential Equation (PDE) subject to suitable boundary conditions is utilised, where the chosen boundary curves are based on muscle movements defined by the Facial Action Coding System (FACS). The study involved three stages: modelling faces, manipulating faces, and application to simple facial animation. In the first stage, the PDE method is used to model and generate a smooth 3D face; the PDE formulation, which relies on small sets of parameters, contributes to the efficiency of the face representation. In the manipulation stage, a generic PDE face with a neutral expression is manipulated into a face with an expression using PDE descriptors that uniquely represent an action unit. Combining these PDE descriptors yields a generic PDE face carrying an expression, and four basic expressions were successfully modelled in this way: happy, sad, fear and disgust. An example application is given using a simple animation technique called blendshapes, which animates the basic expressions from the generic PDE face.
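
The blendshape step mentioned at the end can be sketched directly: the animated face is the neutral PDE face plus a weighted sum of per-expression vertex offsets. Array shapes and names below are illustrative assumptions.

```python
import numpy as np

def blend_expression(neutral, deltas, weights):
    """Classic blendshape rig.

    neutral : (V, 3) vertex positions of the neutral PDE face
    deltas  : dict of (V, 3) offsets, one per expression target
              (expression vertices minus neutral vertices)
    weights : dict of blend weights, typically in [0, 1]
    """
    out = np.asarray(neutral, dtype=np.float64).copy()
    for name, w in weights.items():
        out += w * deltas[name]
    return out

# e.g. 40% of the "happy" target blended with 20% of the "sad" target:
# animated = blend_expression(neutral_verts,
#                             {"happy": d_happy, "sad": d_sad},
#                             {"happy": 0.4, "sad": 0.2})
```
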
19

Estimativa da pose da cabeça em imagens monoculares usando um modelo no espaço 3D / Estimation of the head pose based on monocular images

Ramos, Yessenia Deysi Yari January 2013 (has links)
This dissertation presents a new method to accurately compute the head pose in monocular images. The head pose is estimated in the camera coordinate system by comparing the positions of specific facial features with the positions of those features in multiple instances of a prior 3D face model. Given an image containing a face, our method initially locates facial features such as the nose, eyes and mouth; these features are detected and located using an Active Shape Model for faces, trained on a data set covering a variety of head poses. For each face, we obtain a collection of feature locations (i.e. points) in the 2D image space. These 2D feature locations are then used as references in the comparison with the respective feature locations of multiple instances of our 3D face model, projected onto the same 2D image space. To obtain the depth of every feature point, we use the 3D spatial constraints imposed by our face model (i.e. the eyes are at a certain depth with respect to the nose, and so on). The head pose is estimated by minimising the comparison error between the 3D feature locations of the face in the image and a given instance of the face model (i.e. a geometrical transformation of the face model in the 3D camera space). Our preliminary experimental results are encouraging and indicate that our approach can provide more accurate results than comparable methods available in the literature.
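
The final minimisation can be illustrated with a standard perspective-n-point solve; the generic model coordinates below are hypothetical, and OpenCV's solvePnP stands in for the dissertation's search over model instances.

```python
import numpy as np
import cv2

# Hypothetical landmark coordinates of a generic 3D face model (arbitrary
# units, nose tip at the origin).  They stand in for one instance of the
# prior face model.
MODEL_3D = np.array([
    [   0.0,    0.0,    0.0],   # nose tip
    [   0.0, -330.0,  -65.0],   # chin
    [-225.0,  170.0, -135.0],   # left eye, outer corner
    [ 225.0,  170.0, -135.0],   # right eye, outer corner
    [-150.0, -150.0, -125.0],   # left mouth corner
    [ 150.0, -150.0, -125.0],   # right mouth corner
], dtype=np.float64)

def head_pose(image_pts, camera_matrix):
    """Rotation (as a Rodrigues vector) and translation that minimise the
    reprojection error between the model landmarks and the 2D features
    found by the shape-model fitting step (image_pts: 6x2, pixels)."""
    image_pts = np.asarray(image_pts, dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))              # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, image_pts, camera_matrix,
                                  dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec
```
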
