  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Human Facial Animation Based on Real Image Sequence

Yeh, Shih-Hao 24 July 2001 (has links)
How to efficiently and realistically generate 3D human face models is an interesting and difficult problem in computer graphics. Animated face models are essential to computer games, film making, online chat, virtual presence, video conferencing, etc. With the progress of computer technology, people demand more and more multimedia effects; constructing 3D human face models and facial animation has therefore been enthusiastically investigated in recent years. Many kinds of methods are used to construct 3D human face models, such as laser scanners and computer graphics techniques. So far, the most popular commercially available tools have utilized laser scanners, but these cannot trace a moving object. We propose a technique that constructs a 3D human face model from a real image sequence. The full procedure can be divided into four parts. In the first step, we use two cameras to photograph a human face simultaneously. From the distance between the two cameras we can calculate the depth of the human face and build a 3D face model. The second step works on an image sequence taken by a single camera: by comparing feature points between consecutive images, we obtain the motion vectors of the human face. With these we construct a template of an animated 3D face model. After that, we can map any new character's 2D image onto the template and build that character's animation. The full procedure is automatic, and we can easily construct exquisite human facial animation.
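The two-camera depth step in this abstract rests on standard stereo triangulation: a feature matched at horizontal pixel positions in the left and right images of a rectified pair has depth proportional to the camera baseline divided by the disparity. A minimal sketch follows; the focal length, baseline, and pixel coordinates are illustrative assumptions, not values from the thesis.

```python
# Stereo triangulation sketch: depth of a matched feature point from a
# rectified two-camera rig. All numeric values below are illustrative.

def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth (metres) of a point seen at x_left / x_right (pixels) in a
    rectified stereo pair: Z = f * B / d, where d is the disparity."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# Example: a nose-tip feature at x=412 px (left) and x=380 px (right),
# with an 800 px focal length and a 10 cm baseline.
z = stereo_depth(412, 380, focal_px=800, baseline_m=0.10)
print(round(z, 3))  # -> 2.5 (metres)
```

Repeating this over many matched feature points yields the per-point depths from which a 3D face mesh can be assembled.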
2

3D facial feature extraction and recognition : an investigation of 3D face recognition : correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques

Al-Qatawneh, Sokyna M. S. January 2010 (has links)
Face recognition research using automatic or semi-automatic techniques has emerged over the last two decades. One reason for the growing interest in this topic is the wide range of possible applications for face recognition systems. Another is the emergence of affordable hardware supporting digital photography and video, which has made the acquisition of high-quality, high-resolution 2D images much more ubiquitous. However, 2D recognition systems are sensitive to subject pose and illumination variations, and 3D face recognition, which is not directly affected by such environmental changes, could be used alone or in combination with 2D recognition. Recently, with the development of more affordable 3D acquisition systems and the availability of 3D face databases, 3D face recognition has been attracting interest as a way to tackle the limitations in performance of most existing 2D systems. In this research, we introduce a robust automated 3D face recognition system that takes 3D data of faces with different facial expressions, hair, shoulders, clothing, etc., extracts features for discrimination and uses machine learning techniques to make the final decision. A novel system for automatic processing of 3D facial data has been implemented using a multi-stage architecture: in a pre-processing and registration stage the data was standardized, spikes were removed, holes were filled and the face area was extracted. Then the nose region, which is relatively more rigid than other facial regions in an anatomical sense, was automatically located and analysed by computing the precise location of the symmetry plane. Then useful facial features and a set of effective 3D curves were extracted. Finally, the recognition and matching stage was implemented using cascade correlation neural networks and support vector machines for classification, and nearest neighbour algorithms for matching.
It is worth noting that the FRGC data set is the most challenging data set available supporting research on 3D face recognition, and machine learning techniques are widely recognised as appropriate and efficient classification methods.
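The nearest-neighbour matching stage described in this abstract can be illustrated by reducing each face to a fixed-length feature vector and assigning a probe the identity of the closest gallery vector. The sketch below uses synthetic random vectors; the thesis itself derives its features from 3D facial curves and the FRGC data.

```python
import numpy as np

# Nearest-neighbour matching sketch: each enrolled face is a fixed-length
# feature vector (standing in for features sampled from 3D facial curves);
# a probe is assigned the identity of the closest gallery vector.

rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=60) for name in ["subj_a", "subj_b", "subj_c"]}

def match(probe, gallery):
    """Return the gallery identity minimising Euclidean distance to the probe."""
    return min(gallery, key=lambda name: np.linalg.norm(probe - gallery[name]))

# A probe that is a slightly perturbed copy of subj_b should match subj_b.
probe = gallery["subj_b"] + rng.normal(scale=0.05, size=60)
print(match(probe, gallery))  # -> subj_b
```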
4

3D face recognition using multicomponent feature extraction from the nasal region and its environs

Gao, Jiangning January 2016 (has links)
This thesis is dedicated to extracting expression robust features for 3D face recognition. The use of 3D imaging enables the extraction of discriminative features that can significantly improve the recognition performance due to the availability of facial surface information such as depth, surface normals and curvature. Expression robust analysis using information from both depth and surface normals is investigated by dividing the main facial region into patches of different scales. The nasal region and adjoining parts of the cheeks are utilized as they are more consistent over different expressions and are hard to deliberately occlude. In addition, in comparison with other parts of the face, these regions have a high potential to produce discriminative features for recognition and overcome pose variations. An overview and classification methodology of the widely used 3D face databases are first introduced to provide an appropriate reference for 3D face database selection. Using the FRGC and Bosphorus databases, a low complexity pattern rejector for expression robust 3D face recognition is proposed by matching curves on the nasal region and its environs, which results in a low-dimension feature set of only 60 points. To extract discriminative features more locally, a novel multi-scale and multi-component local shape descriptor is further proposed, which achieves more competitive performances under the identification and verification scenarios. In contrast with much of the existing work on 3D face recognition, which considers captures obtained with laser scanners or structured light, this thesis also investigates applications to reconstructed 3D captures from lower cost photometric stereo imaging systems that have applications in real-world situations. To this end, the performance of the expression robust face recognition algorithms developed for captures from laser scanners is further evaluated on the Photoface database, which contains naturalistic expression variations.
To improve the recognition performance of all types of 3D captures, a universal landmarking algorithm is proposed that makes use of different components of the surface normals. Using facial profile signatures and thresholded surface normal maps, facial roll and yaw rotations are calibrated and five main landmarks are robustly detected on the well-aligned 3D nasal region. The landmarking results show that the detected landmarks demonstrate high within-class consistency and can achieve good recognition performances under different expressions. This is also the first landmarking work specifically developed for the reconstructed 3D captures from photometric stereo imaging systems.
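Surface normals of the kind this abstract builds its features and landmarks from can be estimated from a range image by finite differences of the depth values. A minimal numpy sketch, assuming a depth map on a regular pixel grid; the synthetic "face" here is just a tilted plane so that every normal is identical and easy to check.

```python
import numpy as np

# Surface-normal sketch: for a depth map z(x, y), an (unnormalised) normal at
# each pixel is (-dz/dx, -dz/dy, 1), then normalised to unit length.

def normals_from_depth(z):
    dz_dy, dz_dx = np.gradient(z)          # gradient along rows, then columns
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

y, x = np.mgrid[0:32, 0:32]
plane = 0.5 * x                            # depth increases linearly along x
n = normals_from_depth(plane.astype(float))
print(n[16, 16])                           # identical normal everywhere on a plane
```

On real range scans the gradients are noisy, which is why smoothing and robust estimation precede this step in practice.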
5

3D facial data fitting using the biharmonic equation.

Ugail, Hassan January 2006 (has links)
This paper discusses how a boundary-based surface fitting approach can be utilised to smoothly reconstruct a given human face from the scan data corresponding to that face. In particular, the paper discusses how a solution to the biharmonic equation can be used to set up the corresponding boundary value problem. We show how a compact explicit solution method can be utilised to solve the chosen biharmonic equation efficiently. Thus, given the raw scan data of a 3D face, we extract a series of profile curves from the data which can then be utilised as boundary conditions to solve the biharmonic equation. The resulting solution provides us with a continuous single surface patch describing the original face.
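The boundary-value formulation in this abstract can be illustrated in one dimension, where the biharmonic equation reduces to u''''(x) = 0: the solution between two boundary points is a cubic fixed entirely by value and slope conditions at each end, just as profile curves fix the surface patch. This is a toy analogue for intuition, not the paper's actual solution method.

```python
import numpy as np

# 1D analogue of the biharmonic boundary-value problem: u''''(x) = 0 on [0, 1]
# with value and slope prescribed at both ends. The general solution is the
# cubic a + b x + c x^2 + d x^3; four boundary conditions fix its coefficients.

def solve_biharmonic_1d(u0, du0, u1, du1):
    # Rows encode u(0), u'(0), u(1), u'(1) applied to coefficients [a, b, c, d].
    A = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [1, 1, 1, 1],
                  [0, 1, 2, 3]], dtype=float)
    return np.linalg.solve(A, [u0, du0, u1, du1])

a, b, c, d = solve_biharmonic_1d(u0=0.0, du0=1.0, u1=1.0, du1=0.0)
x = np.linspace(0, 1, 5)
u = a + b * x + c * x**2 + d * x**3     # smooth interpolant between boundaries
print(a, b, c, d)
```

In 2D the same idea applies with the extracted profile curves as boundary conditions, and the paper's explicit solution plays the role of the linear solve above.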
6

3D Face Reconstruction from a Front Image by Pose Extension in Latent Space

Zhang, Zhao 27 September 2023 (has links)
Numerous techniques for 3D face reconstruction from a single image exist, making use of large facial databases. However, they commonly encounter quality issues due to the absence of information from alternate perspectives. For example, 3D reconstruction from a single front-view input has limited realism, particularly for profile views. We have observed that multiple-view 3D face reconstruction yields higher-quality models than single-view reconstruction. Based on this observation, we propose a novel pipeline that combines several deep-learning methods to enhance the quality of reconstruction from a single frontal view. Our method requires only a single image (front view) as input, yet it generates multiple realistic facial viewpoints using various deep-learning networks. These viewpoints are utilized to create a 3D facial model, significantly enhancing the 3D face quality. Traditional image-space editing has limitations in manipulating content and styles while preserving high quality. However, editing in the latent space, which is the space after encoding or before decoding in a neural network, offers greater capabilities for manipulating a given photo. Motivated by the ability of neural networks to generate 2D images from an extensive database, and recognizing that multi-view 3D face reconstruction outperforms single-view approaches, we propose a new pipeline. This pipeline involves latent space manipulation by first finding a latent vector corresponding to a given image using the Generative Adversarial Network (GAN) inversion method. We then search for nearby latent vectors to synthesize multiple pose images from the provided input image, aiming to enhance 3D face reconstruction. The generated images are then fed into Diffusion models, another class of image synthesis networks, to generate their respective profile views. The Diffusion model is known for producing more realistic large-angle variations of a given object than GAN models do.
Subsequently, all these images (multi-view images) are fed into an Autoencoder, a neural network designed for 3D face model predictions, to derive the 3D structure of the face. Finally, the texture of the 3D face model is combined to enhance its realism, and certain areas of the 3D shape are refined to correct any unrealistic aspects. Our experimental results validate the effectiveness and efficiency of our method in reconstructing highly accurate 3D models of human faces from a single (front-view) input image. The reconstructed models retain high visual fidelity to the original image, even without the need for a 3D database.
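The "search for nearby latent vectors" step in this abstract amounts to moving a recovered latent code along a direction associated with pose. The sketch below shows only that arithmetic; the generator, the inversion network, and the pose direction are all hypothetical stand-ins, since the thesis's actual models are not reproduced here.

```python
import numpy as np

# Latent-space editing sketch: given a latent code recovered by (hypothetical)
# GAN inversion, nearby codes along an assumed "pose" direction would decode to
# the same identity at different yaw angles.

rng = np.random.default_rng(1)
z = rng.normal(size=512)               # latent code from a hypothetical inversion
pose_dir = rng.normal(size=512)        # stand-in for a learned pose direction
pose_dir /= np.linalg.norm(pose_dir)

def pose_variants(z, direction, steps=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """One latent code per synthesised viewpoint; each would be fed to a decoder."""
    return [z + s * direction for s in steps]

variants = pose_variants(z, pose_dir)
print(len(variants))  # -> 5
```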
7

Three Dimensional Face Recognition Using Two Dimensional Principal Component Analysis

Aljarrah, Inad A. 14 April 2006 (has links)
No description available.
9

3D face recognition based on machine learning

Qatawneh, S., Ipson, Stanley S., Qahwaji, Rami S.R., Ugail, Hassan January 2008 (has links)
3D facial data has great potential for overcoming the problems of illumination and pose variation in face recognition. In this paper, we present a 3D facial recognition system based on machine learning. We used landmarks for feature extraction and a Cascade Correlation neural network (CCNN) to make the final decision. Experiments are presented using 3D face images from the Face Recognition Grand Challenge database version 2.0. For the CCNN using Jack-knife evaluation, an accuracy of 100% was achieved for 7 faces with different expressions, with 100% for both specificity and sensitivity.
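The Jack-knife evaluation mentioned in this abstract is leave-one-out cross-validation: each sample is held out in turn, the classifier is built on the rest, and accuracy is averaged over all hold-outs. A minimal sketch on synthetic data, with a 1-nearest-neighbour rule standing in for the paper's Cascade Correlation network.

```python
import numpy as np

# Jack-knife (leave-one-out) evaluation sketch: hold out each sample, classify
# it against the remaining samples, and report the fraction classified correctly.

def leave_one_out_accuracy(X, y):
    correct = 0
    for i in range(len(X)):
        train = np.delete(np.arange(len(X)), i)
        dists = np.linalg.norm(X[train] - X[i], axis=1)
        pred = y[train][np.argmin(dists)]      # 1-NN stand-in classifier
        correct += pred == y[i]
    return correct / len(X)

rng = np.random.default_rng(2)
# Two well-separated synthetic "subjects", five samples each.
X = np.vstack([rng.normal(0, 0.1, size=(5, 3)), rng.normal(5, 0.1, size=(5, 3))])
y = np.array([0] * 5 + [1] * 5)
print(leave_one_out_accuracy(X, y))  # -> 1.0
```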
10

Processing and analysis of 2.5D face models for non-rigid mapping based face recognition using differential geometry tools

Szeptycki, Przemyslaw 06 July 2011 (has links) (PDF)
This Ph.D. thesis is dedicated to 3D facial surface analysis and processing, as well as to a newly proposed 3D face recognition modality based on mapping techniques. Facial surface processing and analysis is one of the most important steps for 3D face recognition algorithms. Automatic anthropometric facial feature localization also plays an important role in face localization, face expression recognition, face registration, etc.; thus its automation is a crucial step for 3D face processing algorithms. In this work we focused on precise and rotation-invariant landmark localization, where the landmarks are later used directly for face recognition. The landmarks are localized by combining local surface properties, expressed in terms of differential geometry tools, with a global facial generic model used for face validation. Since curvatures, which are differential geometry properties, are sensitive to surface noise, one of the main contributions of this thesis is a modification of the curvature calculation method. The modification incorporates the surface noise into the calculation and helps to control the smoothness of the curvatures. Therefore, the main facial points can be reliably and precisely localized (100% nose tip localization with 8 mm precision) under the influence of rotations and surface noise. The modification of the curvature calculation method was also tested under different face model resolutions, resulting in stable curvature values. Finally, since curvature analysis leads to many facial landmark candidates, whose validation is time consuming, facial landmark localization based on a learning technique was proposed. The learning technique helps to reject incorrect landmark candidates with high probability, thus accelerating landmark localization. Face recognition using 3D models is a relatively new subject, which has been proposed to overcome the shortcomings of the 2D face recognition modality.
However, 3D face recognition algorithms are likely to be more complicated. Additionally, since 3D face models describe facial surface geometry, they are more sensitive to facial expression changes. Our contribution is reducing the dimensionality of the input data by mapping 3D facial models onto a 2D domain using non-rigid, conformal mapping techniques. Having 2D images which represent the facial models, all previously developed 2D face recognition algorithms can be used. In our work, conformal shape images of 3D facial surfaces were fed into 2D2 PCA, achieving a rank-one recognition rate of more than 86% on the FRGC data set. The effectiveness of all the methods has been evaluated using the FRGC and Bosphorus datasets.
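The 2DPCA family of methods referenced above differs from classical PCA in that each image is kept as a matrix: an image covariance built from AᵀA terms is eigendecomposed, and each image is projected onto the top eigenvectors. A minimal sketch of the column-direction variant on small random matrices; the full 2D2 PCA used in the thesis additionally projects in the row direction.

```python
import numpy as np

# 2DPCA sketch (column direction): build the n-by-n image covariance from the
# mean-centred images and project every m-by-n image onto its top-k eigenvectors,
# giving compact m-by-k feature matrices.

def two_d_pca(images, k):
    mean = np.mean(images, axis=0)
    n = images.shape[2]
    G = np.zeros((n, n))
    for A in images:
        D = A - mean
        G += D.T @ D
    G /= len(images)
    # eigh returns eigenvalues in ascending order; keep the last k eigenvectors.
    _, vecs = np.linalg.eigh(G)
    X = vecs[:, -k:]
    return [A @ X for A in images]

rng = np.random.default_rng(3)
imgs = rng.normal(size=(8, 6, 5))      # 8 "images", each 6x5, for illustration
feats = two_d_pca(imgs, k=2)
print(feats[0].shape)                  # -> (6, 2)
```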
