21

Estimativa da pose da cabeça em imagens monoculares usando um modelo no espaço 3D / Estimation of the head pose based on monocular images

Ramos, Yessenia Deysi Yari January 2013 (has links)
This dissertation presents a new method to accurately compute the head pose in monocular images. The head pose is estimated in the camera coordinate system by comparing the positions of specific facial features with the positions of those features in multiple instances of a prior 3D face model. Given an image containing a face, our method initially locates facial features such as the nose, eyes, and mouth; these features are detected and located using an Active Shape Model for faces, trained on a data set with a variety of head poses. For each face, we obtain a collection of feature locations (i.e., points) in the 2D image space.
These 2D feature locations are then used as references in the comparison with the respective feature locations of multiple instances of our 3D face model, projected onto the same 2D image space. To obtain the depth of every feature point, we use the 3D spatial constraints imposed by our face model (e.g., the eyes are at a certain depth with respect to the nose, and so on). The head pose is estimated by minimizing the comparison error between the feature locations of the face in the image and those of a given instance of the face model (i.e., a geometrical transformation of the face model in the 3D camera space). Our preliminary experimental results are encouraging and indicate that our approach can provide more accurate results than comparable methods available in the literature.
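The core of the method above is minimizing a comparison error between detected image landmarks and the landmarks of a transformed face model. As a minimal sketch of that least-squares idea (a simplified 2D analogue, not the dissertation's implementation; the function name and point sets are illustrative), the best rotation and translation aligning two landmark sets can be recovered in closed form:

```python
import math

def fit_rigid_2d(model_pts, image_pts):
    """Closed-form least-squares rotation + translation aligning
    model_pts to image_pts -- a 2D analogue of estimating pose by
    minimizing the landmark comparison error."""
    n = len(model_pts)
    # Centroids of both landmark sets.
    mcx = sum(p[0] for p in model_pts) / n
    mcy = sum(p[1] for p in model_pts) / n
    icx = sum(p[0] for p in image_pts) / n
    icy = sum(p[1] for p in image_pts) / n
    # Accumulate dot and cross terms of the centered point sets;
    # the optimal rotation angle is atan2(cross, dot).
    dot = cross = 0.0
    for (mx, my), (ix, iy) in zip(model_pts, image_pts):
        ax, ay = mx - mcx, my - mcy
        bx, by = ix - icx, iy - icy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated model centroid onto the image centroid.
    tx = icx - (c * mcx - s * mcy)
    ty = icy - (s * mcx + c * mcy)
    return theta, (tx, ty)
```

In the full method the model points live in 3D and are projected through the camera, so the minimization is nonlinear; the 2D case only shows the least-squares structure of the error being minimized.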
22

Hybridní rozpoznávání 3D obličeje / Hybrid 3D Face Recognition

Mráček, Štěpán Unknown Date (has links)
This doctoral thesis deals with biometric 3D face recognition. It first surveys current recognition methods and techniques. A new algorithm is then proposed that employs so-called multi-algorithmic biometric fusion: the input 3D face scan is processed in parallel by several recognition sub-algorithms, and the final decision on the identity (or on the verification of the identity) of the user results from merging the outputs of these sub-algorithms. The recognition algorithm was evaluated on the publicly available FRGC v2.0 3D face database as well as on our own databases acquired with the Microsoft Kinect and SoftKinetic DS325 sensors.
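Score-level fusion of this kind is commonly implemented as a weighted sum of normalized sub-algorithm scores. A minimal sketch of that pattern (illustrative only; the thesis's actual fusion rule and weights are not specified here, and all names are hypothetical):

```python
def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so scores from different
    sub-algorithms become comparable before fusion."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(score_lists, weights):
    """Weighted-sum (score-level) fusion: each sub-algorithm produces
    one score per enrolled identity; the fused score per identity is
    the weighted sum of the normalized per-algorithm scores."""
    normalized = [min_max_normalize(s) for s in score_lists]
    n_ids = len(score_lists[0])
    return [
        sum(w * ns[i] for w, ns in zip(weights, normalized))
        for i in range(n_ids)
    ]
```

The identification decision is then the identity with the highest fused score; verification compares the fused score against a threshold.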
23

Data Driven Dense 3D Facial Reconstruction From 3D Skull Shape

Gorrila, Anusha 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / This thesis explores a data-driven, machine-learning-based solution for facial reconstruction from three-dimensional (3D) skull shape, for recognizing or identifying unknown subjects during forensic investigation. With over 8000 unidentified bodies over the past three decades, facial reconstruction of disintegrated bodies to aid identification has been a critical issue for forensic practitioners. Historically, clay modelling has been used for facial reconstruction; it not only requires an expert in the field but also demands a substantial amount of time for modelling, even after acquiring the skull model. Such manual reconstruction typically takes from one month to over three months of time and effort. The solution presented in this thesis uses 3D Cone Beam Computed Tomography (CBCT) data collected from many people to build a model of the relationship of facial skin to skull bone over a dense set of locations on the face. It then uses this skin-to-bone relationship model learned from the data to reconstruct a predicted face model from the skull shape of an unknown subject. The thesis also extends the algorithm in a way that could help modify the reconstructed face model interactively to account for the effects of age or weight. This uses the predicted face model as a starting point and creates different hypotheses of the facial appearance for different physical attributes. Attributes such as age and body mass index (BMI) are used to show the changes in physical facial appearance with the help of a tool we constructed. This could improve the identification process. The thesis also presents a method designed for testing and validating the facial reconstruction algorithm.
24

Towards the Development of an Efficient Integrated 3D Face Recognition System. Enhanced Face Recognition Based on Techniques Relating to Curvature Analysis, Gender Classification and Facial Expressions.

Han, Xia January 2011 (has links)
The purpose of this research was to enhance methods for the development of an efficient three-dimensional face recognition system. More specifically, one of our aims was to investigate how the use of the curvature of diagonal profiles, extracted from 3D facial geometry models, can help the neutral face recognition process. Another aim was to use a gender classifier employed on 3D facial geometry in order to reduce the search space of the database on which facial recognition is performed. 3D facial geometry with facial expression poses considerable challenges for face recognition, as identified by the communities involved in face recognition research. Thus, one aim of this study was to investigate the effects of the curvature-based method on face recognition under expression variations. Another aim was to develop techniques that can discriminate both expression-sensitive and expression-insensitive regions for face recognition based on non-neutral face geometry models. In the case of neutral face recognition, we developed a gender classification method using support vector machines based on measurements of the area and volume of selected regions of the face. This method reduced the initial search range of the database for a given image and hence reduced the computational time. Subsequently, in the characterisation of the face images, a minimum feature set of diagonal profiles, which we call T-shape profiles, containing distinguishing information was determined and extracted to characterise face models. We then used a method based on computing the curvatures of selected facial regions to describe this feature set. In addition to neutral face recognition, to solve the problem arising from data with facial expressions, the curvature-based T-shape profiles were initially employed and investigated for this purpose.
For this purpose, the feature sets of the expression-invariant and expression-variant regions were determined respectively and described by geodesic and Euclidean distances. Using regression models, the correlations between expression and neutral feature sets were identified. This enabled us to discriminate expression-variant features, yielding a gain in the face recognition rate. The results of the study indicate that our proposed methods, namely curvature-based recognition, 3D gender classification of facial geometry and analysis of facial expressions, are capable of undertaking face recognition using a minimum set of features, improving efficiency and reducing computation.
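The gender classifier described above is fed area and volume measurements of selected facial regions. As a minimal sketch of how such features can be computed from a triangulated region (illustrative only, not the thesis's code; the region here is a toy closed mesh), surface area sums per-triangle cross-product magnitudes, and volume sums signed tetrahedra via the divergence theorem:

```python
import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def region_features(vertices, faces):
    """Area and signed volume of a triangulated region.
    Area: half the cross-product magnitude per triangle.
    Volume: signed tetrahedron volumes against the origin, exact for
    a closed, outward-oriented surface (divergence theorem)."""
    area = 0.0
    volume = 0.0
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = _cross(_sub(b, a), _sub(c, a))
        area += math.sqrt(_dot(n, n)) / 2.0
        volume += _dot(a, _cross(b, c)) / 6.0
    return area, volume
```

Feature vectors of such (area, volume) pairs over the selected regions would then be passed to the support vector machine.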
25

Modelling facial action units using partial differential equations.

Ismail, Nur B.B. January 2015 (has links)
This thesis discusses a novel method for modelling facial action units. It presents a facial action unit model based on boundary value problems for the accurate representation of human facial expressions in three dimensions. In particular, a solution to a fourth-order elliptic Partial Differential Equation (PDE) subject to suitable boundary conditions is utilized, where the chosen boundary curves are based on the muscle movements defined by the Facial Action Coding System (FACS). This study involved three stages: modelling faces, manipulating faces, and application to simple facial animation. In the first stage, the PDE method is used to model and generate a smooth 3D face. The PDE formulation, using small sets of parameters, contributes to the efficiency of human face representation. In the manipulation stage, a generic PDE face with a neutral expression is manipulated into a face with an expression using PDE descriptors, each of which uniquely represents an action unit. Combining the PDE descriptors results in a generic PDE face bearing an expression; in this way four basic expressions were successfully modelled: happiness, sadness, fear and disgust. An example application is given using a simple animation technique called blendshapes, which uses the generic PDE face to animate the basic expressions. / Ministry of Higher Education, Malaysia and Universiti Malaysia Terengganu
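The blendshape technique mentioned at the end is conventionally a linear combination of expression offsets from a neutral mesh. A minimal sketch of that standard formulation (a generic illustration, not the thesis's animation code; meshes are flattened coordinate lists and all names are hypothetical):

```python
def blend(neutral, targets, weights):
    """Delta blendshapes: each target is a full-expression mesh with
    the same vertex layout as the neutral; the animated mesh offsets
    the neutral by the weighted per-vertex expression deltas:
        out = neutral + sum_i w_i * (target_i - neutral)."""
    out = list(neutral)
    for target, w in zip(targets, weights):
        for i, (n, t) in enumerate(zip(neutral, target)):
            out[i] += w * (t - n)
    return out
```

Setting one weight to 1 and the rest to 0 reproduces a single expression target; intermediate weights interpolate, which is what makes the technique suitable for simple animation.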
26

Optimizing Realistic 3D Facial Models for VR Avatars through Mesh Simplification / Optimering av realistiska 3D-ansiktsmodeller för VR-avatarer genom nätverksförenkling

Liu, Beiqian January 2023 (has links)
The use of realistic 3D avatars in Virtual Reality (VR) has gained significant traction in applications such as telecommunication and gaming, offering immersive experiences and face-to-face interactions. However, standalone VR devices often face limitations in computational resources and real-time rendering requirements, necessitating the optimization of 3D models through mesh simplification to enhance performance and ensure a smooth user experience. This thesis presents a pipeline that utilizes a Convolutional Neural Network to reconstruct realistic 3D human facial models in a static form from single RGB head images. The reconstructed models are then subjected to the Quadric Error Metrics simplification algorithm, enabling different levels of simplification to be achieved. An evaluation was conducted, utilizing 30 photos from the NoW dataset, to examine the trade-offs associated with employing mesh simplification on the generated facial models within the VR environment. The evaluation results demonstrate that reducing the polygon count improves frame rates and reduces GPU usage in VR, thereby enhancing overall performance. However, this improvement comes at the cost of increased simplification execution time and geometric errors, and decreased perceptual quality. This research contributes to the understanding of mesh simplification's impact on human facial models within the VR context, providing insights into balancing model complexity and real-time rendering performance, particularly in resource-constrained environments such as mobile devices or cloud-based applications, as well as for models located farther away from the cameras.
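The Quadric Error Metrics algorithm used above scores each candidate vertex position by its summed squared distance to the planes of the surrounding triangles. A minimal sketch of that cost (the quadric accumulation and evaluation only, not the full edge-collapse simplifier; all names are illustrative):

```python
def plane_quadric(a, b, c, d):
    """Fundamental error quadric K_p = p p^T for the plane
    ax + by + cz + d = 0 (with a^2 + b^2 + c^2 = 1),
    stored as a 4x4 row-major matrix."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Quadrics are additive: a vertex's quadric is the sum of the
    quadrics of its incident triangle planes."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(q, v):
    """Cost v^T Q v of placing a vertex at v = (x, y, z): the summed
    squared distances to the accumulated planes -- the quantity QEM
    simplification minimizes when choosing each edge collapse."""
    h = (v[0], v[1], v[2], 1.0)
    return sum(h[i] * q[i][j] * h[j] for i in range(4) for j in range(4))
```

In the full algorithm, edges are collapsed in order of increasing minimal quadric error until the target polygon count is reached, which is how the pipeline produces its different simplification levels.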
27

Synthesis and analysis of human faces using multi-view, multi-illumination image ensembles

Lee, Jinho 02 December 2005 (has links)
No description available.
28

Reconstruction of 3D human facial images using partial differential equations.

Elyan, Eyad, Ugail, Hassan January 2007 (has links)
One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality, facial expression simulation and computer-based plastic surgery. This paper addresses a method for the construction of the 3D geometry of human faces based on the use of elliptic Partial Differential Equations (PDEs). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained using a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we have shown the efficiency of the PDE-based method for 3D facial surface reconstruction using scan data. In addition, we also show that our approach provides an efficient way of representing faces using a small set of parameters that could be utilized for efficient facial data storage and verification purposes.
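The PDE method above generates each surface patch as the solution of a fourth-order boundary value problem driven by its boundary curves. A one-dimensional analogue makes the idea concrete (a sketch only, not the paper's surface formulation): solving X'''' = 0 with clamped boundary data X(0), X'(0), X(1), X'(1) yields the cubic Hermite interpolant, i.e. the equation acts as a smooth blend of the boundary conditions, just as the elliptic PDE blends the four boundary curves in the surface case.

```python
def hermite_blend(p0, m0, p1, m1, t):
    """Exact solution of the 1D analogue X''''(t) = 0 with clamped
    boundary conditions X(0)=p0, X'(0)=m0, X(1)=p1, X'(1)=m1:
    the cubic Hermite interpolant, evaluated at t in [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1    # weight of start position
    h10 = t**3 - 2 * t**2 + t        # weight of start derivative
    h01 = -2 * t**3 + 3 * t**2       # weight of end position
    h11 = t**3 - t**2                # weight of end derivative
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Applying this component-wise to curve points (with boundary curves and cross-boundary derivatives in place of the scalar data) is the spirit of how a small set of boundary parameters determines the whole interior of a PDE patch.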
29

Novel algorithms for 3D human face recognition

Gupta, Shalini, 1979- 27 April 2015 (has links)
Automated human face recognition is a computer vision problem of considerable practical significance. Existing two-dimensional (2D) face recognition techniques perform poorly for faces with uncontrolled poses, lighting and facial expressions. Face recognition technology based on three-dimensional (3D) facial models is now emerging. Geometric facial models can be easily corrected for pose variations. They are illumination invariant, and provide structural information about the facial surface. Algorithms for 3D face recognition exist; however, the area is far from being a mature technology. In this dissertation we address a number of open questions in the area of 3D human face recognition. Firstly, we make available to qualified researchers in the field, at no cost, a large Texas 3D Face Recognition Database, which was acquired as a part of this research work. This database contains 1149 2D and 3D images of 118 subjects. We also provide 25 manually located facial fiducial points on each face in this database. Our next contribution is the development of a completely automatic novel 3D face recognition algorithm, which employs discriminatory anthropometric distances between carefully selected local facial features. This algorithm neither uses general-purpose pattern recognition approaches, nor does it directly extend 2D face recognition techniques to the 3D domain. Instead, it is based on an understanding of the structurally diverse characteristics of human faces, which we isolate from the scientific discipline of facial anthropometry. We demonstrate the effectiveness and superior performance of the proposed algorithm, relative to existing benchmark 3D face recognition algorithms. A related contribution is the development of highly accurate and reliable 2D+3D algorithms for automatically detecting 10 anthropometric facial fiducial points. While developing these algorithms, we identify unique structural/textural properties associated with the facial fiducial points.
Furthermore, unlike previous algorithms for detecting facial fiducial points, we systematically evaluate our algorithms against manually located facial fiducial points on a large database of images. Our third contribution is the development of an effective algorithm for computing the structural dissimilarity of 3D facial surfaces, which uses a recently developed image similarity index called the complex-wavelet structural similarity index. This algorithm is unique in that, unlike existing approaches, it does not require that the facial surfaces be finely registered before they are compared. Furthermore, it is nearly an order of magnitude more accurate than existing facial surface matching based approaches. Finally, we propose a simple method to combine the two new 3D face recognition algorithms that we developed, resulting in a 3D face recognition algorithm that is competitive with the existing state-of-the-art algorithms.
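The anthropometric approach above builds recognition features from distances between facial fiducial points. A minimal sketch of that pipeline (a toy illustration, not the dissertation's algorithm or its discriminatory feature selection; point sets, pairs, and the nearest-neighbour matcher are all illustrative):

```python
import math

def anthropometric_features(fiducials, pairs):
    """Feature vector of Euclidean distances between selected pairs
    of 3D facial fiducial points (landmarks)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [dist(fiducials[i], fiducials[j]) for i, j in pairs]

def identify(probe_feats, gallery):
    """Nearest-neighbour identification: return the enrolled label
    whose feature vector is closest (L2) to the probe's."""
    def l2(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(gallery, key=lambda label: l2(gallery[label], probe_feats))
```

In the actual algorithm the distance pairs are chosen for their discriminatory power using facial anthropometry, and the classifier is correspondingly more sophisticated; the sketch only shows the distances-as-features structure.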
30

3D Human Face Reconstruction and 2D Appearance Synthesis

Zhao, Yajie 01 January 2018 (has links)
3D human face reconstruction has been an active research area for decades due to its wide applications, such as animation, recognition and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains significantly valuable, as images are much easier to access and store. In this dissertation, we first propose three image-based face reconstruction approaches according to different assumptions about the inputs. In the first approach, face geometry is extracted from multiple key frames of a video sequence with different head poses; the camera must be calibrated under this assumption. As the first approach is limited to videos, our second approach focuses on a single image. This approach also refines the geometry by adding fine detail using shading cues; we propose a novel albedo estimation and linear optimization algorithm for it. In the third approach, we further loosen the constraint on the input image to arbitrary in-the-wild images. Our proposed approach can robustly reconstruct high-quality models even with extreme expressions and large poses. We then explore the applicability of our face reconstructions in four interesting applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylizing and video face replacement. We demonstrate the great potential of our reconstruction approaches in these real-world applications. In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs). However, the large occlusion of the face is a big obstacle for people communicating in a face-to-face manner. In another application, we explore hardware/software solutions for synthesizing the face image in the presence of HMDs. We design two setups (experimental and mobile) which integrate two near-IR cameras and one color camera to solve this problem.
With our algorithm and prototype, we can achieve photo-realistic results. We further propose a deep neural network to solve the HMD removal problem, treating it as a face inpainting problem. This approach doesn't need special hardware and runs in real time with satisfying results.