111 |
Novel algorithms for 3D human face recognition / Gupta, Shalini, 1979- 27 April 2015 (has links)
Automated human face recognition is a computer vision problem of considerable practical significance. Existing two-dimensional (2D) face recognition techniques perform poorly for faces with uncontrolled poses, lighting and facial expressions. Face recognition technology based on three-dimensional (3D) facial models is now emerging. Geometric facial models can be easily corrected for pose variations. They are illumination invariant, and provide structural information about the facial surface. Algorithms for 3D face recognition exist; however, the area is far from being a mature technology. In this dissertation we address a number of open questions in the area of 3D human face recognition. First, we make available to qualified researchers in the field, at no cost, a large Texas 3D Face Recognition Database, which was acquired as a part of this research work. This database contains 1149 2D and 3D images of 118 subjects. We also provide 25 manually located facial fiducial points on each face in this database. Our next contribution is the development of a completely automatic novel 3D face recognition algorithm, which employs discriminatory anthropometric distances between carefully selected local facial features. This algorithm neither uses general-purpose pattern recognition approaches, nor does it directly extend 2D face recognition techniques to the 3D domain. Instead, it is based on an understanding of the structurally diverse characteristics of human faces, which we draw from the scientific discipline of facial anthropometry. We demonstrate the effectiveness and superior performance of the proposed algorithm, relative to existing benchmark 3D face recognition algorithms. A related contribution is the development of highly accurate and reliable 2D+3D algorithms for automatically detecting 10 anthropometric facial fiducial points. While developing these algorithms, we identify unique structural/textural properties associated with the facial fiducial points. Furthermore, unlike previous algorithms for detecting facial fiducial points, we systematically evaluate our algorithms against manually located facial fiducial points on a large database of images. Our third contribution is the development of an effective algorithm for computing the structural dissimilarity of 3D facial surfaces, which uses a recently developed image similarity index called the complex-wavelet structural similarity index. This algorithm is unique in that, unlike existing approaches, it does not require that the facial surfaces be finely registered before they are compared. Furthermore, it is nearly an order of magnitude more accurate than existing facial surface matching-based approaches. Finally, we propose a simple method to combine the two new 3D face recognition algorithms that we developed, resulting in a 3D face recognition algorithm that is competitive with the existing state-of-the-art algorithms.
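To make the anthropometric-distance idea concrete, here is a minimal Python sketch, not the dissertation's implementation: it assumes the fiducial points are already available as 3D coordinates (for example, the 25 manually located points provided with the Texas 3D Face Recognition Database) and simply turns their pairwise Euclidean distances into a feature vector matched by nearest neighbour; the function names and the nearest-neighbour matcher are illustrative choices.

```python
# Sketch: anthropometric-distance features from 3D facial fiducial points
# (hypothetical data layout, not the dissertation's actual algorithm).
import numpy as np

def anthropometric_features(fiducials):
    """fiducials: (K, 3) array of 3D fiducial point coordinates.
    Returns the vector of pairwise Euclidean distances between them."""
    diffs = fiducials[:, None, :] - fiducials[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(fiducials), k=1)   # upper triangle, no self-distances
    return dists[iu]

def match(probe, gallery):
    """Nearest-neighbour matching of a probe feature vector against a
    gallery dict {subject_id: feature_vector}."""
    return min(gallery, key=lambda sid: np.linalg.norm(probe - gallery[sid]))
```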
|
112 |
Infrared imaging face recognition using nonlinear kernel-based classifiers / Domboulas, Dimitrios I. 12 1900 (has links)
Approved for public release; distribution is unlimited. / In recent years there has been an increased interest in effective individual control and enhanced security measures, and face recognition schemes play an important role in this growing market. In the past, most face recognition research studies have been conducted with visible imaging data. Only recently have IR imaging face recognition studies been reported for wide-use applications, as uncooled IR imaging technology has improved to the point where the resolution of these much cheaper cameras closely approaches that of their cooled counterparts. This study is part of on-going research conducted at the Naval Postgraduate School which investigates the feasibility of applying a low-cost uncooled IR camera to face recognition applications. This specific study investigates whether nonlinear kernel-based classifiers may improve overall classification rates over those obtained with linear classification schemes. The study is applied to a 50-subject IR database developed in-house with a low-resolution uncooled IR camera. Results show a best overall mean classification performance of around 98.55%, which represents a 5% performance improvement over the best linear classifier results obtained previously on the same database. This study also considers several metrics to evaluate the impact that variations in user-specified parameters have on the resulting classification performance. These results show that a low-cost, low-resolution IR camera combined with an efficient classifier can play an effective role in security-related applications. / Captain, Hellenic Air Force
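A rough idea of the nonlinear kernel-based classification setup can be sketched as follows; this is not the thesis' classifier, and the file names, preprocessing, and RBF parameters are placeholder assumptions used only to show the shape of such an experiment.

```python
# Sketch: nonlinear (RBF) kernel classification of flattened IR face images
# with cross-validated classification rate (illustrative parameters only).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_images, n_pixels) flattened low-resolution IR images, y: subject labels
X = np.load("ir_faces.npy")     # hypothetical file names
y = np.load("ir_labels.npy")

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean classification rate: %.2f%%" % (100 * scores.mean()))
```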
|
113 |
Measuring Community Consensus in Facial Characterization Using Spatial Databases and Fuzzy Logic / Mastros, James Lee 01 January 2005 (has links)
Spatial databases store geometric objects and capture spatial relationships that can be used to represent key features of the human face. One can search spatial databases for these objects, and seek the relationships between them, using fuzzy logic to provide a natural way to describe the human face for the purposes of facial characterization. This study focuses on community perception of short, average, or long nose length. Three algorithms were used to update community opinion of nose length. All three methods showed similar trends in nose-length classification, which could indicate that the effort to extract spatial data from images to classify nose length is not as crucial as previously thought, since community consensus ultimately gives similar results. However, additional testing with larger groups is needed to further validate any conclusion that spatial data can be eliminated.
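A minimal sketch of how fuzzy membership functions could encode "short", "average" and "long" nose length, and how individual ratings might be pooled into a community consensus; the breakpoints and the averaging rule below are illustrative assumptions, not the study's values or algorithms.

```python
# Sketch: fuzzy membership functions for nose-length characterization.
# Breakpoints (in mm) are arbitrary illustrations, not the study's values.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def nose_length_memberships(length_mm):
    return {
        "short":   tri(length_mm, 25, 35, 45),
        "average": tri(length_mm, 40, 50, 60),
        "long":    tri(length_mm, 55, 65, 80),
    }

def consensus(member_votes):
    """Pool individual membership dicts into a simple community average."""
    keys = member_votes[0].keys()
    return {k: sum(v[k] for v in member_votes) / len(member_votes) for k in keys}
```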
|
114 |
Pohlavní dimorfismus obličeje a jeho změny při stárnutí / Sexual dimorphism of the face and changes during senescence / Mydlová, Miriama January 2013 (links)
The shape of the human face changes constantly throughout life, even after development has stopped (Hennessy and Moss, 2001; Williams and Slice, 2010). This work applies geometric morphometric methods to the study of sexual dimorphism of the human face during ageing. Sexual dimorphism can be defined as a systematic difference in form between individuals of different sex within the same species (Samal et al., 2007). Morphological changes related to the ageing of the human face were analysed on data obtained from 3D surface models of human faces using methods of geometric morphometry (Dense Correspondence Analysis, Principal Component Analysis, Shell-to-Shell Deviation) and multivariate statistics (Scree Plot, Hotelling's T²-test, permutation test, MANOVA). The results indicate that both the form (size together with shape) and the shape of male and female faces change significantly with ageing. Individuals aged 20-40 years differ in facial form, whereas the oldest men, aged 61-82 years, differ from women only in facial shape. The largest differences in sexual dimorphism occur in the middle age category (41-60 years), where there are significant differences not only in form but also in shape alone. Key words: ageing, form and shape, geometric morphometry, human face, sexual dimorphism
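The group comparison described above can be illustrated with a small sketch of a two-sample Hotelling's T²-test applied to PCA scores of aligned face shapes; the dense-correspondence preprocessing is assumed to have been done elsewhere, and the variable names are hypothetical.

```python
# Sketch: testing sexual dimorphism of facial form with Hotelling's T^2 on
# PCA scores (illustrative; the study used dense 3D correspondence models).
import numpy as np
from scipy import stats

def hotelling_t2(A, B):
    """Two-sample Hotelling's T^2 test on rows of A and B (samples x variables).
    Returns the T^2 statistic and the F-approximation p-value."""
    nA, nB, p = len(A), len(B), A.shape[1]
    diff = A.mean(0) - B.mean(0)
    pooled = ((nA - 1) * np.cov(A, rowvar=False) +
              (nB - 1) * np.cov(B, rowvar=False)) / (nA + nB - 2)
    t2 = (nA * nB) / (nA + nB) * diff @ np.linalg.solve(pooled, diff)
    f = t2 * (nA + nB - p - 1) / ((nA + nB - 2) * p)
    return t2, stats.f.sf(f, p, nA + nB - p - 1)

# X_male, X_female: (n_subjects, n_components) PCA scores of aligned face shapes
# t2, pval = hotelling_t2(X_male, X_female)
```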
|
115 |
3D object retrieval and recognition. / Three-dimensional object retrieval and recognition / January 2010 (has links)
Gong, Boqing. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 53-59). / Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
  1.1 --- 3D Object Representation --- p.1
    1.1.1 --- Polygon Mesh --- p.2
    1.1.2 --- Voxel --- p.2
    1.1.3 --- Range Image --- p.3
  1.2 --- Content-Based 3D Object Retrieval --- p.3
  1.3 --- 3D Facial Expression Recognition --- p.4
  1.4 --- Contributions --- p.5
Chapter 2 --- 3D Object Retrieval --- p.6
  2.1 --- A Conceptual Framework for 3D Object Retrieval --- p.6
    2.1.1 --- Query Formulation and User Interface --- p.7
    2.1.2 --- Canonical Coordinate Normalization --- p.8
    2.1.3 --- Representations of 3D Objects --- p.10
    2.1.4 --- Performance Evaluation --- p.11
  2.2 --- Public Databases --- p.13
    2.2.1 --- Databases of Generic 3D Objects --- p.14
    2.2.2 --- A Database of Articulated Objects --- p.15
    2.2.3 --- Domain-Specific Databases --- p.15
    2.2.4 --- Data Sets for the Shrec Contest --- p.16
  2.3 --- Experimental Systems --- p.16
  2.4 --- Challenges in 3D Object Retrieval --- p.17
Chapter 3 --- Boosting 3D Object Retrieval by Object Flexibility --- p.19
  3.1 --- Related Work --- p.19
  3.2 --- Object Flexibility --- p.21
    3.2.1 --- Definition --- p.21
    3.2.2 --- Computation of the Flexibility --- p.22
  3.3 --- A Flexibility Descriptor for 3D Object Retrieval --- p.24
  3.4 --- Enhancing Existing Methods --- p.25
  3.5 --- Experiments --- p.26
    3.5.1 --- Retrieving Articulated Objects --- p.26
    3.5.2 --- Retrieving Generic Objects --- p.27
    3.5.3 --- Experiments on Larger Databases --- p.28
    3.5.4 --- Comparison of Times for Feature Extraction --- p.31
  3.6 --- Conclusions & Analysis --- p.31
Chapter 4 --- 3D Object Retrieval with Referent Objects --- p.32
  4.1 --- 3D Object Retrieval with Prior --- p.32
  4.2 --- 3D Object Retrieval with Referent Objects --- p.34
    4.2.1 --- Natural and Man-made 3D Object Classification --- p.35
    4.2.2 --- Inferring Priors Using 3D Object Classifier --- p.36
    4.2.3 --- Reducing False Positives --- p.37
  4.3 --- Conclusions and Future Work --- p.38
Chapter 5 --- 3D Facial Expression Recognition --- p.39
  5.1 --- Introduction --- p.39
  5.2 --- Separation of BFSC and ESC --- p.43
    5.2.1 --- 3D Face Alignment --- p.43
    5.2.2 --- Estimation of BFSC --- p.44
  5.3 --- Expressional Regions and an Expression Descriptor --- p.45
  5.4 --- Experiments --- p.47
    5.4.1 --- Testing the Ratio of Preserved Energy in the BFSC Estimation --- p.47
    5.4.2 --- Comparison with Related Work --- p.48
  5.5 --- Conclusions --- p.50
Chapter 6 --- Conclusions --- p.51
Bibliography --- p.53
|
116 |
3D Synthetic Human Face Modelling Tool Based On T-spline Surfaces / Aydogan, Ali 01 December 2007 (links) (PDF)
In this thesis work, a 3D Synthetic Human Face Modelling Software is implemented using C++ and OpenGL. Bézier surfaces, B-spline surfaces, Nonuniform Rational B-spline surfaces, Hierarchical B-spline surfaces and T-spline surfaces are evaluated as options for the surface description method. T-spline surfaces are chosen since they are found to be superior considering the requirements of the work. In the modelling process, a modular approach is followed. First, highly detailed facial regions (i.e., nose, eyes, mouth) are modelled; then these models are unified into a complete face model employing the merging capabilities of T-splines. Local and global features of the face model are parameterized to provide the ability to create and edit various face models. To enhance the visual quality of the model, a region-variable rendering scheme is employed. In doing this, a new file format to define T-spline surfaces is proposed. To reduce the computational and memory cost of the software, a simplified version of the T-spline surface description method is proposed and used.
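For orientation, the following sketch evaluates a point on a tensor-product B-spline surface, the construction that T-splines generalize by allowing T-junctions in the control grid; it is not the thesis' T-spline code, and the control net, knot vectors and degree are assumed inputs.

```python
# Sketch: evaluating a tensor-product B-spline surface point via the
# Cox-de Boor recursion (illustrative of the surface family, not T-splines).
import numpy as np

def cox_de_boor(knots, i, p, u):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * cox_de_boor(knots, i, p - 1, u)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * cox_de_boor(knots, i + 1, p - 1, u)
    return left + right

def surface_point(ctrl, ku, kv, u, v, p=3):
    """Surface point from a (nu, nv, 3) control net with knot vectors ku, kv."""
    nu, nv, _ = ctrl.shape
    Bu = np.array([cox_de_boor(ku, i, p, u) for i in range(nu)])
    Bv = np.array([cox_de_boor(kv, j, p, v) for j in range(nv)])
    return np.einsum("i,j,ijk->k", Bu, Bv, ctrl)   # weighted sum of control points
```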
|
117 |
Representations and matching techniques for 3D free-form object and face recognition / Mian, Ajmal Saeed January 2007 (has links)
[Truncated abstract] The aim of visual recognition is to identify objects in a scene and estimate their pose. Object recognition from 2D images is sensitive to illumination, pose, clutter and occlusions. Object recognition from range data, on the other hand, does not suffer from these limitations. An important paradigm is model-based recognition, whereby 3D models of objects are constructed offline and saved in a database using a suitable representation. During online recognition, a similar representation of a scene is matched against the database to recognize the objects present in the scene . . . The tensor representation is extended to automatic and pose-invariant 3D face recognition. As the face is a non-rigid object, expressions can significantly change its 3D shape. Therefore, the last part of this thesis investigates representations and matching techniques for automatic 3D face recognition which are robust to facial expressions. A number of novelties are proposed in this area, along with their extensive experimental validation using the largest available 3D face database. These novelties include a region-based matching algorithm for 3D face recognition, a 2D and 3D multimodal hybrid face recognition algorithm, fully automatic 3D nose ridge detection, fully automatic normalization of 3D and 2D faces, a low-cost rejection classifier based on a novel Spherical Face Representation, and finally, automatic segmentation of the expression-insensitive regions of a face.
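The rejection-classifier idea can be illustrated with a coarse spherical-shell histogram of a 3D face point cloud, used to discard poor gallery candidates cheaply before detailed matching; the centring, binning and threshold below are assumptions in the spirit of a Spherical Face Representation, not Mian's actual formulation.

```python
# Sketch: coarse spherical-shell histogram of a 3D face point cloud as a
# low-cost rejection filter (illustrative assumptions throughout).
import numpy as np

def spherical_histogram(points, n_bins=10):
    """points: (N, 3) face point cloud, roughly centred (e.g., on the nose tip).
    Returns normalised counts of points falling in concentric spherical shells."""
    centred = points - points.mean(0)
    radii = np.linalg.norm(centred, axis=1)
    hist, _ = np.histogram(radii, bins=n_bins, range=(0, radii.max()))
    return hist / hist.sum()

def reject(probe_hist, gallery_hist, threshold=0.1):
    """Reject a gallery face early if its histogram differs too much from the probe's."""
    return np.abs(probe_hist - gallery_hist).sum() > threshold
```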
|
118 |
Facial expression recognition for multi-player on-line games / Zhan, Ce. January 2008 (has links)
Thesis (M.Comp.Sc.)--University of Wollongong, 2008. / Typescript. Includes bibliographical references: leaves 88-98.
|
119 |
3D facial expression modeling and analysis with topographic information / Wei, Xiaozhou. January 2008 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Computer Science, 2008. / Includes bibliographical references.
|
120 |
3-D face recognition / Eriksson, Anders 12 1900 (has links)
Thesis (MEng)--Stellenbosch University, 1999. / ENGLISH ABSTRACT: In recent years face recognition has been a focus of intensive research but has still not achieved its full potential, mainly due to the limited ability of existing systems to cope with varying pose and illumination. The most popular techniques for overcoming this problem are the use of 3-D models or stereo information, as these provide a system with the necessary information about the human face to ensure good recognition performance on faces with largely varying poses. In this thesis we present a novel approach to view-invariant face recognition that utilizes stereo information extracted from calibrated stereo image pairs. The method is invariant to scaling, rotation and variations in illumination. For each of the training image pairs a number of facial feature points are located in both images using Gabor wavelets. From this, along with the camera calibration information, a sparse 3-D mesh of the face can be constructed. This mesh is then stored along with the Gabor wavelet coefficients at each feature point, resulting in a model that contains both the geometric information of the face and its texture, described by the wavelet coefficients. Recognition is then conducted by filtering the test image pair with a Gabor filter bank, projecting the stored model's feature points onto the image pairs and comparing the Gabor coefficients from the filtered image pairs with the ones stored in the model. The fit is optimised by rotating and translating the 3-D mesh. With this method reliable recognition results were obtained on a database with large variations in pose and illumination. / AFRIKAANSE OPSOMMING (Afrikaans summary): Although face recognition has been investigated intensively over the past few years, it has not yet reached its full potential. This is mainly because current systems cannot adapt to different illumination and poses of the subject. The best-known technique to compensate for this is the use of 3-D models or stereo information, which enables the system to perform accurate face recognition on faces with large pose variation. This work describes a new method for pose-independent face recognition using stereo image pairs. The method is invariant to scaling, rotation and changes in illumination. A number of facial feature points are found in each image pair of the training data using Gabor filters. These points and the camera calibration information are used to construct a 3-D framework of the face. The face model used to classify test images consists of this framework and the Gabor filter coefficients at each feature point. Classification of a test image pair is done by filtering the test images with a Gabor filter bank. The stored model feature points are then projected onto the image pair, and the Gabor coefficients of the filtered images are compared with the coefficients stored in the model. The fit is optimised by rotation and translation of the 3-D framework. The study showed that this method gives accurate results for a database with large variation in pose and illumination.
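The matching step described in the abstract can be sketched roughly as follows: sample Gabor filter responses (a "jet") at each projected feature point and compare jets by normalised correlation; the filter-bank parameters, kernel sizes and sampling details are assumptions, not the thesis' exact values.

```python
# Sketch: Gabor-jet extraction and comparison at projected feature points
# (illustrative filter bank; not the thesis' implementation).
import numpy as np
import cv2

def gabor_jet(image, x, y, n_orient=8, n_scale=5):
    """Stack of Gabor filter responses sampled at pixel (x, y)."""
    jet = []
    for s in range(n_scale):
        for o in range(n_orient):
            # kernel size, sigma, orientation, wavelength, aspect ratio (assumed values)
            kern = cv2.getGaborKernel((31, 31), 4.0 * (s + 1),
                                      np.pi * o / n_orient, 8.0 * (s + 1), 0.5)
            resp = cv2.filter2D(image.astype(np.float32), cv2.CV_32F, kern)
            jet.append(resp[y, x])
    return np.array(jet)

def jet_similarity(j1, j2):
    """Normalised correlation between two Gabor jets; higher means a better match."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))
```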
|