741. Relative Contributions of Internal and External Features to Face Recognition. Jarudi, Izzat N.; Sinha, Pawan. 01 March 2003.
The central challenge in face recognition lies in understanding the role different facial features play in our judgments of identity. Notable in this regard are the relative contributions of the internal (eyes, nose and mouth) and external (hair and jaw-line) features. Past studies that have investigated this issue have typically used high-resolution images or good-quality line drawings as facial stimuli. The results obtained are therefore most relevant for understanding the identification of faces at close range. However, given that real-world viewing conditions are rarely optimal, it is also important to know how image degradations, such as loss of resolution caused by large viewing distances, influence our ability to use internal and external features. Here, we report experiments designed to address this issue. Our data characterize how the relative contributions of internal and external features change as a function of image resolution. While we replicated results of previous studies that have shown internal features of familiar faces to be more useful for recognition than external features at high resolution, we found that the two feature sets reverse in importance as resolution decreases. These results suggest that the visual system uses a highly non-linear cue-fusion strategy in combining internal and external features along the dimension of image resolution and that the configural cues that relate the two feature sets play an important role in judgments of facial identity.
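The loss of resolution caused by large viewing distances is commonly simulated by block-averaging the image; the abstract does not say how the stimuli were degraded, so the following is only a minimal, hypothetical sketch of such a manipulation:

```python
def downsample(image, factor):
    """Simulate increased viewing distance by block-averaging:
    each factor x factor block of pixels collapses to its mean.
    `image` is a 2D list of grayscale values."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - h % factor, factor):
        row = []
        for j in range(0, w - w % factor, factor):
            block = [image[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

Repeatedly applying this with larger factors yields the kind of resolution series along which the relative weight of internal and external features can be measured.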

742. A facial animation model for expressive audio-visual speech. Somasundaram, Arunachalam. January 2006.
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 131-139).

743. Human Identification Based on Three-Dimensional Ear and Face Models. Cadavid, Steven. 05 May 2011.
We propose three biometric systems for performing 1) Multi-modal Three-Dimensional (3D) ear + Two-Dimensional (2D) face recognition, 2) 3D face recognition, and 3) hybrid 3D ear recognition combining local and holistic features. For the 3D ear component of the multi-modal system, uncalibrated video sequences are utilized to recover the 3D ear structure of each subject within a database. For a given subject, a series of frames is extracted from a video sequence and the Region-of-Interest (ROI) in each frame is independently reconstructed in 3D using Shape from Shading (SFS). A fidelity measure is then employed to determine the model that most accurately represents the 3D structure of the subject’s ear. Shape matching between a probe and gallery ear model is performed using the Iterative Closest Point (ICP) algorithm. For the 2D face component, a set of facial landmarks is extracted from frontal facial images using the Active Shape Model (ASM) technique. Then, the responses of the facial images to a series of Gabor filters at the locations of the facial landmarks are calculated. The Gabor features are stored in the database as the face model for recognition. Match-score level fusion is employed to combine the match scores obtained from both the ear and face modalities. The aim of the proposed system is to demonstrate the superior performance that can be achieved by combining the 3D ear and 2D face modalities over either modality employed independently. For the 3D face recognition system, we employ an Adaboost algorithm to build a classifier based on geodesic distance features. Firstly, a generic face model is finely conformed to each face model contained within a 3D face dataset. Secondly, the geodesic distances between anatomical point pairs are computed across each conformed generic model using the Fast Marching Method.
The Adaboost algorithm then generates a strong classifier based on a collection of geodesic distances that are most discriminative for face recognition. The identification and verification performances of three Adaboost algorithms, namely, the original Adaboost algorithm proposed by Freund and Schapire, and two variants – the Gentle and Modest Adaboost algorithms – are compared. For the hybrid 3D ear recognition system, we propose a method to combine local and holistic ear surface features in a computationally efficient manner. The system comprises four primary components, namely, 1) ear image segmentation, 2) local feature extraction and matching, 3) holistic feature extraction and matching, and 4) a fusion framework combining local and holistic features at the match score level. For the segmentation component, we employ our method proposed in [111] to localize a rectangular region containing the ear. For the local feature extraction and representation component, we extend the Histogram of Categorized Shapes (HCS) feature descriptor, proposed in [111], to an object-centered 3D shape descriptor, termed Surface Patch Histogram of Indexed Shapes (SPHIS), for surface patch representation and matching. For the holistic matching component, we introduce a voxelization scheme for holistic ear representation from which an efficient, element-wise comparison of gallery-probe model pairs can be made. The match scores obtained from both the local and holistic matching components are fused to generate the final match scores. Experiments conducted on the University of Notre Dame (UND) Collection J2 dataset demonstrate that the proposed approach outperforms state-of-the-art 3D ear biometric systems in both accuracy and efficiency.
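The abstract does not spell out the match-score level fusion rule used to combine modalities; a common baseline is min-max normalization of each modality's scores followed by a weighted sum. The sketch below illustrates that baseline only, and the equal weighting is an assumption, not the dissertation's exact method:

```python
def min_max_normalize(scores):
    """Scale a list of match scores to the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(ear_scores, face_scores, w_ear=0.5):
    """Match-score level fusion: normalize each modality's scores,
    then combine them with a weighted sum (higher = better match)."""
    ear_n = min_max_normalize(ear_scores)
    face_n = min_max_normalize(face_scores)
    return [w_ear * e + (1 - w_ear) * f for e, f in zip(ear_n, face_n)]
```

Normalizing before fusing matters because the two matchers (ICP residuals, Gabor-feature distances) produce scores on incompatible scales.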

744. An integration framework of feature selection and extraction for appearance-based recognition. Li, Qi. January 2006.
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Chandra Kambhamettu, Dept. of Computer & Information Sciences. Includes bibliographical references.

745. Self-organizing features for regularized image standardization. Gökçay, Didem. January 2001.
Thesis (Ph. D.)--University of Florida, 2001. / Title from first page of PDF file. Document formatted into pages; contains ix, 117 p.; also contains graphics. Vita. Includes bibliographical references (p. 109-116).

746. Analysis of abnormal craniofacial and ear development of a transgenic mutant with ectopic hoxb3 expression. Wong, Yee-man, Elaine. January 2006.
Thesis (Ph. D.)--University of Hong Kong, 2006. / Also available online.

747. 3D face structure extraction from images at arbitrary poses and under arbitrary illumination conditions. Zhang, Cuiping; Cohen, Fernand S. January 2006.
Thesis (Ph. D.)--Drexel University, 2006. / Includes abstract and vita. Includes bibliographical references (leaves 165-171).

748. Context-Based Algorithm for Face Detection. Wall, Helene. January 2005.
Face detection has been an active research area for more than ten years. It is a complex problem due to the high variability within and among faces; therefore, it is not possible to extract a general pattern to be used for detection. This is what makes face detection a challenge. This thesis gives the reader a background to the face detection problem and describes its two main approaches. A face detection algorithm is implemented using a context-based method in combination with an evolving neural network. The algorithm consists of two major steps: detecting possible face areas, and then detecting faces within those areas. This method makes it possible to reduce the search space. The performance of the algorithm is evaluated and analysed; several parameters affect it, including the feature extraction method, the classifier, and the images used. The analysis of the problems that occurred has provided a deeper understanding of the complexity of the face detection problem.
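The two-step structure described above (propose candidate face areas, then classify within them) can be sketched generically. The region proposer and classifier below are hypothetical placeholders standing in for the thesis's context-based method and evolving neural network:

```python
def detect_faces(image, propose_regions, classify_face, threshold=0.5):
    """Two-stage detection: a context step proposes candidate regions,
    and the face classifier runs only inside those regions, shrinking
    the search space compared to scanning the whole image."""
    detections = []
    for region in propose_regions(image):
        score = classify_face(region)  # probability-like face score
        if score >= threshold:
            detections.append((region, score))
    return detections
```

The design choice is the usual cascade trade-off: a cheap, permissive first stage discards most of the image so the expensive classifier is evaluated on few candidates.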

749. Gender differences in face recognition: The role of interest and friendship. Lovén, Johanna. January 2006.
Women outperform men in face recognition and are especially good at recognizing other females’ faces. This may be caused by a larger female interest in faces. The aims of this study were to investigate whether women were more interested in female faces and whether depth of friendship was related to face recognition. Forty-one women and 16 men completed two face recognition tasks: one in which the faces shown earlier had been presented one at a time, and one in which they had been presented in pairs. The Network of Relationships Inventory was used to assess depth of friendships. As hypothesized, women tended to recognize more female faces when faces were presented in pairs, although the difference was not statistically significant. No relationships were found between depth of friendships and face recognition. The results gave some support to the previously untested hypothesis that interest plays a role in women’s recognition of female faces.

750. Harnessing Social Networks for Social Awareness via Mobile Face Recognition. Bloess, Mark. 14 February 2013.
With more and more images being uploaded to social networks each day, the resources for identifying a large portion of the world are available. However, the tools to harness and utilize this information are not sufficient. This thesis presents a system, called PhacePhinder, which can build a face database from a social network and make it accessible from mobile devices. This is made possible by combining existing technologies. The system also makes use of fusion probabilistic latent semantic analysis to determine strong connections between users and content. Using this information, we can determine the most meaningful social connection to a recognized person, allowing us to inform the user of how they know the person being recognized. We conduct a series of offline and user tests to verify our results and compare them to existing algorithms. We show that, by combining a user’s friendship information with picture occurrence information, we can make stronger recommendations than those based on friendship alone. We demonstrate a working prototype that can identify a face in a picture taken with a mobile phone, using a database derived from images gathered directly from a social network, and return a meaningful social connection to the recognized face.
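Combining friendship strength with photo co-occurrence to rank social connections might look like the convex combination sketched below. The weighting, the co-occurrence cap, and both function names are illustrative assumptions, not PhacePhinder's actual fusion pLSA model:

```python
def connection_strength(friend_score, cooccurrence_count,
                        alpha=0.5, max_count=10):
    """Combine a friendship score in [0, 1] with photo co-occurrence
    (capped at max_count and scaled to [0, 1]) via a convex combination."""
    co = min(cooccurrence_count, max_count) / max_count
    return alpha * friend_score + (1 - alpha) * co

def best_connection(candidates):
    """candidates: list of (name, friend_score, cooccurrence_count).
    Returns the name with the strongest combined connection."""
    return max(candidates, key=lambda c: connection_strength(c[1], c[2]))[0]
```

This mirrors the thesis's claim that friendship plus picture occurrence beats friendship alone: a frequent photo companion can outrank a nominally closer friend.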