  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Face recognition from video

Harguess, Joshua David 30 January 2012
While the area of face recognition has been extensively studied in recent years, it remains a largely open problem, despite what movie and television studios would lead you to believe. Frontal, still face recognition research has seen a lot of success in recent years from many different researchers. However, the accuracy of such systems can be greatly diminished in cases such as increasing the variability of the database, occluding the face, and varying the illumination of the face. Further, varying the pose of the face (yaw, pitch, and roll) and the facial expression (smile, frown, etc.) adds even more complexity to the face recognition task, such as in the case of face recognition from video. In a more realistic video surveillance setting, a face recognition system should be robust to scale, pose, resolution, and occlusion, as well as successfully track the face between frames. Also, a more advanced face recognition system should be able to improve the face recognition result by utilizing the information present in multiple video cameras. We approach the problem of face recognition from video in the following manner. We assume that the training data for the system consists of only still image data, such as passport photos or mugshots in a real-world system. We then transform the problem of face recognition from video into a still face recognition problem. Our research focuses on solutions to detecting, tracking and extracting face information from video frames so that it may be utilized effectively in a still face recognition system. We have developed four novel methods that assist in face recognition from video and multiple cameras. The first uses a patch-based method to handle the face recognition task when only patches, or parts, of the face are seen in a video, such as when occlusion of the face happens often. The second fuses the recognition results of multiple cameras to improve the recognition accuracy.
In the third solution, we utilize multiple overlapping video cameras to improve the face tracking result, which in turn improves the face recognition accuracy of the system. We additionally implement a methodology to detect and handle occlusion so that unwanted information is not used in the tracking algorithm. Finally, we introduce the average-half-face, which is shown to improve the results of still face recognition by utilizing the symmetry of the face. In an attempt to understand the use of the average-half-face in face recognition, an analysis of the effect of face symmetry on face recognition results is presented.
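The average-half-face introduced above can be sketched in a few lines. This is an illustration inferred from the abstract's description (split the face along its vertical axis of symmetry, mirror one half, and average), not the author's exact implementation:

```python
def average_half_face(image):
    """image: list of rows, each a list of pixel intensities (even width).
    Returns a half-width image averaging the left half with the mirrored
    right half, exploiting the approximate symmetry of the face."""
    half = len(image[0]) // 2
    result = []
    for row in image:
        left = row[:half]
        right_mirrored = row[half:][::-1]  # mirror the right half
        result.append([(l + r) / 2 for l, r in zip(left, right_mirrored)])
    return result
```

The half-width output can then be fed to any still-image recognizer in place of the full face.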
72

Multiview Face Detection And Free Form Face Recognition For Surveillance

Anoop, K R 05 1900
The problem of face detection and recognition within a given database has become one of the important problems in computer vision. A simple approach to face detection in video is to run a learning-based face detector on every frame. But such an approach is computationally expensive and completely ignores the temporal continuity present in videos. Moreover, the search space can be reduced by utilizing visual cues extracted for the relevant task at hand (a top-down approach). Once detection is done, the next step is to perform face recognition against the available database. But the faces produced by the face detector output are neither aligned nor well cropped and are prone to scale change. We call such faces free-form faces. However, existing face recognition algorithms assume faces to be properly aligned and cropped, with the same scale as the faces in the database, which is highly constraining. In this thesis, we propose an integrated detect-track framework for multiview face detection in videos. We overcome the limitations of frame-based approaches by utilizing the temporal continuity present in videos and also incorporating top-down information about the task. We model the problem based on the concept of experiential sampling [2]. This consists of determining certain key positions which are relevant to the task (face detection). These key positions are referred to as attention samples, and multiview face detection is performed only at these locations. These statistical samples are estimated based on the visual cues, past experience, and the temporal continuity; this is modeled as a Bayesian filtering problem, which is solved using particle filters. In order to detect all views, we use a tracker integrated with the detector and develop a novel track termination algorithm using concepts from Track Before Detect (TBD) [26]. Such an approach is computationally efficient and also results in a lower false positive rate.
We provide experiments showing the efficiency of the integrated detect-track approach over a multiview face detector without a tracker. For free-form face recognition, we propose to use Principal Geodesic Analysis (PGA) of the covariance descriptors obtained from Gabor filters. This is analogous to Principal Component Analysis in Euclidean spaces (covariance descriptors lie on a Riemannian manifold). Such a descriptor is robust to alignment and scaling problems and is also of lower dimension. We also employ a sparse modeling technique for the face recognition task using these covariance descriptors, which are dimensionally reduced by transforming them onto a tangent space; we call the result the PGA feature. Further, we improve upon the recognition results of linear sparse modeling by a non-linear mapping of the PGA features, employing the "kernel trick" for these sparse models. We show that the kernelized sparse models using the PGA features are indeed very efficient for free-form face recognition by testing on two standard databases, namely the AR and YaleB databases.
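One step of the attention-sampling idea above can be sketched as a particle-filter update: propagate samples, re-weight by a visual cue, and resample so that detection effort concentrates on likely face locations. The cue function and dynamics here are hypothetical stand-ins; the thesis's actual cues (and the TBD-based termination) are richer:

```python
import random

def step_attention_samples(samples, cue, noise=5.0, rng=random):
    """One Bayesian-filtering step over attention samples (image positions).
    `cue(x, y)` returns a non-negative visual-cue likelihood (assumed)."""
    # 1. Propagate each sample with simple random-walk dynamics.
    moved = [(x + rng.gauss(0, noise), y + rng.gauss(0, noise))
             for x, y in samples]
    # 2. Weight each propagated sample by the visual cue at its position.
    weights = [cue(x, y) for x, y in moved]
    total = sum(weights)
    if total == 0:
        weights = [1.0] * len(moved)  # no cue response anywhere: stay uniform
        total = float(len(moved))
    weights = [w / total for w in weights]
    # 3. Resample so samples concentrate in high-cue regions; the detector
    #    then runs only at these positions instead of the whole frame.
    return rng.choices(moved, weights=weights, k=len(moved))
```

Iterating this over frames keeps the sample set on the face as it moves, which is what makes per-frame exhaustive search unnecessary.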
73

Feature based dynamic intra-video indexing

Asghar, Muhammad Nabeel January 2014
With the advent of digital imagery and its widespread application in all vistas of life, it has become an important component in the world of communication. Video content ranging across broadcast news, sports, personal videos, surveillance, movies, entertainment and similar domains is increasing exponentially in quantity, and it is becoming a challenge to retrieve content of interest from the corpora. This has led to an increased interest among researchers in concepts of video structure analysis, feature extraction, content annotation, tagging, video indexing, querying and retrieval. However, most of the previous work is confined to specific domains and constrained by quality, processing and storage capabilities. This thesis presents a novel framework agglomerating the established approaches from feature extraction to browsing in one system of content-based video retrieval. The proposed framework significantly fills the identified gap while satisfying the imposed constraints on processing, storage, quality and retrieval times. The output entails a framework, methodology and prototype application allowing the user to efficiently and effectively retrieve content of interest such as age, gender and activity by specifying the relevant query. Experiments have shown plausible results, with an average precision and recall of 0.91 and 0.92 respectively for face detection using a Haar-wavelet-based approach. Precision for age ranges from 0.82 to 0.91 and recall from 0.78 to 0.84. Gender recognition gives better precision with males (0.89) compared to females, while recall gives a higher value with females (0.92). The activity of the subject has been detected using the Hough transform and classified using a Hidden Markov Model. A comprehensive dataset to support similar studies has also been developed as part of the research process.
A Graphical User Interface (GUI) providing a friendly and intuitive interface has been integrated into the developed system to facilitate the retrieval process. The comparison results for the intraclass correlation coefficient (ICC) show that the performance of the system closely resembles that of a human annotator. The performance has been optimised for time and error rate.
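The precision and recall figures quoted above follow the standard detection-metric definitions; as a reminder (the counts in the usage comment are hypothetical, not the thesis's raw numbers):

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Precision: fraction of reported detections that are correct.
    Recall: fraction of actual targets that were detected."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# e.g. 91 correct detections, 9 false alarms, 8 missed faces
# gives precision 0.91 and recall roughly 0.92
```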
74

Face Recognition with Preprocessing and Neural Networks

Habrman, David January 2016
Face recognition is the problem of identifying individuals in images. This thesis evaluates two methods used to determine whether pairs of face images belong to the same individual or not. The first method is a combination of principal component analysis and a neural network, and the second method is based on state-of-the-art convolutional neural networks. They are trained and evaluated using two different data sets. The first set contains many images with large variations in, for example, illumination and facial expression. The second consists of fewer images with small variations. Principal component analysis allowed the use of smaller networks. The largest network has 1.7 million parameters, compared to the 7 million used in the convolutional network. The use of smaller networks lowered the training time and evaluation time significantly. Principal component analysis proved to be well suited for the data set with small variations, outperforming the convolutional network, which needs larger data sets to avoid overfitting. The reduction in data dimensionality, however, led to difficulties classifying the data set with large variations. The generous amount of images in this set allowed the convolutional method to reach higher accuracies than the principal component method.
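The PCA preprocessing step described above can be sketched in pure Python (an illustration only; the thesis pairs PCA with a neural network, which is omitted here, and real systems keep many components, not one). Power iteration recovers the top principal component of the training faces, and new faces are projected onto it to shrink the input dimensionality:

```python
def top_principal_component(data, iters=200):
    """data: list of equal-length feature vectors (e.g. flattened faces).
    Returns (mean, unit top principal component) via power iteration."""
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, mean)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # Apply the covariance matrix implicitly: C v = X^T (X v) / n
        proj = [sum(x * vi for x, vi in zip(row, v)) for row in centered]
        v = [sum(p * row[j] for p, row in zip(proj, centered)) / n
             for j in range(d)]
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / norm for x in v]
    return mean, v

def project(vector, mean, component):
    """Reduce a face vector to its coordinate along the component."""
    return sum((x - m) * c for x, m, c in zip(vector, mean, component))
```

The projected coordinates, rather than raw pixels, then become the (much smaller) input to the classifier network.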
75

Gender differences in face recognition: The role of interest and friendship

Lovén, Johanna January 2006
Women outperform men in face recognition and are especially good at recognizing other females' faces. This may be caused by a larger female interest in faces. The aims of this study were to investigate whether women were more interested in female faces and whether depth of friendship was related to face recognition. Forty-one women and 16 men completed two face recognition tasks: one in which the faces shown earlier had been presented one at a time, and one in which they had been shown in pairs. The Network of Relationships Inventory was used to assess depth of friendships. As hypothesized, though not statistically significantly, women tended to recognize more female faces when faces were presented in pairs. No relationships were found between depth of friendships and face recognition. The results give some support to the previously untested hypothesis that interest plays a role in women's recognition of female faces.
76

Cognitive Mechanisms of False Facial Recognition

Edmonds, Emily Charlotte January 2011
Face recognition involves a number of complex cognitive processes, including memory, executive functioning, and perception. A breakdown of one or more of these processes may result in false facial recognition, a memory distortion in which one mistakenly believes that novel faces are familiar. This study examined the cognitive mechanisms underlying false facial recognition in healthy older and younger adults, patients with frontotemporal dementia, and individuals with congenital prosopagnosia. Participants completed face recognition memory tests that included several different types of lures, as well as tests of face perception. Older adults demonstrated a familiarity-based response strategy, reflecting a deficit in source monitoring and impaired recollection of context, as they could not reliably discriminate between study faces and highly familiar lures. In patients with frontotemporal dementia, temporal lobe atrophy alone was associated with a reduction of true facial recognition, while concurrent frontal lobe damage was associated with increased false recognition, a liberal response bias, and an overreliance on "gist" memory when making recognition decisions. Individuals with congenital prosopagnosia demonstrated deficits in configural processing of faces and a reliance on feature-based processing, leading to false recognition of lures that had features in common from study to test. These findings may have important implications for the development of training programs that could serve to help individuals improve their ability to accurately recognize faces.
77

Robust Face Detection Using Template Matching Algorithm

Faizi, Amir 24 February 2009
Human face detection and recognition techniques play an important role in applications like face recognition, video surveillance, human-computer interfaces and face image databases. Using color information in images is one of the various possible techniques used for face detection. The novel technique used in this project was the combination of various techniques, such as skin color detection, template matching and gradient face detection, to achieve high accuracy of face detection for frontal faces. The objective of this work was to determine the best rotation angle to achieve optimal detection. Eye and mouth template matching have also been put to the test for feature detection.
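The template-matching step mentioned above can be sketched with a toy sum-of-squared-differences matcher (the thesis combines this with skin-color and gradient cues, which are omitted; the SSD criterion here is one common choice, not necessarily the one used):

```python
def best_match(image, template):
    """Slide `template` over `image` (both lists of rows of intensities)
    and return the (row, col) offset with the lowest sum of squared
    differences, i.e. the best-matching placement."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

In practice the search is restricted to skin-colored regions and repeated over a small set of rotations of the template, which is where the "best rotation angle" question arises.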
78

Linear Feature Extraction with Emphasis on Face Recognition

Mahanta, Mohammad Shahin 15 February 2010
Feature extraction is an important step in the classification of high-dimensional data such as face images. Furthermore, linear feature extractors are more prevalent due to computational efficiency and preservation of Gaussianity. This research proposes a simple and fast linear feature extractor approximating the sufficient statistic for Gaussian distributions. This method preserves the discriminatory information in both the first and second moments of the data and yields linear discriminant analysis as a special case. Additionally, an accurate upper bound on the error probability of a plug-in classifier can be used to approximate the number of features that minimizes the error probability. Therefore, tighter error bounds are derived in this work based on the Bayes error or the classification error on the trained distributions. These bounds can also be used for performance guarantees and to determine the number of training samples required to approach the Bayes classifier's performance.
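The claim that linear discriminant analysis arises as a special case can be seen from the Gaussian log-likelihood ratio. For two classes $\omega_1,\omega_2$ with means $\mu_1,\mu_2$ and covariances $\Sigma_1,\Sigma_2$:

```latex
\log\frac{p(x\mid\omega_1)}{p(x\mid\omega_2)}
  = -\tfrac{1}{2}\, x^{\top}\!\left(\Sigma_1^{-1}-\Sigma_2^{-1}\right) x
    + \left(\mu_1^{\top}\Sigma_1^{-1}-\mu_2^{\top}\Sigma_2^{-1}\right) x + c
```

When $\Sigma_1=\Sigma_2=\Sigma$, the quadratic term vanishes and the discriminant reduces to the linear projection $w^{\top}x$ with $w=\Sigma^{-1}(\mu_1-\mu_2)$, which is classical LDA; a linear extractor that preserves first- and second-moment information therefore retains this statistic.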
79

Component-based face recognition.

Dombeu, Jean Vincent Fonou. January 2008
Component-based automatic face recognition has been of interest to a growing number of researchers in the past fifteen years. However, the main challenge remains the automatic extraction of facial components for recognition in different face orientations, without any human intervention or any assumption about the location of these components. In this work, we investigate a solution to this problem. The facial components (eyes, nose, and mouth) are first detected in different orientations of the face. To ensure that the detected components are appropriate for recognition, a Support Vector Machine (SVM) classifier is applied to identify facial components that have been accurately detected. Thereafter, features are extracted from the correctly detected components by Gabor filters and Zernike moments combined. Gabor filters are used to extract the texture characteristics of the eyes, and Zernike moments are applied to compute the shape characteristics of the nose and the mouth. The texture and shape features are concatenated and normalized to build the final feature vector of the input face image. Experiments show that our feature extraction strategy is robust; it also provides a more compact representation of face images and achieves an average recognition rate of 95% in different face orientations. / Thesis (M.Sc.)-University of KwaZulu-Natal, 2008.
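The concatenate-and-normalize step at the end of the pipeline above can be sketched as follows (a minimal illustration; the Gabor and Zernike feature computations are assumed to happen upstream, and L2 normalization is one common choice the abstract does not specify):

```python
def fuse_features(gabor_eyes, zernike_nose, zernike_mouth):
    """Concatenate per-component feature vectors and L2-normalize the
    result into the final face descriptor."""
    vector = list(gabor_eyes) + list(zernike_nose) + list(zernike_mouth)
    norm = sum(x * x for x in vector) ** 0.5 or 1.0
    return [x / norm for x in vector]
```

Normalizing keeps the texture (eyes) and shape (nose, mouth) parts on a comparable scale so neither dominates the distance computation at matching time.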
80

Situated face detection

Espinosa-Romero, Arturo January 2001
In the last twenty years, important advances have been made in the field of automatic face processing, given the importance of human faces for personal identification, emotional expression, and verbal and non-verbal communication. The very first step in a face processing algorithm is the detection of faces; while this is a trivial problem in controlled environments, the detection of faces in real environments is still a challenging task. Until now, the most successful approaches for face detection represent the face as a grey-level pattern, and the problem itself is treated as the classification between "face" and "non-face" patterns. Satisfactory results have been achieved in this area. The main disadvantage is that an exhaustive search has to be done on each image in order to locate the faces. This search normally involves testing every single position in the image at different scales, and although this does not represent an important drawback in off-line face processing systems, in those cases where a real-time response is needed it is still a problem. In the different proposed methods for face detection, the "observer" is a disembodied entity which holds no relationship with the observed scene. This thesis presents a framework for the efficient location of faces in real scenes in which, by considering the observer to be situated in the world, and the relationships that hold between the two, a set of constraints on the search space can be defined. The constraints rely on two main assumptions: first, the observer can purposively interact with the world (i.e. change its position relative to the observed scene), and second, the camera is fully calibrated. The first source of constraint is the structural information about the observer's environment, represented as a depth map of the scene in front of the camera.
From this representation, the search space can be constrained in terms of the range of scales at which a face might be found at different positions in the image. The second source of constraint is the geometrical relationship between the camera and the scene, which allows us to project a model of the subject into the scene in order to eliminate those areas where faces are unlikely to be found. In order to test the proposed framework, a system based on the premises stated above was constructed. It is based on three different modules: a face/non-face classifier, a depth estimation module and a search module. The classifier is composed of a set of convolutional neural networks (CNNs) trained to differentiate between face and non-face patterns; the depth estimation module uses a multilevel algorithm to compute the scene depth map from a sequence of captured images; and the search module projects the depth information and the subject model into the image where the search will be performed, in order to constrain the search space. Finally, the proposed system was validated by running a set of experiments on the individual modules and then on the whole system.
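The depth-to-scale constraint described above can be made concrete with the pinhole projection model (the 0.16 m face width and the focal length in the usage comment are illustrative assumptions, not values from the thesis):

```python
def expected_face_width_px(depth_m, focal_px, face_width_m=0.16):
    """Pinhole camera: pixel width of a face of known physical width
    at a given depth. The detector then only needs to search scales
    near this value at image positions with that depth."""
    return focal_px * face_width_m / depth_m

# e.g. with a 500 px focal length, a face 2 m away spans about 40 px,
# so scales far from 40 px can be skipped at those image positions.
```

This is exactly why the depth map shrinks the search: each image region maps to a narrow band of plausible face scales instead of the full scale pyramid.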
