21

Face recognition using structural approach. / CUHK electronic theses & dissertations collection

January 2006 (has links)
Face recognition is an important biometric authentication technology. In this thesis, we study face recognition using a structural approach, in which structural information of the face is extracted and used for recognition. / The first part of this thesis discusses methods for detecting certain facial features and their applications in face recognition. Generally, the more features that are detected with good accuracy and used for face recognition, the better the recognition result. We first propose a method to extract the eyebrow contours from the face image using an enhanced K-means clustering algorithm and a revised Snake algorithm. The reliable part of the extracted eyebrow contour is then used as a feature for face recognition. We then introduce a novel method to estimate the chin contour for face recognition. The method first estimates several possible locations of chin and cheek points, which are used to build a number of curves as chin-contour candidates. Based on the chin-like edges extracted by a modified Canny edge detector, the curve with the highest likelihood of being the actual chin contour is selected. Finally, the estimated chin contours with sufficiently high likelihood are used as a geometric feature for face recognition. Experimental results show that the proposed algorithms can extract eyebrow and chin contours with good accuracy, and that the extracted features are effective for improving face recognition rates. / The second part of this thesis deals with pose estimation and pose-invariant face recognition. Pose estimation is achieved based on the detected structural information of the face. We first propose a method for recognizing a face at any pose from a single frontal-view image. The first step of the method is feature detection, in which we detect the ear points with a novel algorithm. Then, a set of 3D head models is constructed for each test image based on the geometric features extracted from both the input image and each frontal-view image in the gallery. Using this set of potential models, we can obtain a set of potential poses. Based on these potential models and poses, feature templates and geometric features of the input face are then rectified to form the potential frontal views. The last step is feature comparison and final pose estimation. The major contribution of the proposed algorithm is that it can estimate and compensate for both sidespin and seesaw rotations, whereas existing model-based algorithms working from a single frontal view can only handle sidespin rotation. We also propose a method for pose-invariant face recognition from multi-view images. First, the 3D poses of faces in 2D images are estimated by using a 3D reference face model in three-layer linear iterative processes. The 3D model is updated to fit a particular person using an iterative algorithm. We then construct virtual frontal-view face images from the input 2D face images based on the estimated poses and the matched 3D face models. We extract waveletfaces from these virtual frontal views using the wavelet transform and perform linear discriminant analysis on these waveletfaces. Finally, the nearest feature space classifier is employed for feature comparison. The proposed methods were tested on commonly used face databases. Experimental results show that the proposed face recognition methods are robust and compare favourably with existing methods in terms of recognition rate. / Chen Qinran. / "September 2006." / Adviser: Wai Kuen Cham.
/ Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1814. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 134-154). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
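The chin-contour selection described in this abstract can be illustrated with a short, hedged sketch: extract edges with a Canny detector and keep the candidate curve with the strongest edge support. The thesis uses a modified Canny detector and its own likeliness measure; the code below uses the standard OpenCV detector, and the function names, thresholds and "edge support" score are illustrative assumptions only.

```python
# A minimal sketch (not the thesis's exact algorithm): score chin-contour
# candidates by how well they coincide with Canny edges, assuming OpenCV/NumPy.
import cv2
import numpy as np

def edge_support_score(candidate_xy, edge_map, radius=2):
    """Fraction of candidate points lying within `radius` pixels of an edge.

    candidate_xy: (N, 2) integer (x, y) points sampled along one hypothetical
    chin-contour candidate; edge_map: binary output of cv2.Canny.
    """
    kernel = np.ones((2 * radius + 1, 2 * radius + 1), np.uint8)
    fat_edges = cv2.dilate(edge_map, kernel)      # tolerate near-misses
    xs, ys = candidate_xy[:, 0], candidate_xy[:, 1]
    return float((fat_edges[ys, xs] > 0).mean())

def pick_chin_contour(gray_face, candidates, low=50, high=150):
    """Return the candidate curve with the highest edge support."""
    edges = cv2.Canny(gray_face, low, high)       # standard Canny as a stand-in
    scores = [edge_support_score(c, edges) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```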
22

Face recognition using different training data.

January 2003 (has links)
Li Zhifeng. / Thesis submitted in: December 2002. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 49-53). / Abstracts in English and Chinese. / Contents:
Abstract --- p.i
Acknowledgments --- p.v
Table of Contents --- p.vi
List of Figures --- p.viii
List of Tables --- p.ix
Chapter 1 Introduction --- p.1
  1.1 Face Recognition Problem and Challenge --- p.1
  1.2 Applications --- p.2
  1.3 Face Recognition Methods --- p.3
  1.4 The Relationship Between the Face Recognition Performance and Different Training Data --- p.5
  1.5 Thesis Overview --- p.6
Chapter 2 PCA-based Recognition Method --- p.7
  2.1 Review --- p.7
  2.2 Formulation --- p.8
    2.2.1 Karhunen-Loeve transform (KLT) --- p.8
    2.2.2 Multilevel Dominant Eigenvector Estimation (MDEE) --- p.12
  2.3 Analysis of The Effect of Training Data on PCA-based Method --- p.13
Chapter 3 LDA-based Recognition Method --- p.17
  3.1 Review --- p.17
  3.2 Formulation --- p.18
    3.2.1 The Pure LDA --- p.18
    3.2.2 LDA-based method --- p.19
  3.3 Analysis of The Effect of Training Data on LDA-based Method --- p.21
Chapter 4 Experiments --- p.23
  4.1 Face Database --- p.23
    4.1.1 AR face database --- p.23
    4.1.2 XM2VTS face database --- p.24
    4.1.3 MMLAB face database --- p.26
    4.1.4 Face Data Preprocessing --- p.27
  4.2 Recognition Formulation --- p.29
  4.3 PCA-based Recognition Using Different Training Data Sets --- p.29
    4.3.1 Experiments on MMLAB Face Database --- p.30
      4.3.1.1 Training Data Sets and Testing Data Sets --- p.30
      4.3.1.2 Face Recognition Performance Using Different Training Data Sets --- p.31
    4.3.2 Experiments on XM2VTS Face Database --- p.33
    4.3.3 Comparison of MDEE and KLT --- p.36
    4.3.4 Summary --- p.38
  4.4 LDA-based Recognition Using Different Training Data Sets --- p.38
    4.4.1 Experiments on AR Face Database --- p.38
      4.4.1.1 The Selection of Training Data and Testing Data --- p.38
      4.4.1.2 LDA-based recognition on AR face database --- p.39
    4.4.2 Experiments on XM2VTS Face Database --- p.40
    4.4.3 Training Data Sets and Testing Data Sets --- p.41
    4.4.4 Experiments on XM2VTS Face Database --- p.42
    4.4.5 Summary --- p.46
Chapter 5 Summary --- p.47
Bibliography --- p.49
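The PCA (KLT) pipeline studied in Chapters 2 and 4 of this thesis follows the classic eigenface recipe: project vectorized faces onto the leading Karhunen-Loeve basis of the training set and classify by nearest neighbour. The sketch below is a generic NumPy version of that recipe, not the thesis's MDEE variant; the variable names and the choice of 50 components are assumptions.

```python
# A minimal eigenface-style PCA (KLT) sketch in NumPy.
import numpy as np

def fit_pca(train, n_components=50):
    """train: (n_samples, n_pixels) matrix of vectorized face images."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data yields the KL basis without forming the
    # full pixel-by-pixel covariance matrix explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]            # basis rows span the face space

def project(faces, mean, basis):
    return (faces - mean) @ basis.T            # coordinates in the eigenspace

def identify(probe, gallery_feats, gallery_labels, mean, basis):
    """Nearest-neighbour match of one probe image in the PCA subspace."""
    q = project(probe[None, :], mean, basis)
    d = np.linalg.norm(gallery_feats - q, axis=1)
    return gallery_labels[int(np.argmin(d))]
```

Changing which images go into `train` while keeping `fit_pca` fixed is the kind of experiment this thesis reports: the learned basis, and hence the recognition rate, depends on the training data.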
23

A unified framework for subspace based face recognition.

January 2003 (has links)
Wang Xiaogang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 88-91). / Abstracts in English and Chinese. / Contents:
Abstract --- p.i
Acknowledgments --- p.v
Table of Contents --- p.vi
List of Figures --- p.viii
List of Tables --- p.x
Chapter 1 Introduction --- p.1
  1.1 Face recognition --- p.1
  1.2 Subspace based face recognition technique --- p.2
  1.3 Unified framework for subspace based face recognition --- p.4
  1.4 Discriminant analysis in dual intrapersonal subspaces --- p.5
  1.5 Face sketch recognition and hallucination --- p.6
  1.6 Organization of this thesis --- p.7
Chapter 2 Review of Subspace Methods --- p.8
  2.1 PCA --- p.8
  2.2 LDA --- p.9
  2.3 Bayesian algorithm --- p.12
Chapter 3 A Unified Framework --- p.14
  3.1 PCA eigenspace --- p.16
  3.2 Intrapersonal and extrapersonal subspaces --- p.17
  3.3 LDA subspace --- p.18
  3.4 Comparison of the three subspaces --- p.19
  3.5 L-ary versus binary classification --- p.22
  3.6 Unified subspace analysis --- p.23
  3.7 Discussion --- p.26
Chapter 4 Experiments on Unified Subspace Analysis --- p.28
  4.1 Experiments on FERET database --- p.28
    4.1.1 PCA Experiment --- p.28
    4.1.2 Bayesian experiment --- p.29
    4.1.3 Bayesian analysis in reduced PCA subspace --- p.30
    4.1.4 Extract discriminant features from intrapersonal subspace --- p.33
    4.1.5 Subspace analysis using different training sets --- p.34
  4.2 Experiments on the AR face database --- p.36
    4.2.1 Experiments on PCA, LDA and Bayes --- p.37
    4.2.2 Evaluate the Bayesian algorithm for different transformation --- p.38
Chapter 5 Discriminant Analysis in Dual Subspaces --- p.41
  5.1 Review of LDA in the null space of the within-class scatter matrix and direct LDA --- p.42
    5.1.1 LDA in the null space of the within-class scatter matrix --- p.42
    5.1.2 Direct LDA --- p.43
    5.1.3 Discussion --- p.44
  5.2 Discriminant analysis in dual intrapersonal subspaces --- p.45
  5.3 Experiment --- p.50
    5.3.1 Experiment on FERET face database --- p.50
    5.3.2 Experiment on the XM2VTS database --- p.53
Chapter 6 Eigentransformation: Subspace Transform --- p.54
  6.1 Face sketch recognition --- p.54
    6.1.1 Eigentransformation --- p.56
    6.1.2 Sketch synthesis --- p.59
    6.1.3 Face sketch recognition --- p.61
    6.1.4 Experiment --- p.63
  6.2 Face hallucination --- p.69
    6.2.1 Multiresolution analysis --- p.71
    6.2.2 Eigentransformation for hallucination --- p.72
    6.2.3 Discussion --- p.75
    6.2.4 Experiment --- p.77
  6.3 Discussion --- p.83
Chapter 7 Conclusion --- p.85
Publication List of This Thesis --- p.87
Bibliography --- p.88
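Section 2.2 above reviews LDA, which the unified framework relates to PCA and the Bayesian method. As a concrete reference point, the sketch below computes a plain Fisher discriminant from between- and within-class scatter; the regularization term, variable names and the PCA-first usage note are my assumptions, not the thesis's unified-subspace formulation.

```python
# A small regularized LDA (Fisher discriminant) sketch with NumPy/SciPy.
import numpy as np
from scipy.linalg import eigh

def fit_lda(X, y, n_components, reg=1e-3):
    """X: (n_samples, n_features) features (e.g. PCA coefficients); y: labels."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                      # within-class scatter
    Sb = np.zeros((d, d))                      # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Generalized eigenproblem Sb w = lambda Sw w; regularize Sw so it stays
    # positive definite when there are few samples per class.
    vals, vecs = eigh(Sb, Sw + reg * np.eye(d))
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order]                      # columns: discriminant directions

# Typical use: reduce raw images with PCA first, run fit_lda on the PCA
# coefficients, then classify projected faces by nearest neighbour.
```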
24

Conditions for Viewpoint Dependent Face Recognition

Schyns, Philippe G., Bulthoff, Heinrich H. 01 August 1993 (has links)
Poggio and Vetter (1992) showed that learning one view of a bilaterally symmetric object could be sufficient for its recognition, provided this view allows the computation of a symmetric, "virtual," view. Faces are roughly bilaterally symmetric objects. Learning a side view, which always yields a distinct symmetric view (the frontal view is approximately its own mirror image and so yields no new virtual view), should therefore allow better generalization performance than learning the frontal view. Two psychophysical experiments tested these predictions. Stimuli were views of shaded 3D models of laser-scanned faces. The first experiment tested whether a particular view of a face was canonical. The second experiment tested which single views of a face give rise to the best generalization performance. The results were compatible with the symmetry hypothesis: learning a side view allowed better generalization performance than learning the frontal view.
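The "virtual view" argument can be made concrete with a toy example: for a bilaterally symmetric object, mirroring a learned 2D view approximates the view from the symmetric pose, so a single side view effectively contributes two training views. The snippet below is only an image-plane illustration of that idea; the paper's experiments used shaded 3D head models, not this augmentation.

```python
# Toy illustration of virtual symmetric views via horizontal mirroring.
import numpy as np

def add_virtual_views(views, labels):
    """views: list of 2D grayscale view images; labels: matching identity ids."""
    mirrored = [np.fliplr(v) for v in views]   # approximate symmetric views
    return list(views) + mirrored, list(labels) * 2
```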
25

Feature extraction in face recognition: on the use of internal and external features

Masip, David January 2005 (has links)
Also: dissertation, Autonomous University of Barcelona, Barcelona, 2005. / Produced on demand
26

Single sample face recognition under complex environment

Pang, Meng 02 September 2019 (has links)
Single sample per person face recognition (SSPP FR), i.e., recognizing a person from only a single face image in the biometric enrolment database for training, has many attractive real-world applications, such as criminal identification, law enforcement, access control and video surveillance. This thesis studies two important problems in SSPP FR: 1) SSPP FR with a standard biometric enrolment database (SSPP-se FR), and 2) SSPP FR with a contaminated biometric enrolment database (SSPP-ce FR). SSPP-ce FR is more challenging than SSPP-se FR, since the enrolment samples are collected in more complex environments and can be contaminated by nuisance variations. In this thesis, we propose a patch-based method called robust heterogeneous discriminative analysis (RHDA) to tackle SSPP-se FR, and two generic learning methods, synergistic generic learning (SGL) and iterative dynamic generic learning (IDGL), to tackle SSPP-ce FR. RHDA addresses the limitations of existing patch-based methods and enhances robustness against complex facial variations for SSPP-se FR in two ways. First, for feature extraction, a new graph-based Fisher-like criterion is presented to extract the hidden discriminant information across two heterogeneous adjacency graphs, while improving the discriminative ability of the patch distribution in the underlying subspaces. Second, a joint majority voting strategy is developed by considering both the patch-to-patch and patch-to-manifold distances, which generates complementary information and increases error tolerance for identification. SGL is proposed to address the SSPP-ce FR problem. Unlike existing generic learning methods based simply on the prototype plus variation (P+V) model, SGL presents a new "learned P + learned V" framework that enables prototype learning and variation dictionary learning to work collaboratively to identify new probe samples. Specifically, SGL learns prototypes for contaminated enrolment samples by preserving the more discriminative parts, and learns a variation dictionary by extracting the less discriminative intra-personal variants from an auxiliary generic set, based on a linear Fisher information-based feature regrouping (FIFR). IDGL is proposed to address the limitations of SGL and thus handle the SSPP-ce FR problem better. IDGL is also based on the "learned P + learned V" framework. However, rather than using the linear FIFR to recover prototypes for contaminated enrolment samples, IDGL constructs a dynamic label-feedback network to update the prototypes iteratively, so that both linear and non-linear variations can be removed. In addition, the supplementary information in the probe set is effectively employed to enhance the correctness of the prototypes in representing the enrolled persons. Furthermore, IDGL introduces a new "sample-specific" corruption strategy to learn a representative variation dictionary. Comprehensive validations and evaluations are conducted on various benchmark face datasets. The computational complexities of the proposed methods are analyzed, and empirical studies on parameter sensitivities are provided. Experimental results demonstrate the superior performance of the proposed methods for both SSPP-se FR and SSPP-ce FR.
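To give a flavour of the patch-based voting idea in RHDA, the sketch below splits each face into a grid of patches, matches every probe patch to the corresponding patch of each single enrolment image, and lets the patches vote. It deliberately omits RHDA's graph-based Fisher-like criterion and patch-to-manifold distance; the grid size and helper names are assumptions.

```python
# A much-simplified patch-based majority-voting sketch for single-sample
# face identification (NumPy only).
import numpy as np
from collections import Counter

def to_patches(img, grid=4):
    """Split a 2D face image into grid x grid non-overlapping patch vectors."""
    h, w = img.shape
    ph, pw = h // grid, w // grid
    return [img[i*ph:(i+1)*ph, j*pw:(j+1)*pw].ravel()
            for i in range(grid) for j in range(grid)]

def vote_identify(probe_img, gallery_imgs, gallery_labels, grid=4):
    probe_patches = to_patches(probe_img, grid)
    gallery_patches = [to_patches(g, grid) for g in gallery_imgs]
    votes = []
    for k, p in enumerate(probe_patches):
        # patch-to-patch distance to the same patch location of every
        # (single) enrolment image
        d = [np.linalg.norm(p - gp[k]) for gp in gallery_patches]
        votes.append(gallery_labels[int(np.argmin(d))])
    return Counter(votes).most_common(1)[0][0]
```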
27

L1-norm local preserving projection and its application

Shen, Chenyang 01 January 2012 (has links)
No description available.
28

Face recognition under weak light sources (微弱光源下之人臉辨識)

李黛雲, Tai-Yun Li Unknown Date (has links)
The main objective of this thesis is to build a face recognition system that can correctly identify a person even in environments with insufficient light, or even in complete darkness. In complete darkness, images can be captured with a camera that has a night-vision (near-infrared) function; however, near-infrared images usually exhibit very uneven brightness, so existing face recognition systems cannot be used directly. We therefore first examine the characteristics of near-infrared images and, based on these characteristics, propose an image formation model. Next, the principle of homomorphic enhancement is used to reduce the non-uniformity caused by the imaging process. Experimental results show that existing holistic face recognition systems cannot handle near-infrared images effectively, so we propose a new region-based face recognition algorithm that gives special consideration to low-light conditions in order to obtain better recognition results. The system implemented in this thesis uses nearest-neighbour classification for identification and achieves a correct recognition rate of 75% on an existing face image dataset of 32 subjects. / The main objective of this thesis is to develop a face recognition system that could recognize human faces even when the surrounding environment is totally dark. The images of objects in total darkness can be captured using a relatively low-cost camcorder with the NightShot® function. By overcoming the illumination factor, a face recognition system would continue to function independently of the surrounding lighting conditions. However, the acquired images exhibit non-uniformity due to irregular illumination, and current face recognition systems cannot be applied directly. In this thesis, we first investigate the characteristics of NIR images and propose an image formation model. A homomorphic processing technique built upon the image model is then developed to reduce the artifacts in the captured images. After that, we conduct experiments to show that existing holistic face recognition systems perform poorly with NIR images. Finally, a more robust feature-based method is proposed to achieve a better recognition rate under low illumination. A nearest neighbor classifier using the Euclidean distance is employed to recognize familiar faces from a database. The feature-based recognition method we developed achieves a recognition rate of 75% on a database of 32 people, with one sample image for each subject.
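The homomorphic processing step described above can be sketched compactly: take the logarithm of the image, treat its low-frequency component as the (uneven) illumination, and suppress it while keeping the high-frequency reflectance detail. The kernel size and gain below are illustrative assumptions; the thesis derives its processing from its own NIR image formation model.

```python
# A minimal homomorphic-filtering sketch for flattening uneven NIR illumination,
# assuming OpenCV and NumPy are available.
import cv2
import numpy as np

def homomorphic_correct(gray, sigma=30, gain=1.5):
    """gray: uint8 NIR face image; returns an illumination-flattened uint8 image."""
    img = gray.astype(np.float32) + 1.0              # avoid log(0)
    log_img = np.log(img)
    # A heavy Gaussian blur of the log image approximates the slowly
    # varying illumination component.
    illum = cv2.GaussianBlur(log_img, (0, 0), sigma)
    # Boost the reflectance (detail) part, keep a constant illumination level.
    log_out = gain * (log_img - illum) + float(illum.mean())
    out = np.exp(log_out)
    out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX)
    return out.astype(np.uint8)
```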
29

Remote surveillance and face tracking with mobile phones (smart eyes).

Da Silva, Sandro Cahanda Marinho January 2005 (has links)
This thesis addresses the analysis, evaluation and simulation of low-complexity face detection and tracking algorithms that could be used on mobile phones. Network access control using face recognition increases user-friendliness in human-computer interaction. To realize a real-time system on handheld devices with low computing power, low-complexity algorithms for face detection and face tracking are implemented. Skin-color detection and face matching have low implementation complexity, making them suitable for authentication of cellular network services. Novel approaches for reducing the complexity of these algorithms and for fast implementation are introduced in this thesis. This includes a fast algorithm for face detection in video sequences using a skin-color model in the HSV (Hue-Saturation-Value) color space, combined with a Gaussian model of the H and S statistics and adaptive thresholds. These algorithms permit segmentation and detection of multiple faces in thumbnail images. Furthermore, we evaluate and compare our results with those of a method implemented in the chromatic YCbCr color space. We also test our data on a face detection method using a Convolutional Neural Network architecture, to study the suitability of approaches other than skin color as the basic feature for face detection. Finally, face tracking is performed in 2D color video streams using HSV as the histogram color space. The program is used to compute 3D trajectories for a remote surveillance system.
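As a rough companion to the skin-colour stage described above, the sketch below fits a Gaussian model to the H and S channels of known skin pixels and thresholds the Mahalanobis distance to produce a skin mask; the training interface, fixed threshold and morphology step are assumptions rather than the thesis's exact adaptive-threshold scheme.

```python
# A compact HSV skin-segmentation sketch with a Gaussian model of H and S,
# assuming OpenCV and NumPy.
import cv2
import numpy as np

def fit_skin_model(skin_pixels_bgr):
    """skin_pixels_bgr: (N, 3) uint8 samples taken from known skin regions."""
    hsv = cv2.cvtColor(skin_pixels_bgr.reshape(-1, 1, 3), cv2.COLOR_BGR2HSV)
    hs = hsv.reshape(-1, 3)[:, :2].astype(np.float64)    # keep H and S only
    mean = hs.mean(axis=0)
    cov = np.cov(hs, rowvar=False) + 1e-6 * np.eye(2)    # avoid singularity
    return mean, np.linalg.inv(cov)

def skin_mask(frame_bgr, mean, inv_cov, thresh=9.0):
    """Mark pixels whose Mahalanobis distance to the skin model is small."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hs = hsv[:, :, :2].reshape(-1, 2).astype(np.float64) - mean
    d2 = np.einsum('ij,jk,ik->i', hs, inv_cov, hs)       # squared distances
    mask = (d2 < thresh).reshape(frame_bgr.shape[:2]).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) # clean small blobs
```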
30

Automatic segmentation and registration techniques for 3D face recognition. / Automatic segmentation and registration techniques for three-dimensional face recognition / CUHK electronic theses & dissertations collection

January 2008 (has links)
A 3D range image acquired by 3D sensing can explicitly represent a three-dimensional object's shape regardless of viewpoint and lighting variations. This technology has great potential to eventually resolve the face recognition problem. An automatic 3D face recognition system consists of three stages: facial region segmentation, registration and recognition. The success of each stage influences the system's ultimate decision. Lately, research efforts have mainly been devoted to the final recognition stage in 3D face recognition research. In this thesis, our study focuses on segmentation and registration techniques, with the aim of providing a more solid foundation for future 3D face recognition research. / We first propose an automatic 3D face segmentation method. This method is based on a deep understanding of the 3D face image. Proportions of the facial and nose regions are taken from anthropometrics and used to locate these regions. We evaluate this segmentation method on the FRGC dataset and obtain a success rate as high as 98.87% on nose tip detection. Compared with results reported by other researchers in the literature, our method yields the highest score. / We then propose a fully automatic registration method that can handle facial expressions with high accuracy and robustness for 3D face image alignment. In our method, the nose region, which is relatively more rigid than other facial regions in the anatomical sense, is automatically located and analyzed for computing the precise location of a symmetry plane. Extensive experiments have been conducted using the FRGC (V1.0 and V2.0) benchmark 3D face dataset to evaluate the accuracy and robustness of our registration method. First, we compare its results with those of two other registration methods. One of these methods employs manually marked points on visualized face data and the other is based on a symmetry plane analysis obtained from the whole face region. Second, we combine the registration method with other face recognition modules and apply them in both face identification and verification scenarios. Experimental results show that our approach performs better than the other two methods. For example, a 97.55% Rank-1 identification rate and a 2.25% EER score are obtained by using our method for registration and the PCA method for matching on the FRGC V1.0 dataset. These are the highest scores ever reported using the PCA method on similar datasets. / Tang, Xinmin. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3616. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 109-117). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
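The nose-tip detection result quoted above (98.87% on FRGC) relies on anthropometric proportions; as a deliberately crude point of comparison, the sketch below simply picks the valid range-image point closest to the sensor inside a central window, a common heuristic for roughly frontal scans. The window size and the "smaller depth = closer" convention are assumptions.

```python
# A crude nose-tip heuristic for a roughly frontal range image (NumPy only).
import numpy as np

def crude_nose_tip(range_img, valid_mask, border=0.2):
    """range_img: (H, W) depth values (smaller = closer); valid_mask: bool array."""
    h, w = range_img.shape
    window = np.zeros_like(valid_mask)
    window[int(border * h):int((1 - border) * h),
           int(border * w):int((1 - border) * w)] = True
    candidates = valid_mask & window
    depths = np.where(candidates, range_img, np.inf)
    return np.unravel_index(int(np.argmin(depths)), depths.shape)  # (row, col)
```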
