  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Wavelet and manifold learning and their applications

Cui, Limin 01 January 2010 (has links)
No description available.
92

Distortion Robust Biometric Recognition

January 2018 (has links)
abstract: Information forensics and security have come a long way in just a few years thanks to the recent advances in biometric recognition. The main challenge remains a proper design of a biometric modality that can be resilient to unconstrained conditions, such as quality distortions. This work presents a solution to face and ear recognition under unconstrained visual variations, with a main focus on recognition in the presence of blur, occlusion and additive noise distortions. First, the dissertation addresses the problem of scene variations in the presence of blur, occlusion and additive noise distortions resulting from capture, processing and transmission. Despite their excellent performance, 'deep' methods are susceptible to visual distortions, which significantly reduce their performance. Sparse representations, on the other hand, have shown strong potential in handling problems such as occlusion and corruption. In this work, an augmented SRC (ASRC) framework is presented to improve the performance of the Sparse Representation Classifier (SRC) in the presence of blur, additive noise and block occlusion, while preserving its robustness to scene-dependent variations. Different feature types are considered in the performance evaluation, including raw image pixels, HoG and deep-learning VGG-Face features. The proposed ASRC framework is shown to outperform the conventional SRC in recognition accuracy, as well as other existing sparse-based and blur-invariant methods, at medium to high levels of distortion, particularly when used with discriminative features. To assess how well features improve both the sparsity of the representation and the classification accuracy, a feature sparse coding and classification index (FSCCI) is proposed and used for feature ranking and selection within both the SRC and ASRC frameworks.
The second part of the dissertation presents a method for unconstrained ear recognition using deep learning features. The unconstrained ear recognition is performed using transfer learning with deep neural networks (DNNs) as a feature extractor followed by a shallow classifier. Data augmentation is used to improve the recognition performance by augmenting the training dataset with image transformations. The recognition performance of the feature extraction models is compared with an ensemble of fine-tuned networks. The results show that, in the case where long training time is not desirable or a large amount of data is not available, the features from pre-trained DNNs can be used with a shallow classifier to give a comparable recognition accuracy to the fine-tuned networks. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
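The sparse-representation classification idea behind SRC (and its augmented ASRC variant) can be sketched briefly: a probe is coded as a sparse linear combination of the training images, and it is assigned to the class whose atoms best reconstruct it. A minimal sketch follows, using ISTA as a generic l1 solver; the dissertation's actual ASRC augmentation, features, and solver are not reproduced here, and the dictionary below is a toy example:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

def src_classify(D, labels, probe, lam=0.05):
    # D: columns are l2-normalized training images; labels: class id per column.
    # Classify by smallest class-wise reconstruction residual.
    x = ista(D, probe, lam)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)        # keep only class-c coefficients
        residuals[c] = np.linalg.norm(probe - D @ xc)
    return min(residuals, key=residuals.get)
```

The probe is assigned to the class whose coefficients, used alone, leave the smallest residual; ASRC augments this basic scheme to handle blur and block occlusion.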
93

Partial EBGM and face synthesis methods for non-frontal recognition. / 基於局部彈性束圖匹配及人臉整合的非正面人臉識別技術 / Ji yu ju bu tan xing shu tu pi pei ji ren lian zheng he de fei zheng mian ren lian shi bie ji shu

January 2009 (has links)
Cheung, Kin Wang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 76-82). / Abstract also in Chinese. / Chapter 1. --- INTRODUCTION --- p.1 / Chapter 1.1. --- Background --- p.1 / Chapter 1.1.1. --- Introduction to Biometrics --- p.1 / Chapter 1.1.2. --- Face Recognition in General --- p.2 / Chapter 1.1.3. --- A Typical Face Recognition System Architecture --- p.4 / Chapter 1.1.4. --- Face Recognition in Surveillance Cameras --- p.6 / Chapter 1.1.5. --- Face recognition under Pose Variation --- p.9 / Chapter 1.2. --- Motivation and Objectives --- p.11 / Chapter 1.3. --- Related Works --- p.13 / Chapter 1.3.1. --- Overview of Pose-invariant Face Recognition --- p.13 / Chapter 1.3.2. --- Standard Face Recognition Setting --- p.14 / Chapter 1.3.3. --- Multi-Probe Setting --- p.19 / Chapter 1.3.4. --- Multi-Gallery Setting --- p.21 / Chapter 1.3.5. --- Non-frontal Face Databases --- p.23 / Chapter 1.3.6. --- Evaluation Metrics --- p.26 / Chapter 1.3.7. --- Summary of Non-frontal Face Recognition Settings --- p.27 / Chapter 1.4. --- Proposed Methods for Non-frontal Face Recognition --- p.28 / Chapter 1.5. --- Thesis Organization --- p.30 / Chapter 2. --- PARTIAL ELASTIC BUNCH GRAPH MATCHING --- p.31 / Chapter 2.1. --- Introduction --- p.31 / Chapter 2.2. --- EBGM for Non-frontal Face Recognition --- p.31 / Chapter 2.2.1. --- Overview of Baseline EBGM Algorithm --- p.31 / Chapter 2.2.2. --- Modified EBGM for Non-frontal Face Matching --- p.33 / Chapter 2.3. --- Experiments --- p.35 / Chapter 2.3.1. --- Experimental Setup --- p.35 / Chapter 2.3.2. --- Experimental Results --- p.37 / Chapter 2.4. --- Discussions --- p.40 / Chapter 3. --- FACE RECOGNITION BY FRONTAL VIEW SYNTHESIS WITH CALIBRATED STEREO CAMERAS --- p.43 / Chapter 3.1. --- Introduction --- p.43 / Chapter 3.2. --- Proposed Method --- p.44 / Chapter 3.2.1. --- Image Rectification --- p.45 / Chapter 3.2.2. --- Face Detection --- p.49 / Chapter 3.2.3. --- Head Pose Estimation --- p.51 / Chapter 3.2.4. --- Virtual View Generation --- p.52 / Chapter 3.2.5. --- Feature Localization --- p.54 / Chapter 3.2.6. --- Face Morphing --- p.56 / Chapter 3.3. --- Experiments --- p.58 / Chapter 3.3.1. --- Data Collection --- p.58 / Chapter 3.3.2. --- Synthesized Results --- p.59 / Chapter 3.3.3. --- Experiment Setup --- p.60 / Chapter 3.3.4. --- Experiment Results on FERET database --- p.61 / Chapter 3.3.5. --- Experiment Results on CAS-PEAL-R1 database --- p.62 / Chapter 3.4. --- Discussions --- p.64 / Chapter 3.5. --- Summary --- p.66 / Chapter 4. --- "EXPERIMENTS, RESULTS AND OBSERVATIONS" --- p.67 / Chapter 4.1. --- Experiment Setup --- p.67 / Chapter 4.2. --- Experiment Results --- p.69 / Chapter 4.3. --- Discussions --- p.70 / Chapter 5. --- CONCLUSIONS --- p.74 / Chapter 6. --- BIBLIOGRAPHY --- p.76
94

Learning-based descriptor for 2-D face recognition.

January 2010 (has links)
Cao, Zhimin. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 30-34). / Abstracts in English and Chinese. / Chapter 1 --- Introduction and related work --- p.1 / Chapter 2 --- Learning-based descriptor for face recognition --- p.7 / Chapter 2.1 --- Overview of framework --- p.7 / Chapter 2.2 --- Learning-based descriptor extraction --- p.9 / Chapter 2.2.1 --- Sampling and normalization --- p.9 / Chapter 2.2.2 --- Learning-based encoding and histogram representation --- p.11 / Chapter 2.2.3 --- PCA dimension reduction --- p.12 / Chapter 2.2.4 --- Multiple LE descriptors --- p.14 / Chapter 2.3 --- Pose-adaptive matching --- p.16 / Chapter 2.3.1 --- Component-level face alignment --- p.17 / Chapter 2.3.2 --- Pose-adaptive matching --- p.17 / Chapter 2.3.3 --- Evaluations of pose-adaptive matching --- p.19 / Chapter 3 --- Experiment --- p.21 / Chapter 3.1 --- Results on the LFW benchmark --- p.21 / Chapter 3.2 --- Results on Multi-PIE --- p.24 / Chapter 4 --- Conclusion and future work --- p.27 / Chapter 4.1 --- Conclusion --- p.27 / Chapter 4.2 --- Future work --- p.28 / Bibliography --- p.30
95

An adaptive near-infrared illuminator for outdoor face recognition. / 用於戶外人臉辨識的近紅外線適應性照明 / Yong yu hu wai ren lian bian shi de jin hong wai xian shi ying xing zhao ming

January 2010 (has links)
Cheung, Siu Ming. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 81-86). / Abstracts in English and Chinese. / Chapter 1. --- INTRODUCTION --- p.1 / Chapter 1.1. --- Introduction to Face Recognition --- p.2 / Chapter 1.1.1. --- Modes of Face Recognition --- p.2 / Chapter 1.1.2. --- Typical Face Recognition System --- p.3 / Chapter 1.1.3. --- Face Recognition Algorithms --- p.4 / Chapter 1.1.4. --- The State of the Art --- p.5 / Chapter 1.2. --- Outdoor Face Recognition --- p.6 / Chapter 1.2.1. --- The Outdoor Environment --- p.6 / Chapter 1.2.2. --- The Illumination Variation Problem in the Outdoors --- p.8 / Chapter 1.3. --- Related works --- p.10 / Chapter 1.3.1. --- Face Appearance Modeling --- p.10 / Chapter 1.3.2. --- Illumination Invariant Features and Representations --- p.13 / Chapter 1.3.3. --- Active Near-Infrared Illumination --- p.14 / Chapter 1.4. --- Proposed method --- p.17 / Chapter 1.5. --- Design Requirements --- p.18 / Chapter 2. --- COMPENSATION METHODOLOGY FOR OUTDOOR FACE RECOGNITION --- p.20 / Chapter 2.1. --- Illumination from the Sun --- p.21 / Chapter 2.2. --- Effect of Sunlight Illumination --- p.22 / Chapter 2.3. --- A Compensation Model --- p.24 / Chapter 2.4. --- A Face Lighting Simulator --- p.28 / Chapter 2.4.1. --- Face 3D Models --- p.29 / Chapter 2.4.2. --- Light Sources --- p.30 / Chapter 2.4.3. --- Synthesis of Face Image --- p.31 / Chapter 2.5. --- Simulation Results --- p.32 / Chapter 2.5.1. --- Optimum Compensation Angles --- p.33 / Chapter 2.5.2. --- Effect of Illuminator Intensity --- p.36 / Chapter 2.5.3. --- Effect of Illuminator Elevation Angle --- p.38 / Chapter 2.5.4. --- Effect of Sunlight Elevation Angle --- p.41 / Chapter 2.5.5. --- Illumination from Both Sides --- p.42 / Chapter 2.6. --- Summary --- p.43 / Chapter 3. --- AN ADAPTIVE ILLUMINATOR --- p.45 / Chapter 3.1. --- Hardware Design --- p.45 / Chapter 3.1.1. --- Near-infrared Camera --- p.45 / Chapter 3.1.2. --- Illumination Panels --- p.48 / Chapter 3.1.3. --- Illuminator Controller --- p.56 / Chapter 3.1.4. --- Illumination Characteristics --- p.59 / Chapter 3.2. --- Algorithms --- p.62 / Chapter 3.2.1. --- Light Balance Estimation --- p.63 / Chapter 4. --- EXPERIMENTS AND RESULTS --- p.67 / Chapter 4.1. --- Effect of compensation angle on face similarity --- p.68 / Chapter 4.2. --- Effect of illumination compensation under different sunlight conditions --- p.71 / Chapter 4.3. --- Impact on recognition performance --- p.72 / Chapter 5. --- CONCLUSIONS --- p.76 / Chapter 6. --- BIBLIOGRAPHY --- p.81
96

Video-based face alignment using efficient sparse and low-rank approach.

January 2011 (has links)
Wu, King Keung. / "August 2011." / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 119-126). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview of Face Alignment Algorithms --- p.1 / Chapter 1.1.1 --- Objectives --- p.1 / Chapter 1.1.2 --- Motivation: Photo-realistic Talking Head --- p.2 / Chapter 1.1.3 --- Existing methods --- p.5 / Chapter 1.2 --- Contributions --- p.8 / Chapter 1.3 --- Outline of the Thesis --- p.11 / Chapter 2 --- Sparse Signal Representation --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- Problem Formulation --- p.15 / Chapter 2.2.1 --- l0-norm minimization --- p.15 / Chapter 2.2.2 --- Uniqueness --- p.16 / Chapter 2.3 --- Basis Pursuit --- p.18 / Chapter 2.3.1 --- From l0-norm to l1-norm --- p.19 / Chapter 2.3.2 --- l0-l1 Equivalence --- p.20 / Chapter 2.4 --- l1-Regularized Least Squares --- p.21 / Chapter 2.4.1 --- Noisy case --- p.22 / Chapter 2.4.2 --- Over-determined systems of linear equations --- p.22 / Chapter 2.5 --- Summary --- p.24 / Chapter 3 --- Sparse Corruptions and Principal Component Pursuit --- p.25 / Chapter 3.1 --- Introduction --- p.25 / Chapter 3.2 --- Sparse Corruptions --- p.26 / Chapter 3.2.1 --- Sparse Corruptions and l1-Error --- p.26 / Chapter 3.2.2 --- l1-Error and Least Absolute Deviations --- p.28 / Chapter 3.2.3 --- l1-Regularized l1-Error --- p.29 / Chapter 3.3 --- Robust Principal Component Analysis (RPCA) and Principal Component Pursuit --- p.31 / Chapter 3.3.1 --- Principal Component Analysis (PCA) and RPCA --- p.31 / Chapter 3.3.2 --- Principal Component Pursuit --- p.33 / Chapter 3.4 --- Experiments of Sparse and Low-rank Approach on Surveillance Video --- p.34 / Chapter 3.4.1 --- Least Squares --- p.35 / Chapter 3.4.2 --- l1-Regularized Least Squares --- p.35 / Chapter 3.4.3 --- l1-Error --- p.36 / Chapter 3.4.4 --- l1-Regularized l1-Error --- p.36 / Chapter 3.5 --- Summary --- p.37 / Chapter 4 --- Split Bregman Algorithm for l1-Problem --- p.45 / Chapter 4.1 --- Introduction --- p.45 / Chapter 4.2 --- Bregman Distance --- p.46 / Chapter 4.3 --- Bregman Iteration for Constrained Optimization --- p.47 / Chapter 4.4 --- Split Bregman Iteration for l1-Regularized Problem --- p.50 / Chapter 4.4.1 --- Formulation --- p.51 / Chapter 4.4.2 --- Advantages of Split Bregman Iteration --- p.52 / Chapter 4.5 --- Fast l1 Algorithms --- p.54 / Chapter 4.5.1 --- l1-Regularized Least Squares --- p.54 / Chapter 4.5.2 --- l1-Error --- p.55 / Chapter 4.5.3 --- l1-Regularized l1-Error --- p.57 / Chapter 4.6 --- Summary --- p.58 / Chapter 5 --- Face Alignment Using Sparse and Low-rank Decomposition --- p.61 / Chapter 5.1 --- Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images (RASL) --- p.61 / Chapter 5.2 --- Problem Formulation --- p.62 / Chapter 5.2.1 --- Theory --- p.62 / Chapter 5.2.2 --- Algorithm --- p.64 / Chapter 5.3 --- Direct Extension of RASL: Multi-RASL --- p.66 / Chapter 5.3.1 --- Formulation --- p.66 / Chapter 5.3.2 --- Algorithm --- p.67 / Chapter 5.4 --- Matlab Implementation Details --- p.68 / Chapter 5.4.1 --- Preprocessing --- p.70 / Chapter 5.4.2 --- Transformation --- p.73 / Chapter 5.4.3 --- Jacobian Ji --- p.74 / Chapter 5.5 --- Experiments --- p.75 / Chapter 5.5.1 --- Qualitative Evaluations Using Small Dataset --- p.76 / Chapter 5.5.2 --- Large Dataset Test --- p.81 / Chapter 5.5.3 --- Conclusion --- p.85 / Chapter 5.6 --- Sensitivity analysis on selection of references --- p.87 / Chapter 5.6.1 --- References from consecutive frames --- p.88 / Chapter 5.6.2 --- References from RASL-aligned images --- p.91 / Chapter 5.7 --- Summary --- p.92 / Chapter 6 --- Extension of RASL for video: One-by-One Approach --- p.96 / Chapter 6.1 --- One-by-One Approach --- p.96 / Chapter 6.1.1 --- Motivation --- p.97 / Chapter 6.1.2 --- Algorithm --- p.97 / Chapter 6.2 --- Choices of Optimization --- p.101 / Chapter 6.2.1 --- l1-Regularized Least Squares --- p.101 / Chapter 6.2.2 --- l1-Error --- p.102 / Chapter 6.2.3 --- l1-Regularized l1-Error --- p.103 / Chapter 6.3 --- Experiments --- p.104 / Chapter 6.3.1 --- Evaluation for Different l1 Algorithms --- p.104 / Chapter 6.3.2 --- Conclusion --- p.108 / Chapter 6.4 --- Exploiting Property of Video --- p.109 / Chapter 6.5 --- Summary --- p.110 / Chapter 7 --- Conclusion and Future Work --- p.112 / Chapter A --- Appendix --- p.117 / Bibliography --- p.119
97

Relative Contributions of Internal and External Features to Face Recognition

Jarudi, Izzat N., Sinha, Pawan 01 March 2003 (has links)
The central challenge in face recognition lies in understanding the role different facial features play in our judgments of identity. Notable in this regard are the relative contributions of the internal (eyes, nose and mouth) and external (hair and jaw-line) features. Past studies that have investigated this issue have typically used high-resolution images or good-quality line drawings as facial stimuli. The results obtained are therefore most relevant for understanding the identification of faces at close range. However, given that real-world viewing conditions are rarely optimal, it is also important to know how image degradations, such as loss of resolution caused by large viewing distances, influence our ability to use internal and external features. Here, we report experiments designed to address this issue. Our data characterize how the relative contributions of internal and external features change as a function of image resolution. While we replicated results of previous studies that have shown internal features of familiar faces to be more useful for recognition than external features at high resolution, we found that the two feature sets reverse in importance as resolution decreases. These results suggest that the visual system uses a highly non-linear cue-fusion strategy in combining internal and external features along the dimension of image resolution and that the configural cues that relate the two feature sets play an important role in judgments of facial identity.
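The resolution manipulation described in this abstract (loss of detail with increasing viewing distance) is commonly simulated by block-averaging an image and re-expanding it to its original size. A rough sketch follows; the study's actual stimuli and degradation procedure are not specified here, so this is only illustrative:

```python
import numpy as np

def reduce_resolution(img, factor):
    # Block-average downsampling followed by nearest-neighbour upsampling,
    # a crude stand-in for the blur introduced by large viewing distances.
    h, w = img.shape
    hc, wc = h // factor * factor, w // factor * factor   # crop to a multiple of factor
    small = img[:hc, :wc].reshape(hc // factor, factor,
                                  wc // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
```

Running recognition experiments on the output at several `factor` values traces out performance as a function of effective resolution, the dimension along which the internal/external feature reversal was observed.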
98

Human Identification Based on Three-Dimensional Ear and Face Models

Cadavid, Steven 05 May 2011 (has links)
We propose three biometric systems for performing 1) Multi-modal Three-Dimensional (3D) ear + Two-Dimensional (2D) face recognition, 2) 3D face recognition, and 3) hybrid 3D ear recognition combining local and holistic features. For the 3D ear component of the multi-modal system, uncalibrated video sequences are utilized to recover the 3D ear structure of each subject within a database. For a given subject, a series of frames is extracted from a video sequence and the Region-of-Interest (ROI) in each frame is independently reconstructed in 3D using Shape from Shading (SFS). A fidelity measure is then employed to determine the model that most accurately represents the 3D structure of the subject’s ear. Shape matching between a probe and gallery ear model is performed using the Iterative Closest Point (ICP) algorithm. For the 2D face component, a set of facial landmarks is extracted from frontal facial images using the Active Shape Model (ASM) technique. Then, the responses of the facial images to a series of Gabor filters at the locations of the facial landmarks are calculated. The Gabor features are stored in the database as the face model for recognition. Match-score level fusion is employed to combine the match scores obtained from both the ear and face modalities. The aim of the proposed system is to demonstrate the superior performance that can be achieved by combining the 3D ear and 2D face modalities over either modality employed independently. For the 3D face recognition system, we employ an Adaboost algorithm to build a classifier based on geodesic distance features. Firstly, a generic face model is finely conformed to each face model contained within a 3D face dataset. Secondly, the geodesic distances between anatomical point pairs are computed across each conformed generic model using the Fast Marching Method.
The Adaboost algorithm then generates a strong classifier based on a collection of geodesic distances that are most discriminative for face recognition. The identification and verification performances of three Adaboost algorithms, namely, the original Adaboost algorithm proposed by Freund and Schapire, and two variants, the Gentle and Modest Adaboost algorithms, are compared. For the hybrid 3D ear recognition system, we propose a method to combine local and holistic ear surface features in a computationally efficient manner. The system is comprised of four primary components, namely, 1) ear image segmentation, 2) local feature extraction and matching, 3) holistic feature extraction and matching, and 4) a fusion framework combining local and holistic features at the match score level. For the segmentation component, we employ our method proposed in [111] to localize a rectangular region containing the ear. For the local feature extraction and representation component, we extend the Histogram of Categorized Shapes (HCS) feature descriptor, proposed in [111], to an object-centered 3D shape descriptor, termed Surface Patch Histogram of Indexed Shapes (SPHIS), for surface patch representation and matching. For the holistic matching component, we introduce a voxelization scheme for holistic ear representation from which an efficient, element-wise comparison of gallery-probe model pairs can be made. The match scores obtained from both the local and holistic matching components are fused to generate the final match scores. Experimental results conducted on the University of Notre Dame (UND) collection J2 dataset demonstrate that the proposed approach outperforms state-of-the-art 3D ear biometric systems in both accuracy and efficiency.
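Match-score level fusion, used here both to combine the ear and face modalities and to combine local and holistic ear features, typically normalizes each source's scores to a common range and takes a weighted sum. A minimal sketch follows; the min-max normalization and the equal default weighting are assumptions for illustration, not the dissertation's reported settings:

```python
import numpy as np

def min_max_norm(scores):
    # Map a set of match scores to [0, 1]; constant inputs map to all zeros.
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse(ear_scores, face_scores, w_ear=0.5):
    # Weighted-sum fusion of per-gallery-subject match scores
    # (higher score = better match). The probe is assigned to the
    # gallery subject with the highest fused score.
    return w_ear * min_max_norm(ear_scores) + (1 - w_ear) * min_max_norm(face_scores)
```

Because the two modalities' raw scores live on different scales (ICP registration error versus Gabor-feature similarity), normalizing before summing keeps one modality from dominating the other.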
99

An integration framework of feature selection and extraction for appearance-based recognition

Li, Qi. January 2006 (has links)
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Chandra Kambhamettu, Dept. of Computer & Information Sciences. Includes bibliographical references.
100

Self-organizing features for regularized image standardization

Gökçay, Didem, January 2001 (has links) (PDF)
Thesis (Ph. D.)--University of Florida, 2001. / Title from first page of PDF file. Document formatted into pages; contains ix, 117 p.; also contains graphics. Vita. Includes bibliographical references (p. 109-116).
