481.
Real-time optical intensity correlation using photorefractive BSO. Wang, Zhao Qi, January 1995.
Real-time optical intensity correlation using a photorefractive BSO crystal and a liquid crystal television is implemented. The underlying physical basis is considered, specific techniques to improve the operation are proposed, and several optical pattern recognition tasks are achieved. Photorefractive BSO is used as the holographic recording medium in the real-time intensity correlator. To improve the dynamic holographic recording, a moving grating technique is adopted. The nonlinear effects of moving gratings at large fringe modulation are experimentally investigated and compared with numerical predictions. Optical bias is adopted to overcome the large drop in the optimum fringe velocity that occurs with moving gratings. The effects of optical bias on the optimum fringe velocity and on the diffraction efficiency are studied. To overcome the inherent drawback of the low discrimination of intensity correlation in optical pattern recognition, real-time edge-enhanced intensity correlation is achieved by means of nonlinear holographic recording in BSO. Real-time colour object recognition is achieved by using a commercially available, inexpensive colour liquid crystal television in the intensity correlator. Multi-class object recognition is achieved with a synthetic discriminant function filter displayed on the Epson liquid crystal display in the real-time intensity correlator. The phase and intensity modulation properties of the Epson liquid crystal display are studied. A further research topic is proposed: using the Epson liquid crystal display to realize a newly designed spatial filter, the quantized amplitude-compensated matched filter. The performance merits of this filter are investigated by means of computer simulations.
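The core operation of such a correlator, cross-correlating a scene with a reference pattern, can be sketched digitally. The fragment below is an illustrative simulation only: the thesis performs the correlation optically in BSO, while here an FFT-based cross-correlation and a simple gradient-magnitude edge enhancement stand in for the holographic recording and the nonlinear edge-enhancing regime.

```python
import numpy as np

def intensity_correlate(scene, target):
    """Digital analogue of an optical intensity correlator:
    circular cross-correlation computed via the Fourier transform."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(target, s=scene.shape)
    # correlation theorem: corr = IFFT( S * conj(T) )
    return np.fft.ifft2(S * np.conj(T)).real

def edge_enhance(img):
    """Crude edge enhancement (gradient magnitude), mimicking the
    discrimination gain of edge-enhanced correlation."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```

The correlation peak location indicates where the reference pattern sits in the scene, which is how a matched-filter correlator signals recognition.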
482.
Construction of a 3D Object Recognition and Manipulation Database from Grasp Demonstrations. Kent, David E., 09 April 2014.
Object recognition and manipulation are critical for enabling robots to operate within a household environment. There are many grasp planners that can estimate grasps based on object shape, but these approaches often perform poorly because they miss key information about non-visual object characteristics, such as weight distribution, fragility of materials, and usability characteristics. Object model databases can account for this information, but existing methods for constructing 3D object recognition databases are time and resource intensive, often requiring specialized equipment, and are therefore difficult to apply to robots in the field. We present an easy-to-use system for constructing object models for 3D object recognition and manipulation made possible by advances in web robotics. The database consists of point clouds generated using a novel iterative point cloud registration algorithm, which includes the encoding of manipulation data and usability characteristics. The system requires no additional equipment other than the robot itself, and non-expert users can demonstrate grasps through an intuitive web interface with virtually no training required. We validate the system with data collected from both a crowdsourcing user study and a set of grasps demonstrated by an expert user. We show that the crowdsourced grasps can produce successful autonomous grasps, and furthermore the demonstration approach outperforms purely vision-based grasp planning approaches for a wide variety of object classes.
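The registration step can be illustrated with a generic iterative-closest-point (ICP) loop: repeatedly match each point to its nearest neighbour in the reference cloud, solve for the best rigid transform, and re-apply. This is a minimal textbook sketch under simplifying assumptions (brute-force matching, no outlier rejection), not the thesis's novel iterative registration algorithm.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for the rotation R and translation t that
    best map src points onto dst in the least-squares sense."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])  # reflection guard
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal ICP: nearest-neighbour correspondences, rigid solve, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

For well-separated points and a modest initial misalignment, the loop converges in a few iterations; real systems add subsampling and robust rejection on top of this core.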
483.
Face recognition committee machine: methodology, experiments, and a system application. January 2003.
Tang Ho-Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 85-92). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Face Recognition --- p.2 / Chapter 1.3 --- Contributions --- p.4 / Chapter 1.4 --- Organization of this Thesis --- p.6 / Chapter 2 --- Literature Review --- p.8 / Chapter 2.1 --- Committee Machine --- p.8 / Chapter 2.1.1 --- Static Structure --- p.9 / Chapter 2.1.2 --- Dynamic Structure --- p.10 / Chapter 2.2 --- Face Recognition Algorithms Overview --- p.11 / Chapter 2.2.1 --- Eigenface --- p.12 / Chapter 2.2.2 --- Fisherface --- p.17 / Chapter 2.2.3 --- Elastic Graph Matching --- p.19 / Chapter 2.2.4 --- Support Vector Machines --- p.23 / Chapter 2.2.5 --- Neural Networks --- p.25 / Chapter 2.3 --- Commercial System and Applications --- p.27 / Chapter 2.3.1 --- FaceIT --- p.28 / Chapter 2.3.2 --- ZN-Face --- p.28 / Chapter 2.3.3 --- TrueFace --- p.29 / Chapter 2.3.4 --- Viisage --- p.30 / Chapter 3 --- Static Structure --- p.31 / Chapter 3.1 --- Introduction --- p.31 / Chapter 3.2 --- Architecture --- p.32 / Chapter 3.3 --- Result and Confidence --- p.33 / Chapter 3.3.1 --- "Eigenface, Fisherface, EGM" --- p.34 / Chapter 3.3.2 --- SVM --- p.35 / Chapter 3.3.3 --- Neural Networks --- p.36 / Chapter 3.4 --- Weight --- p.37 / Chapter 3.5 --- Voting Machine --- p.38 / Chapter 4 --- Dynamic Structure --- p.40 / Chapter 4.1 --- Introduction --- p.40 / Chapter 4.2 --- Architecture --- p.41 / Chapter 4.3 --- Gating Network --- p.42 / Chapter 4.4 --- Feedback Mechanism --- p.44 / Chapter 5 --- Face Recognition System --- p.46 / Chapter 5.1 --- Introduction --- p.46 / Chapter 5.2 --- System Architecture --- p.47 / Chapter 5.2.1 --- Face Detection Module --- p.48 / Chapter 5.2.2 --- Face Recognition Module --- p.49 / Chapter 5.3 --- Face Recognition Process --- p.50 / Chapter 
5.3.1 --- Enrollment --- p.51 / Chapter 5.3.2 --- Recognition --- p.52 / Chapter 5.4 --- Distributed System --- p.54 / Chapter 5.4.1 --- Problems --- p.55 / Chapter 5.4.2 --- Distributed Architecture --- p.56 / Chapter 5.5 --- Conclusion --- p.59 / Chapter 6 --- Experimental Results --- p.60 / Chapter 6.1 --- Introduction --- p.60 / Chapter 6.2 --- Database --- p.61 / Chapter 6.2.1 --- ORL Face Database --- p.61 / Chapter 6.2.2 --- Yale Face Database --- p.62 / Chapter 6.2.3 --- AR Face Database --- p.62 / Chapter 6.2.4 --- HRL Face Database --- p.63 / Chapter 6.3 --- Experimental Details --- p.64 / Chapter 6.3.1 --- Pre-processing --- p.64 / Chapter 6.3.2 --- Cross Validation --- p.67 / Chapter 6.3.3 --- System details --- p.68 / Chapter 6.4 --- Result --- p.69 / Chapter 6.4.1 --- ORL Result --- p.69 / Chapter 6.4.2 --- Yale Result --- p.72 / Chapter 6.4.3 --- AR Result --- p.73 / Chapter 6.4.4 --- HRL Result --- p.75 / Chapter 6.4.5 --- Average Running Time --- p.76 / Chapter 6.5 --- Discussion --- p.77 / Chapter 6.5.1 --- Advantages --- p.78 / Chapter 6.5.2 --- Disadvantages --- p.79 / Chapter 6.6 --- Conclusion --- p.80 / Chapter 7 --- Conclusion --- p.82 / Bibliography --- p.92
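The static-structure chapters above combine several experts (eigenface, Fisherface, elastic graph matching, SVM, neural networks) through a weighted voting machine. The sketch below shows only the generic mechanism; the labels, weights, and weighting scheme are made up for illustration and are not the thesis's exact formulation.

```python
def committee_vote(predictions, weights):
    """Static committee machine: each expert votes for a class label,
    and votes are weighted by the expert's estimated reliability."""
    scores = {}
    for label, weight in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + weight
    # return the label with the highest accumulated weighted vote
    return max(scores, key=scores.get)
```

With equal weights this reduces to plain majority voting; unequal weights let a single confident expert overrule several weak ones.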
484.
Rotated face detection by coordinate transform with application to face tracking. January 2005.
Fung Cheuk Luk. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 104-107). / Abstracts in English and Chinese. / Abstract --- p.ii / 論文摘要 --- p.v / Acknowledgement --- p.v / Notations --- p.vi / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Summary of our approach --- p.2 / Chapter 1.2 --- Summary of our contributions --- p.2 / Chapter 1.3 --- Report Outline --- p.3 / Chapter 2 --- Related Work --- p.4 / Chapter 2.1 --- Face Detection and Tracking literature --- p.4 / Chapter 2.1.1 --- Face Detection approaches --- p.5 / Chapter 2.1.2 --- Face Tracking approaches --- p.6 / Chapter 2.2 --- Overview of Face Detection Procedure --- p.7 / Chapter 2.3 --- Haar-like Feature Cascade Upright Face Detector --- p.12 / Chapter 2.3.1 --- Face Detector Design --- p.12 / Chapter 2.3.2 --- Rectangular Edge Feature f(.) --- p.15 / Chapter 2.3.3 --- Fast Feature Computation Structure: Integral Image --- p.22 / Chapter 2.3.4 --- Feature Selection and parameter estimation --- p.25 / Chapter 2.4 --- Other Related Work --- p.29 / Chapter 2.4.1 --- Rotated Summed Area Table --- p.29 / Chapter 2.4.2 --- Condensation Framework --- p.33 / Chapter 3 --- Rotated Face Detector and Interleaved Face Tracker --- p.35 / Chapter 3.1 --- Rotated Detector Overview --- p.36 / Chapter 3.1.1 --- Parameter Transform and Rotated Face Detection --- p.38 / Chapter 3.1.2 --- Sample Transformation of Detector Parameters --- p.44 / Chapter 3.1.3 --- Post-processing of the detector responses --- p.48 / Chapter 3.2 --- Face Tracking Modeling --- p.51 / Chapter 3.2.1 --- Interleaved Detection --- p.51 / Chapter 3.2.2 --- CONDENSATION filter modeling --- p.53 / Chapter 4 --- Experiments --- p.57 / Chapter 4.1 --- Experiments on Rotated Face Detector --- p.57 / Chapter 4.1.1 --- Rotated Image Face Detector --- p.58 / Chapter 4.1.2 --- Face Image Rotation Test --- p.58 / Chapter 4.1.3 --- Real-life Image Experiment --- p.70 / Chapter 4.1.4 --- CMU 
Rotated Face Image Test --- p.74 / Chapter 4.2 --- Experiments on Interleaved Face Tracker --- p.82 / Chapter 4.2.1 --- Experiment Parameter Settings --- p.82 / Chapter 4.2.2 --- Moving Face Video Experiment --- p.84 / Chapter 4.2.3 --- Scale Varying Face Video Experiment --- p.90 / Chapter 4.2.4 --- Rotating Face Video Experiment --- p.94 / Chapter 5 --- Conclusion and Discussion --- p.98 / Chapter A --- Feature Selection and Parameter Estimation --- p.101 / Bibliography --- p.104
485.
Model-based speech separation and enhancement with single-microphone input. / CUHK electronic theses & dissertations collection. January 2008.
Experiments were carried out on continuous real speech mixed with either a competing speech source or broadband noise. Results show that the separation outputs bear spectral trajectories similar to those of the ideal source signals. For speech mixtures, the proposed algorithm is evaluated in two ways: segmental signal-to-interference ratio (segSIR) and Itakura-Saito distortion (dIS). It is found that (1) interference signal power is reduced in terms of segSIR improvement, even under the harsh condition of comparable target speech and interference powers; and (2) the dIS between the estimated source and the clean speech source is significantly smaller than before processing. These results assert the capability of the proposed algorithm to extract individual sources from a mixture signal by reducing the interference signal and generating an appropriate spectral trajectory for each source estimate. / Our approach is based on findings from psychoacoustics. To separate individual sound sources in a mixture signal, humans exploit perceptual cues such as harmonicity, continuity, context information and prior knowledge of familiar auditory patterns. Furthermore, the application of prior knowledge of speech for top-down separation (called schema-based grouping) is found to be powerful, yet largely unexplored. In this thesis, a bi-directional, model-based speech separation and enhancement algorithm is proposed that utilizes speech schemas in particular. As model patterns are employed to generate successive spectral envelopes in an utterance, the output speech is expected to be natural and intelligible. / The proposed separation algorithm regenerates a target speech source by working out the corresponding spectral envelope and harmonic structure. In the first stage, an optimal sequence of Wiener filters is determined for subsequent interference removal. 
Specifically, acoustic models of speech schemas, represented by candidate line spectrum pair (LSP) patterns, are manipulated to match the input mixture and the given transcription if available, in a top-down manner. Specific LSP patterns are retrieved to constitute a spectral evolution that synchronizes with the target speech source. With this evolution, the mixture spectrum is then filtered to approximate the target source at an appropriate signal level. In the second stage, irrelevant harmonic structure from interfering sources is eliminated by comb filtering. These filters are designed according to the results of pitch tracking. / This thesis focuses on the speech source separation problem in a single-microphone scenario. Possible applications of speech separation include recognition, auditory prostheses and surveillance systems. Sound signals typically reach our ears as a mixture of desired signals, other competing sounds and background noise. Example scenarios are talking with someone in a crowd with other people speaking, or listening to an orchestra with a number of instruments playing concurrently. These sounds often overlap in time and frequency. While humans attend to individual sources remarkably well under these adverse conditions, even with a single ear, the performance of most speech processing systems is easily degraded. Therefore, modeling how the human auditory system performs is one viable way to extract target speech sources from the mixture before any vulnerable processes. / Lee, Siu Wa. / "April 2008." / Adviser: Chung Ching. / Source: Dissertation Abstracts International, Volume: 70-03, Section: B, page: 1846. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (p. 233-252). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. 
[Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
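The first-stage Wiener filtering described in this record can be sketched per frequency bin. The fragment below is a minimal sketch assuming the target and interference power spectra have already been estimated (in the thesis these come from the schema-matched spectral evolution); it is the classic Wiener gain, not the thesis's full pipeline.

```python
import numpy as np

def wiener_gain(target_psd, interference_psd, floor=1e-12):
    """Classic per-frequency Wiener gain: |S|^2 / (|S|^2 + |N|^2)."""
    total = np.maximum(target_psd + interference_psd, floor)
    return target_psd / total

def apply_wiener(mixture_spectrum, target_psd, interference_psd):
    """Scale each frequency bin of the mixture toward the target source."""
    return mixture_spectrum * wiener_gain(target_psd, interference_psd)
```

Bins dominated by the target pass almost unchanged (gain near 1), while bins dominated by interference are attenuated toward zero, which is exactly the behaviour the segSIR improvements measure.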
486.
Face recognition using structural approach. / CUHK electronic theses & dissertations collection. January 2006.
Face recognition is an important biometric authentication technology. In this thesis, we study face recognition using a structural approach, in which structural information of the face is extracted and used for the recognition. / The first part of this thesis discusses methods for the detection of some facial features and their applications in face recognition. Generally, the more features that are detected with good accuracy and used for face recognition, the better the recognition result. We first propose a method to extract the eyebrow contours from the face image by an enhanced K-means clustering algorithm and a revised Snake algorithm. The reliable part of the extracted eyebrow contour is then used as a feature for face recognition. We then introduce a novel method to estimate the chin contour for face recognition. The method first estimates several possible locations of chin and cheek points, which are used to build a number of curves as chin contour candidates. Based on the chin-like edges extracted by a modified Canny edge detector, the curve most likely to be the actual chin contour is selected. Finally, the estimated chin contours of sufficiently high likelihood are used as a geometric feature for face recognition. Experimental results show that the proposed algorithms can extract eyebrow and chin contours with good accuracy and that the extracted features are effective for improving face recognition rates. / The second part of this thesis deals with pose estimation and pose-invariant face recognition. Pose estimation is achieved based on the detected structural information of the face. We first propose a method for recognition of a face at any pose from a single frontal view image. The first step of the method is feature detection. In this step, we detect the ear points by a novel algorithm. 
Then, a set of 3D head models is constructed for each test image based on the geometric features extracted from both the input image and each frontal view image in the gallery. Using this set of potential models, we can obtain a set of potential poses. Based on these potential models and poses, feature templates and geometric features of the input face are then rectified to form the potential frontal views. The last step is the feature comparison and final pose estimation. The major contribution of the proposed algorithm is that it can estimate and compensate for both sidespin and seesaw rotations, while existing model-based algorithms working from a single frontal view can only handle sidespin rotation. We also propose a method of pose-invariant face recognition from multi-view images. First, the 3D poses of the face in 2D images are estimated using a 3D reference face model in a three-layer linear iterative process. The 3D model is updated to fit a particular person using an iterative algorithm. Then we construct the virtual frontal view face images from the input 2D face images based on the estimated poses and the matched 3D face models. We extract the waveletfaces from these virtual frontal views based on the wavelet transform and perform linear discriminant analysis on these waveletfaces. Finally, the nearest feature space classifier is employed for feature comparison. The proposed methods were tested using commonly used face databases. Experimental results show that the proposed face recognition methods are robust and compare favourably with existing methods in terms of recognition rate. / Chen Qinran. / "September 2006." / Adviser: Wai Kuen Cham. / Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1814. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 134-154). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. 
Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
487.
Integrating visual and tactile robotic perception. Corradi, Tadeo, January 2018.
The aim of this project is to enable robots to recognise objects and object categories by combining vision and touch. In this thesis, a novel inexpensive tactile sensor design is presented, together with a complete probabilistic sensor-fusion model. The potential of the model is demonstrated in four areas: (i) shape recognition, where the sensor outperforms its most similar rival; (ii) single-touch object recognition, where state-of-the-art results are produced; (iii) visuo-tactile object recognition, demonstrating the benefits of multi-sensory object representations; and (iv) object classification, which has not been reported in the literature to date. Both the sensor design and the novel database were made available. Tactile data collection is performed by a robot. An extensive analysis of data encodings, data processing, and classification methods is presented. The conclusions reached are: (i) the inexpensive tactile sensor can be used for basic shape and object recognition; (ii) object recognition combining vision and touch in a probabilistic manner provides an improvement in accuracy over either modality alone; (iii) when both vision and touch perform poorly independently, the proposed sensor-fusion model provides faster learning, i.e. fewer training samples are required to achieve similar accuracy; and (iv) such a sensor-fusion model is more accurate than either modality alone when attempting to classify unseen objects, as well as when attempting to recognise individual objects from amongst similar objects of the same class. (v) Preliminary potential is identified for a real-life application: underwater object classification. (vi) The sensor-fusion model provides improvements in classification even over award-winning deep-learning based computer vision models.
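A probabilistic visuo-tactile combination of this kind can be illustrated with a naive Bayes-style fusion of per-class scores: multiply the likelihoods from each modality and renormalise. This is a generic sketch under a conditional-independence assumption, not the exact model of the thesis, and the class probabilities used below are invented.

```python
import numpy as np

def fuse_modalities(p_vision, p_touch, prior=None):
    """Combine per-class scores from vision and touch by elementwise
    multiplication (conditional independence assumed), then renormalise
    so the result is a proper posterior over classes."""
    post = np.asarray(p_vision, float) * np.asarray(p_touch, float)
    if prior is not None:
        post = post * np.asarray(prior, float)
    return post / post.sum()
```

A useful property of this scheme is that a confident modality can overturn a weakly confident one, which is one way fusion beats either sensor alone.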
488.
Three-dimensional interpretation of an imperfect line drawing. January 1996.
by Leung Kin Lap. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (leaves 70-72). / ACKNOWLEDGEMENTS --- p.I / ABSTRACT --- p.II / TABLE OF CONTENTS --- p.III / TABLE OF FIGURES --- p.IV / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Contributions of the thesis --- p.2 / Chapter 1.2 --- Organization of the thesis --- p.4 / Chapter Chapter 2 --- Previous Work --- p.5 / Chapter 2.1 --- An overview of 3-D interpretation --- p.5 / Chapter 2.1.1 --- Multiple-View Clues --- p.5 / Chapter 2.1.2 --- Single-View Clues --- p.6 / Chapter 2.2 --- Line Drawing Interpretation --- p.7 / Chapter 2.2.1 --- Qualitative Interpretation --- p.7 / Chapter 2.2.2 --- Quantitative Interpretation --- p.10 / Chapter 2.3 --- Previous Methods of Quantitative Interpretation by Optimization --- p.12 / Chapter 2.3.1 --- Extremum Principle for Shape from Contour --- p.12 / Chapter 2.3.2 --- MSDA Algorithm --- p.14 / Chapter 2.4 --- Comments on Previous Work on Line Drawing Interpretation --- p.17 / Chapter Chapter 3 --- An Iterative Clustering Procedure for Imperfect Line Drawings --- p.18 / Chapter 3.1 --- Shape Constraints --- p.19 / Chapter 3.2 --- Problem Formulation --- p.20 / Chapter 3.3 --- Solution Steps --- p.25 / Chapter 3.4 --- Nearest-Neighbor Clustering Algorithm --- p.37 / Chapter 3.5 --- Discussion --- p.38 / Chapter Chapter 4 --- Experimental Results --- p.40 / Chapter 4.1 --- Synthetic Line Drawings --- p.40 / Chapter 4.2 --- Real Line Drawing --- p.42 / Chapter 4.2.1 --- Recovery of real images --- p.42 / Chapter Chapter 5 --- Conclusion and Future Work --- p.65 / Appendix A --- p.67 / Chapter A. 1 --- Gradient Space Concept --- p.67 / Chapter A. 2 --- Shading of images --- p.69 / Appendix B --- p.70
489.
Face recognition using different training data. January 2003.
Li Zhifeng. / Thesis submitted in: December 2002. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 49-53). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgments --- p.v / Table of Contents --- p.vi / List of Figures --- p.viii / List of Tables --- p.ix / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Face Recognition Problem and Challenge --- p.1 / Chapter 1.2 --- Applications --- p.2 / Chapter 1.3 --- Face Recognition Methods --- p.3 / Chapter 1.4 --- The Relationship Between the Face Recognition Performance and Different Training Data --- p.5 / Chapter 1.5 --- Thesis Overview --- p.6 / Chapter Chapter 2 --- PCA-based Recognition Method --- p.7 / Chapter 2.1 --- Review --- p.7 / Chapter 2.2 --- Formulation --- p.8 / Chapter 2.2.1 --- Karhunen-Loeve transform (KLT) --- p.8 / Chapter 2.2.2 --- Multilevel Dominant Eigenvector Estimation (MDEE) --- p.12 / Chapter 2.3 --- Analysis of The Effect of Training Data on PCA-based Method --- p.13 / Chapter Chapter 3 --- LDA-based Recognition Method --- p.17 / Chapter 3.1 --- Review --- p.17 / Chapter 3.2 --- Formulation --- p.18 / Chapter 3.2.1 --- The Pure LDA --- p.18 / Chapter 3.2.2 --- LDA-based method --- p.19 / Chapter 3.3 --- Analysis of The Effect of Training Data on LDA-based Method --- p.21 / Chapter Chapter 4 --- Experiments --- p.23 / Chapter 4.1 --- Face Database --- p.23 / Chapter 4.1.1 --- AR face database --- p.23 / Chapter 4.1.2 --- XM2VTS face database --- p.24 / Chapter 4.1.3 --- MMLAB face database --- p.26 / Chapter 4.1.4 --- Face Data Preprocessing --- p.27 / Chapter 4.2 --- Recognition Formulation --- p.29 / Chapter 4.3 --- PCA-based Recognition Using Different Training Data Sets --- p.29 / Chapter 4.3.1 --- Experiments on MMLAB Face Database --- p.30 / Chapter 4.3.1.1 --- Training Data Sets and Testing Data Sets --- p.30 / Chapter 4.3.1.2 --- Face Recognition Performance Using Different Training Data Sets --- p.31 
/ Chapter 4.3.2 --- Experiments on XM2VTS Face Database --- p.33 / Chapter 4.3.3 --- Comparison of MDEE and KLT --- p.36 / Chapter 4.3.4 --- Summary --- p.38 / Chapter 4.4 --- LDA-based Recognition Using Different Training Data Sets --- p.38 / Chapter 4.4.1 --- Experiments on AR Face Database --- p.38 / Chapter 4.4.1.1 --- The Selection of Training Data and Testing Data --- p.38 / Chapter 4.4.1.2 --- LDA-based recognition on AR face database --- p.39 / Chapter 4.4.2 --- Experiments on XM2VTS Face Database --- p.40 / Chapter 4.4.3 --- Training Data Sets and Testing Data Sets --- p.41 / Chapter 4.4.4 --- Experiments on XM2VTS Face Database --- p.42 / Chapter 4.4.5 --- Summary --- p.46 / Chapter Chapter 5 --- Summary --- p.47 / Bibliography --- p.49
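The PCA/KLT stage that this thesis evaluates across different training sets can be sketched as follows. This is the standard eigenface computation using the small-sample Gram-matrix trick (eigendecomposing the n x n matrix X Xᵀ instead of the huge pixel covariance); it is not the thesis's MDEE variant, and the array sizes are illustrative only.

```python
import numpy as np

def eigenfaces(train, k):
    """PCA / Karhunen-Loeve basis from training images
    (rows = flattened face images). Returns the mean face and the
    top-k orthonormal eigenfaces as columns of a (pixels x k) matrix."""
    mean = train.mean(0)
    X = train - mean
    G = X @ X.T                      # n x n Gram matrix, n = #images
    vals, vecs = np.linalg.eigh(G)   # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]
    basis = X.T @ vecs[:, order]     # lift back to pixel space
    basis /= np.linalg.norm(basis, axis=0)
    return mean, basis

def project(faces, mean, basis):
    """Project face images onto the eigenface subspace."""
    return (faces - mean) @ basis
```

Because the basis is computed from the training images alone, swapping the training set changes the subspace and hence the recognition behaviour, which is precisely the dependence this thesis studies.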
490.
A unified framework for subspace based face recognition. January 2003.
Wang Xiaogang. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 88-91). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgments --- p.v / Table of Contents --- p.vi / List of Figures --- p.viii / List of Tables --- p.x / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Face recognition --- p.1 / Chapter 1.2 --- Subspace based face recognition technique --- p.2 / Chapter 1.3 --- Unified framework for subspace based face recognition --- p.4 / Chapter 1.4 --- Discriminant analysis in dual intrapersonal subspaces --- p.5 / Chapter 1.5 --- Face sketch recognition and hallucination --- p.6 / Chapter 1.6 --- Organization of this thesis --- p.7 / Chapter Chapter 2 --- Review of Subspace Methods --- p.8 / Chapter 2.1 --- PCA --- p.8 / Chapter 2.2 --- LDA --- p.9 / Chapter 2.3 --- Bayesian algorithm --- p.12 / Chapter Chapter 3 --- A Unified Framework --- p.14 / Chapter 3.1 --- PCA eigenspace --- p.16 / Chapter 3.2 --- Intrapersonal and extrapersonal subspaces --- p.17 / Chapter 3.3 --- LDA subspace --- p.18 / Chapter 3.4 --- Comparison of the three subspaces --- p.19 / Chapter 3.5 --- L-ary versus binary classification --- p.22 / Chapter 3.6 --- Unified subspace analysis --- p.23 / Chapter 3.7 --- Discussion --- p.26 / Chapter Chapter 4 --- Experiments on Unified Subspace Analysis --- p.28 / Chapter 4.1 --- Experiments on FERET database --- p.28 / Chapter 4.1.1 --- PCA Experiment --- p.28 / Chapter 4.1.2 --- Bayesian experiment --- p.29 / Chapter 4.1.3 --- Bayesian analysis in reduced PCA subspace --- p.30 / Chapter 4.1.4 --- Extract discriminant features from intrapersonal subspace --- p.33 / Chapter 4.1.5 --- Subspace analysis using different training sets --- p.34 / Chapter 4.2 --- Experiments on the AR face database --- p.36 / Chapter 4.2.1 --- "Experiments on PCA, LDA and Bayes" --- p.37 / Chapter 4.2.2 --- Evaluate the Bayesian algorithm for different transformation --- p.38 / Chapter 
Chapter 5 --- Discriminant Analysis in Dual Subspaces --- p.41 / Chapter 5.1 --- Review of LDA in the null space of and direct LDA --- p.42 / Chapter 5.1.1 --- LDA in the null space of --- p.42 / Chapter 5.1.2 --- Direct LDA --- p.43 / Chapter 5.1.3 --- Discussion --- p.44 / Chapter 5.2 --- Discriminant analysis in dual intrapersonal subspaces --- p.45 / Chapter 5.3 --- Experiment --- p.50 / Chapter 5.3.1 --- Experiment on FERET face database --- p.50 / Chapter 5.3.2 --- Experiment on the XM2VTS database --- p.53 / Chapter Chapter 6 --- Eigentransformation: Subspace Transform --- p.54 / Chapter 6.1 --- Face sketch recognition --- p.54 / Chapter 6.1.1 --- Eigentransformation --- p.56 / Chapter 6.1.2 --- Sketch synthesis --- p.59 / Chapter 6.1.3 --- Face sketch recognition --- p.61 / Chapter 6.1.4 --- Experiment --- p.63 / Chapter 6.2 --- Face hallucination --- p.69 / Chapter 6.2.1 --- Multiresolution analysis --- p.71 / Chapter 6.2.2 --- Eigentransformation for hallucination --- p.72 / Chapter 6.2.3 --- Discussion --- p.75 / Chapter 6.2.4 --- Experiment --- p.77 / Chapter 6.3 --- Discussion --- p.83 / Chapter Chapter 7 --- Conclusion --- p.85 / Publication List of This Thesis --- p.87 / Bibliography --- p.88