  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Neural mechanisms for face and orientation after-effects

Zhao, Chen January 2011 (has links)
Understanding how human and animal visual systems work is an important and still largely unsolved problem. The neural mechanisms for low-level visual processing have been studied in detail, focusing on early visual areas. Much less is known about the neural basis of high-level perception, particularly in humans. An important issue is whether and how lessons learned from low-level studies, such as how neurons in the primary visual cortex respond to oriented edges, can be applied to understanding high-level perception, such as human processing of faces. Visual aftereffects are a useful tool for investigating how stimuli are represented, because they reveal aspects of the underlying neural organisation. This thesis focuses on identifying neural mechanisms involved in high-level visual processing, by studying the relationship between low- and high-level visual aftereffects. Previous psychophysical studies have shown that humans exhibit reliable orientation (tilt) aftereffects, wherein prolonged exposure to an oriented visual pattern systematically biases perception of other orientations. Humans also show face identity aftereffects, wherein prolonged exposure to one face systematically biases perception of other faces. Despite these apparent similarities, previous studies have argued that the two effects reflect different mechanisms, in part because tilt aftereffects show a characteristic S-shaped curve, with the effect magnitude increasing and then decreasing with orientation difference, while face aftereffects appeared to increase monotonically (measured in units of face-morphing strength) with difference from a norm (average) face. Using computational models of orientation and face processing in the visual cortex, I show that the same computational mechanisms derived from early cortical processing, applied to either orientation-selective or face-selective neurons, are sufficient to replicate both types of effects.
However, the models predict that face aftereffects would also be S-shaped, if tested on a sufficiently wide range of face stimuli. Based on the modelling work, I designed psychophysical experiments to test this theory. An identical experimental paradigm was used to test both face gender and tilt aftereffects, with strikingly similar S-shaped curves obtained for both conditions. Combined with the modelling results, this provides evidence that low- and high-level visual adaptation reflect similar neural mechanisms. Other psychophysical experiments have recently shown interactions between low- and high-level aftereffects, whereby orientation and line curvature processing (in early visual areas) can influence judgements of facial emotion (by high-level face-selective neurons). An extended multi-level version of the face processing model replicates this interaction across levels, but again predicts that the cross-level effects will show similar S-shaped aftereffect curves. Future psychophysical experiments can test these predictions. Together, these results help us to understand how stimuli are represented and processed at each level of the visual cortex. They suggest that similar adaptation mechanisms may underlie both high-level and low-level visual processing, which would allow us to apply much of what we know from low-level studies to help understand high-level processing.
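The population-coding account these models embody can be illustrated with a toy sketch (my illustration, not the thesis's actual model): a bank of Gaussian-tuned orientation units whose gain drops in proportion to their response to the adaptor produces exactly the S-shaped repulsive aftereffect described above, with zero effect at the adapted orientation, a peak at intermediate differences, and a return to zero at 90 degrees.

```python
import numpy as np

def tuning(theta, prefs, sigma=15.0, gain=None):
    # Gaussian tuning curves over orientation (degrees), circular on 180 deg
    d = np.abs(((theta - prefs + 90.0) % 180.0) - 90.0)
    g = np.ones_like(prefs) if gain is None else gain
    return g * np.exp(-d ** 2 / (2.0 * sigma ** 2))

def decode(resp, prefs):
    # population-vector decoding on the doubled orientation circle
    ang = np.deg2rad(2.0 * prefs)
    vec = np.sum(resp * np.exp(1j * ang))
    return np.rad2deg(np.angle(vec)) / 2.0

prefs = np.arange(0.0, 180.0, 1.0)
adaptor = 0.0
# adaptation: each unit's gain drops in proportion to its response to the adaptor
gain = 1.0 - 0.5 * tuning(adaptor, prefs)

diffs = np.arange(0, 91, 5)
effects = []
for d in diffs:
    perceived = decode(tuning(adaptor + d, prefs, gain=gain), prefs)
    # signed error = repulsion away from the adaptor
    effects.append(((perceived - (adaptor + d) + 90.0) % 180.0) - 90.0)
effects = np.array(effects)
```

Plotting `effects` against `diffs` traces the rise-then-fall curve; the same code with face-selective units on a morph axis would, per the thesis's prediction, produce the analogous face aftereffect curve.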
82

Rozdíly v kognitivních procesech při znovurozpoznávání tváří u individuální a hromadné rekognice / Differences in cognitive processes during recognition of faces within individual and collective identification procedure

Diallo, Karolina January 2011 (has links)
This thesis discusses theories of cognitive processes, with special emphasis on those affecting the recognition and identification of human faces; introduces the issue of eyewitness testimony; and surveys the methods and techniques of criminal identification established in forensic science and currently used in police work. The literature and previous empirical studies dealing with line-up and show-up reliability and accuracy are then reviewed, including the effect of selected variables, i.e. retention interval and similarity of suspect clothing, within show-up and line-up procedures. The experiments in this thesis pursue three interrelated goals: determining the effect of these two variables on show-up accuracy, assessing the suggestibility of the show-up method, and exploring the carryover effect from show-up to line-up identification procedures. Keywords: cognitive processes, facial recognition, individual identification, eyewitness testimony
83

An independent evaluation of subspace facial recognition algorithms

Surajpal, Dhiresh Ramchander 23 December 2008 (has links)
This investigation presents a rare comparative study of three of the most popular appearance-based face recognition projection classes: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA). Both the linear and kernel alternatives are investigated, along with the four most widely used similarity measures: the City Block (L1), Euclidean (L2), Cosine and Mahalanobis metrics. Although comparisons between these classes can become fairly complex given the different task natures, algorithm architectures and distance metrics that must be taken into account, an important aspect of this study is that completely equal working conditions are provided in order to facilitate fair and proper comparative levels of evaluation. In doing so, one is able to realise an independent study that significantly contributes to prior findings in the literature, either by verifying previous results, offering further insight into why certain conclusions were made, or by providing a better understanding as to why certain claims should be disputed and under which conditions they may hold true. The experimental procedure examines ten algorithms in the categories of expression, illumination, occlusion and temporal delay; the results are then evaluated based on a sequential combination of assessment tools that facilitate both intuitive and statistical decisiveness among the intra- and inter-class comparisons. In a bid to boost the overall efficiency and accuracy of the identification system, the ‘best’ categorical algorithms are then incorporated into a hybrid methodology, where the advantageous effects of fusion strategies are considered.
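As a quick reference for the four similarity measures named above, a minimal sketch in Python (illustrative only; the thesis's own implementation is not shown here):

```python
import numpy as np

def l1(a, b):
    # City Block distance: sum of absolute coordinate differences
    return np.sum(np.abs(a - b))

def l2(a, b):
    # Euclidean distance
    return np.sqrt(np.sum((a - b) ** 2))

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 for parallel vectors, 1 for orthogonal ones
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def mahalanobis(a, b, cov_inv):
    # distance weighted by the inverse covariance of the gallery features
    d = a - b
    return np.sqrt(d @ cov_inv @ d)
```

With an identity covariance, Mahalanobis reduces to Euclidean, which is one reason the metrics can rank gallery matches very differently only when the subspace dimensions have unequal variance.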
This investigation explores the weighted-sum approach, which, by fusing at the matching-score level, effectively harnesses the complementary strengths of the component algorithms and in doing so highlights the improved performance that hybrid implementations can provide. In the process, by first examining previous studies with respect to one another, and then relating the important findings of this work to them, the study also meets its primary objective: providing a newcomer with an insightful understanding of publicly available subspace techniques and their comparative standing within the field of face recognition.
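Weighted-sum fusion at the matching-score level can be sketched as follows (a hedged illustration: the min-max normalization scheme and the example weights are my assumptions, not taken from the thesis):

```python
import numpy as np

def min_max_normalize(scores):
    # map one matcher's raw scores onto a comparable [0, 1] scale
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return np.zeros_like(s) if rng == 0 else (s - s.min()) / rng

def weighted_sum_fusion(score_sets, weights):
    # fuse at the matching-score level: normalize each matcher, then weighted sum
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * min_max_normalize(s) for w, s in zip(weights, score_sets))
```

Normalization is essential here: without it, the matcher with the largest raw score range would dominate the sum regardless of the weights.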
84

Deep learning for attribute inference, parsing, and recognition of face / CUHK electronic theses & dissertations collection

January 2014 (has links)
Deep learning has been widely and successfully applied to many difficult tasks in computer vision, such as image parsing, object detection, and object recognition, where architectures such as deep neural networks, convolutional neural networks, and deep belief networks have achieved impressive performance and significantly outperformed state-of-the-art methods. However, the potential of deep learning in face-related problems has not yet been fully explored. In this thesis, we explore different deep learning methods and propose new network architectures and learning algorithms for face-related applications, such as face parsing, face attribute inference, and face recognition. / For face parsing, we propose a novel face parser, which recasts segmentation of face components as a cross-modality data transformation problem, i.e., transforming an image patch to a label map. Specifically, a face is represented hierarchically by parts, components, and pixel-wise labels. With this representation, the approach first detects faces at both the part and component levels, and then computes the pixel-wise label maps. The part-based and component-based detectors are generatively trained with the deep belief network (DBN) and discriminatively tuned by logistic regression. The segmentators transform the detected face components to label maps, which are obtained by learning a highly nonlinear mapping with the deep autoencoder. The proposed hierarchical face parsing is not only robust to partial occlusions but also provides richer information for face analysis and face synthesis than face keypoint detection and face alignment. / For face attribute inference, the proposed approach captures the interdependencies of local regions for each attribute, as well as the high-order correlations between different attributes, which makes it more robust to occlusions and misdetection of face regions.
First, we model region interdependencies with a discriminative decision tree, where each node consists of a detector and a classifier trained on a local region: the detector locates the region, while the classifier determines the presence or absence of an attribute. Second, correlations of attributes and attribute predictors are modeled by organizing all of the decision trees into a large sum-product network (SPN), which is learned by the EM algorithm and yields the most probable explanation (MPE) of the facial attributes in terms of each region's localization and classification. Experimental results on a large data set of 22,400 images show the effectiveness of the proposed approach. / For face recognition, this thesis proposes a new deep learning framework that can recover the canonical view of face images. It dramatically reduces intra-person variance while maintaining inter-person discriminativeness. Unlike existing face reconstruction methods, which were either evaluated in controlled 2D environments or employed 3D information, our approach directly learns the transformation between face images with a complex set of variations and their canonical views. At the training stage, to avoid the costly process of hand-labeling canonical-view images in the training set, we devised a new measurement and algorithm to automatically select or synthesize a canonical-view image for each identity. The recovered canonical-view face images are matched using a facial component-based convolutional neural network. Our approach achieves the best performance on the LFW dataset under the unrestricted protocol. We also demonstrate that the performance of existing methods can be improved if they are applied to our recovered canonical-view face images.
/ In recent years, deep learning algorithms have been successfully applied to many difficult computer vision problems, such as image segmentation and object recognition and detection. Architectures such as deep neural networks, deep convolutional neural networks, and deep belief networks have achieved important breakthroughs in these areas and outperform traditional computer vision algorithms. Face images, however, despite being central to human visual cognition, had not yet been systematically studied within the deep learning framework. This thesis investigates suitable deep learning algorithms and network architectures for face image analysis, focusing on face parsing, face attribute inference, and face recognition. / For face parsing, we recast the traditional segmentation problem as a high-dimensional data-transformation problem, i.e., converting a face image into a label map. A face image is represented hierarchically by pixel patches, facial keypoints, and face regions. Using this representation, our method first detects the face region, then the facial keypoints, and finally converts pixel patches into label maps according to the keypoint positions. The method comprises two steps, keypoint detection (using a deep belief network) and patch-to-label-map conversion (using a deep autoencoder), and is robust to facial occlusion. / For face attribute inference, the proposed method models two kinds of correlation: correlations among key facial regions and correlations among facial attributes. Decision trees model the region correlations, and a sum-product network matched one-to-one to the decision trees models the attribute correlations. Experiments on 22,400 face images verify the effectiveness and robustness of the method. / For face recognition, this thesis proposes a new face representation, the face-identity-preserving feature, which maintains discriminativeness between different identities while reducing variation within the same identity. The feature can also recover the frontal view of an input face image; normalizing faces with these recovered frontal views improves the accuracy of existing face recognition algorithms. / Luo, Ping. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2014. / Includes bibliographical references (leaves 83-95). / Abstracts also in Chinese. / Title from PDF title page (viewed on 27 October 2016). / Detailed summary in vernacular field only.
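The sum-product network used above for modelling attribute correlations can be illustrated, at toy scale, by a two-component SPN over two binary attributes (an illustrative sketch with made-up leaf probabilities and weights, not the thesis's learned network):

```python
def leaf(x, p):
    # univariate leaf: probability p that the binary attribute is present
    return p if x == 1 else 1.0 - p

def spn_value(x1, x2, w=(0.6, 0.4)):
    # a minimal two-component sum-product network over two binary attributes:
    # product nodes combine per-attribute leaves, a sum node mixes the products
    comp1 = leaf(x1, 0.9) * leaf(x2, 0.8)   # component 1: both attributes likely
    comp2 = leaf(x1, 0.1) * leaf(x2, 0.2)   # component 2: both unlikely
    return w[0] * comp1 + w[1] * comp2
```

Because sum-node weights are normalized and products range over disjoint variable scopes, the network is a valid distribution, and MPE inference amounts to replacing the sum with a max and tracing back the winning component.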
85

Intensity based methodologies for facial expression recognition.

January 2001 (has links)
by Hok Chun Lo. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 136-143). / Abstracts in English and Chinese.
List of Figures --- p.viii
List of Tables --- p.x
1. Introduction --- p.1
2. Previous Work on Facial Expression Recognition --- p.9
  2.1 Active Deformable Contour --- p.9
  2.2 Facial Feature Points and B-spline Curve --- p.10
  2.3 Optical Flow Approach --- p.11
  2.4 Facial Action Coding System --- p.12
  2.5 Neural Network --- p.13
3. Eigen-Analysis Based Method for Facial Expression Recognition --- p.15
  3.1 Related Topics on Eigen-Analysis Based Method --- p.15
    3.1.1 Terminologies --- p.15
    3.1.2 Principal Component Analysis --- p.17
    3.1.3 Significance of Principal Component Analysis --- p.18
    3.1.4 Graphical Presentation of the Idea of Principal Component Analysis --- p.20
  3.2 EigenFace Method for Face Recognition --- p.21
  3.3 Eigen-Analysis Based Method for Facial Expression Recognition --- p.23
    3.3.1 Person-Dependent Database --- p.23
    3.3.2 Direct Adoption of EigenFace Method --- p.24
    3.3.3 Multiple Subspaces Method --- p.27
  3.4 Detailed Description of Our Approaches --- p.29
    3.4.1 Database Formation --- p.29
      a. Conversion of Image to Column Vector --- p.29
      b. Preprocessing: Scale Regulation, Orientation Regulation and Cropping --- p.30
      c. Scale Regulation --- p.31
      d. Orientation Regulation --- p.32
      e. Cropping of Images --- p.33
      f. Calculation of Expression Subspace for Direct Adoption Method --- p.35
      g. Calculation of Expression Subspace for Multiple Subspaces Method --- p.38
    3.4.2 Recognition Process for Direct Adoption Method --- p.38
    3.4.3 Recognition Process for Multiple Subspaces Method --- p.39
      a. Intensity Normalization Algorithm --- p.39
      b. Matching --- p.44
  3.5 Experimental Result and Analysis --- p.45
4. Deformable Template Matching Scheme for Facial Expression Recognition --- p.53
  4.1 Background Knowledge --- p.53
    4.1.1 Camera Model --- p.53
      a. Pinhole Camera Model and Perspective Projection --- p.54
      b. Orthographic Camera Model --- p.56
      c. Affine Camera Model --- p.57
    4.1.2 View Synthesis --- p.58
      a. Technique Issue of View Synthesis --- p.59
  4.2 View Synthesis Technique for Facial Expression Recognition --- p.68
    4.2.1 From View Synthesis Technique to Template Deformation --- p.69
  4.3 Database Formation --- p.71
    4.3.1 Person-Dependent Database --- p.72
    4.3.2 Model Images Acquisition --- p.72
    4.3.3 Templates' Structure and Formation Process --- p.73
    4.3.4 Selection of Warping Points and Template Anchor Points --- p.77
      a. Selection of Warping Points --- p.78
      b. Selection of Template Anchor Points --- p.80
  4.4 Recognition Process --- p.81
    4.4.1 Solving Warping Equation --- p.83
    4.4.2 Template Deformation --- p.83
    4.4.3 Template from Input Images --- p.86
    4.4.4 Matching --- p.87
  4.5 Implementation of Automation System --- p.88
    4.5.1 Kalman Filter --- p.89
    4.5.2 Using Kalman Filter for Tracking in Our System --- p.89
    4.5.3 Limitation --- p.92
  4.6 Experimental Result and Analysis --- p.93
5. Conclusion and Future Work --- p.97
Appendix --- p.100
  I. Image Sample 1 --- p.100
  II. Image Sample 2 --- p.109
  III. Image Sample 3 --- p.119
  IV. Image Sample 4 --- p.135
Bibliography --- p.136
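The EigenFace method at the heart of Chapter 3 can be sketched as PCA on flattened face images followed by nearest-neighbour matching in the reduced subspace (a minimal illustration, not the thesis's code):

```python
import numpy as np

def eigenfaces_fit(X, k):
    # X: (n_samples, n_pixels) gallery of face images flattened to row vectors
    mean = X.mean(axis=0)
    # SVD of the centered data avoids forming the huge pixel covariance matrix
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                     # mean face and top-k eigenfaces

def project(x, mean, eigenfaces):
    # coordinates of a face in the eigenface subspace
    return eigenfaces @ (x - mean)

def recognize(x, gallery, labels, mean, eigenfaces):
    # nearest neighbour in the low-dimensional subspace
    q = project(x, mean, eigenfaces)
    dists = [np.linalg.norm(q - project(g, mean, eigenfaces)) for g in gallery]
    return labels[int(np.argmin(dists))]
```

The thesis's multiple-subspaces variant would fit one such subspace per expression class and match against each in turn, rather than using a single shared projection.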
86

3D face recognition using multicomponent feature extraction from the nasal region and its environs

Gao, Jiangning January 2016 (has links)
This thesis is dedicated to extracting expression-robust features for 3D face recognition. The use of 3D imaging enables the extraction of discriminative features that can significantly improve recognition performance, due to the availability of facial surface information such as depth, surface normals and curvature. Expression-robust analysis using information from both depth and surface normals is investigated by dividing the main facial region into patches of different scales. The nasal region and adjoining parts of the cheeks are utilized as they are more consistent across expressions and are hard to deliberately occlude. In addition, in comparison with other parts of the face, these regions have a high potential to produce discriminative features for recognition and to overcome pose variations. An overview and classification methodology of the widely used 3D face databases are first introduced to provide an appropriate reference for 3D face database selection. Using the FRGC and Bosphorus databases, a low-complexity pattern rejector for expression-robust 3D face recognition is proposed by matching curves on the nasal region and its environs, which results in a low-dimensional feature set of only 60 points. To extract discriminative features more locally, a novel multi-scale and multi-component local shape descriptor is further proposed, which achieves more competitive performance under both identification and verification scenarios. In contrast with much of the existing work on 3D face recognition, which considers captures obtained with laser scanners or structured light, this thesis also investigates applications to 3D captures reconstructed from lower-cost photometric stereo imaging systems that have applications in real-world situations. To this end, the performance of the expression-robust face recognition algorithms developed for laser-scanner captures is further evaluated on the Photoface database, which contains naturalistic expression variations.
To improve the recognition performance of all types of 3D captures, a universal landmarking algorithm is proposed that makes use of different components of the surface normals. Using facial profile signatures and thresholded surface normal maps, facial roll and yaw rotations are calibrated and five main landmarks are robustly detected on the well-aligned 3D nasal region. The landmarking results show that the detected landmarks demonstrate high within-class consistency and can achieve good recognition performance under different expressions. This is also the first landmarking work specifically developed for 3D captures reconstructed from photometric stereo imaging systems.
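The surface normals that both the recognition features and the landmarking rely on can be computed directly from a depth map (a minimal sketch under the usual z = f(x, y) range-image assumption; not the thesis's implementation):

```python
import numpy as np

def depth_to_normals(depth):
    # numerical depth gradients give the surface slopes in x and y;
    # the (un-normalized) normal of z = f(x, y) is (-df/dx, -df/dy, 1)
    dzdy, dzdx = np.gradient(depth.astype(float))
    n = np.dstack([-dzdx, -dzdy, np.ones_like(dzdx)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

Thresholding one component of the resulting normal map (e.g. the x component) is the kind of operation the landmarking above uses to isolate steep nasal flanks from flatter cheek regions.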
87

Caricatura e reconhecimento de faces / Caricature and face recognition

Mendes, Ana Irene Fonseca 29 January 2008 (has links)
A caricature is an exaggeration of distinctive facial features and is generally recognized just as well as an undistorted photograph of a face. Caricatures can be generated by exaggerating the differences between a face and a prototypical (average) face, and an anti-caricature can be generated by reducing those differences. The aim of this study was to investigate whether there is a degree of caricaturing that best captures facial likeness. Moreover, we investigated the role of holistic versus componential perception in the facial recognition process. Six prototypical faces, three male and three female, were generated by morphing photographs of Brazilian people from the region of Ribeirão Preto-SP of different races: black, white and mixed race. Two types of caricatures and anti-caricatures were generated: (1) holistic, by manipulating all the differences between a face and the prototypical face; (2) partial, by manipulating the differences of isolated or combined features between a face and the prototypical face. The stimuli used in Experiment 1 were the holistic caricatures and anti-caricatures; in Experiment 2 the stimuli were the partial caricatures and anti-caricatures. In both experiments, subjects were asked to rate the similarity between the caricatures or anti-caricatures and a face previously memorized. The results of Experiment 1 provide evidence that the best representation of the face is a photograph without distortion and that, when the face is atypical relative to the prototype, caricatures can be as good as photographs without distortion.
The results of Experiment 2 point to the importance of distinctive features in face recognition. Comparing the results of Experiments 1 and 2, we can say that the facial recognition process is predominantly holistic, but that the manipulation of distinctive facial elements reduces the similarity judgment more than the manipulation of non-distinctive features.
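The caricature/anti-caricature construction described above reduces to one morphing rule: scale a face's deviation from the prototype. A minimal sketch, assuming faces are encoded as coordinate (landmark or pixel) arrays:

```python
import numpy as np

def caricature(face, prototype, k):
    # k > 1.0 exaggerates the face's deviations from the prototype (caricature);
    # 0 < k < 1.0 attenuates them (anti-caricature); k == 1.0 returns the face
    return prototype + k * (face - prototype)
```

The "partial" condition of Experiment 2 corresponds to applying this rule only to the coordinates of selected features while leaving the rest of the vector untouched.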
88

A generic face processing framework: technologies, analyses and applications.

January 2003 (has links)
Jang Kim-fung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 108-124). / Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iii
1. Introduction --- p.1
  1.1 Background --- p.1
  1.2 Introduction about Face Processing Framework --- p.4
    1.2.1 Basic architecture --- p.4
    1.2.2 Face detection --- p.5
    1.2.3 Face tracking --- p.6
    1.2.4 Face recognition --- p.6
  1.3 The scope and contributions of the thesis --- p.7
  1.4 The outline of the thesis --- p.8
2. Facial Feature Representation --- p.10
  2.1 Facial feature analysis --- p.10
    2.1.1 Pixel information --- p.11
    2.1.2 Geometry information --- p.13
  2.2 Extracting and coding of facial feature --- p.14
    2.2.1 Face recognition --- p.15
    2.2.2 Facial expression classification --- p.38
    2.2.3 Other related work --- p.44
  2.3 Discussion about facial feature --- p.48
    2.3.1 Performance evaluation for face recognition --- p.49
    2.3.2 Evolution of the face recognition --- p.52
    2.3.3 Evaluation of two state-of-the-art face recognition methods --- p.53
  2.4 Problem for current situation --- p.58
3. Face Detection Algorithms and Committee Machine --- p.61
  3.1 Introduction about face detection --- p.62
  3.2 Face Detection Committee Machine --- p.64
    3.2.1 Review of three approaches for committee machine --- p.65
    3.2.2 The approach of FDCM --- p.68
  3.3 Evaluation --- p.70
4. Facial Feature Localization --- p.73
  4.1 Algorithm for gray-scale image: template matching and separability filter --- p.73
    4.1.1 Position of face and eye region --- p.74
    4.1.2 Position of irises --- p.75
    4.1.3 Position of lip --- p.79
  4.2 Algorithm for color image: eyemap and separability filter --- p.81
    4.2.1 Position of eye candidates --- p.81
    4.2.2 Position of mouth candidates --- p.83
    4.2.3 Selection of face candidates by cost function --- p.84
  4.3 Evaluation --- p.85
    4.3.1 Algorithm for gray-scale image --- p.86
    4.3.2 Algorithm for color image --- p.88
5. Face Processing System --- p.92
  5.1 System architecture and limitations --- p.92
  5.2 Pre-processing module --- p.93
    5.2.1 Ellipse color model --- p.94
  5.3 Face detection module --- p.96
    5.3.1 Choosing the classifier --- p.96
    5.3.2 Verifying the candidate region --- p.97
  5.4 Face tracking module --- p.99
    5.4.1 Condensation algorithm --- p.99
    5.4.2 Tracking the region using Hue color model --- p.101
  5.5 Face recognition module --- p.102
    5.5.1 Normalization --- p.102
    5.5.2 Recognition --- p.103
  5.6 Applications --- p.104
6. Conclusion --- p.106
Bibliography --- p.107
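The Condensation algorithm used in the face tracking module (Section 5.4) is a particle filter; one resample-predict-measure step can be sketched as follows (an illustration with a generic 1D random-walk motion model, not the thesis's tracker):

```python
import numpy as np

def condensation_step(particles, weights, observe, rng, motion_std=2.0):
    n = len(particles)
    # 1. factored sampling: resample particles in proportion to their weights
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    particles = particles[idx]
    # 2. predict: diffuse each particle with a simple random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=n)
    # 3. measure: reweight particles by the observation likelihood
    weights = observe(particles)
    return particles, weights
```

In a face tracker, `observe` would score each hypothesized face position against the image (the thesis uses a Hue color model); the weighted particle mean gives the tracked position.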
89

Non-negative matrix factorization for face recognition

Xue, Yun 01 January 2007 (has links)
No description available.
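Although this record carries no abstract, the technique named in the title, non-negative matrix factorization, is commonly implemented with Lee-and-Seung-style multiplicative updates, which factor a non-negative data matrix (e.g. faces as columns of pixel intensities) into non-negative, parts-like components. A generic sketch, not necessarily the method used in the thesis:

```python
import numpy as np

def nmf(V, r, iters=1000, eps=1e-9, seed=0):
    # multiplicative updates for V ~= W @ H with all entries non-negative
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # update basis components
    return W, H
```

Because the updates are multiplicative, entries that start positive stay positive, which is what yields the additive, parts-based representation NMF is known for in face analysis.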
90

Face recognition using virtual frontal-view image

Feng, Guo Can 01 January 1999 (has links)
No description available.
