21 |
Remote surveillance and face tracking with mobile phones (smart eyes). Da Silva, Sandro Cahanda Marinho. January 2005 (has links)
This thesis addresses the analysis, evaluation and simulation of low-complexity face detection and tracking algorithms that could be used on mobile phones. Network access control using face recognition increases the user-friendliness of human-computer interaction. In order to realize a real-time system on handheld devices with low computing power, low-complexity algorithms for face detection and face tracking are implemented. Skin-color detection and face matching have an implementation complexity low enough for authentication of cellular network services. Novel approaches for reducing the complexity of these algorithms, together with fast implementations, are introduced in this thesis. This includes a fast algorithm for face detection in video sequences that uses a skin-color model in the HSV (Hue-Saturation-Value) color space, combined with a Gaussian model of the H and S statistics and adaptive thresholds. These algorithms permit segmentation and detection of multiple faces in thumbnail images. Furthermore, we evaluate and compare our results with those of a method implemented in the chromatic YCbCr color space. We also run our test data through a face detection method based on a Convolutional Neural Network architecture, to study the suitability of features other than skin color as the basis for face detection. Finally, face tracking is performed in 2D color video streams using HSV as the histogram color space. The program is used to compute 3D trajectories for a remote surveillance system.
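As a rough illustration of the skin-color classification this abstract describes (a Gaussian model of the H and S statistics with thresholds), the sketch below classifies a single RGB pixel. All means, deviations and the threshold factor are invented for demonstration; the thesis derives its own statistics and adaptive thresholds from training data.

```python
import colorsys

# Illustrative Gaussian skin model in the H and S channels of HSV.
# These parameters are made-up placeholders, not the thesis's values.
H_MEAN, H_STD = 0.05, 0.03   # hue of warm skin-like tones (normalized 0..1)
S_MEAN, S_STD = 0.45, 0.15   # saturation

def is_skin(r, g, b, k=2.0):
    """Accept an RGB pixel (0..255) as skin if its hue and saturation
    lie within k standard deviations of the Gaussian skin model."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return abs(h - H_MEAN) <= k * H_STD and abs(s - S_MEAN) <= k * S_STD

print(is_skin(220, 170, 140))  # a warm, moderately saturated tone → True
print(is_skin(0, 0, 255))      # pure blue → False
```

A real detector would apply such a test per pixel and then segment the resulting skin mask into candidate face regions.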
|
22 |
Automatic segmentation and registration techniques for 3D face recognition. / Automatic segmentation and registration techniques for three-dimensional face recognition / CUHK electronic theses & dissertations collection. January 2008 (has links)
A 3D range image acquired by 3D sensing can explicitly represent a three-dimensional object's shape regardless of viewpoint and lighting variations. This technology has great potential to eventually resolve the face recognition problem. An automatic 3D face recognition system consists of three stages: facial region segmentation, registration and recognition. The success of each stage influences the system's ultimate decision. Lately, research efforts have mainly been devoted to the final recognition stage. In this thesis, our study focuses on segmentation and registration techniques, with the aim of providing a more solid foundation for future 3D face recognition research. / Then we propose a fully automatic registration method that can handle facial expressions with high accuracy and robustness for 3D face image alignment. In our method, the nose region, which is anatomically more rigid than other facial regions, is automatically located and analyzed to compute the precise location of a symmetry plane. Extensive experiments have been conducted on the FRGC (V1.0 and V2.0) benchmark 3D face dataset to evaluate the accuracy and robustness of our registration method. Firstly, we compare its results with two other registration methods: one employs manually marked points on visualized face data, and the other is based on a symmetry-plane analysis of the whole face region. Secondly, we combine the registration method with other face recognition modules and apply them in both face identification and verification scenarios. Experimental results show that our approach performs better than the other two methods. For example, a 97.55% Rank-1 identification rate and a 2.25% EER are obtained by using our method for registration and the PCA method for matching on the FRGC V1.0 dataset.
All these results are the highest scores ever reported using the PCA method on similar datasets. / We first propose an automatic 3D face segmentation method. This method is based on a deep understanding of the 3D face image: proportions of the facial and nose regions, drawn from anthropometric studies, are used to locate those regions. We evaluate this segmentation method on the FRGC dataset and obtain a success rate as high as 98.87% on nose-tip detection. Compared with results reported by other researchers in the literature, our method yields the highest score. / Tang, Xinmin. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3616. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 109-117). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
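For context, the PCA ("eigenfaces") matching step used alongside the registration results above can be sketched on toy data. Everything here, the data sizes, the noise level, and the number of components, is an illustrative stand-in, not the thesis's actual pipeline or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face dataset: 20 "identities", each a 64-dim vector,
# with one gallery sample and one slightly perturbed probe per identity.
gallery = rng.normal(size=(20, 64))
probes = gallery + 0.05 * rng.normal(size=gallery.shape)

# PCA ("eigenfaces"): project onto the top principal components of the gallery.
mean = gallery.mean(axis=0)
U, S, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
components = Vt[:10]                      # top-10 "eigenfaces"

g_feat = (gallery - mean) @ components.T
p_feat = (probes - mean) @ components.T

# Rank-1 identification: each probe is matched to its nearest gallery vector.
nearest = np.argmin(
    np.linalg.norm(p_feat[:, None, :] - g_feat[None, :, :], axis=2), axis=1)
rank1 = (nearest == np.arange(20)).mean()
print(f"Rank-1 identification rate: {rank1:.2%}")
```

The reported Rank-1 rate in the abstract is exactly this quantity: the fraction of probes whose nearest gallery match is the correct identity.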
|
23 |
Statistical approaches for facial feature extraction and face recognition. / 抽取臉孔特徵及辨認臉孔的統計學方法 / Statistical approaches for facial feature extraction and face recognition. / Chou qu lian kong te zheng ji bian ren lian kong de tong ji xue fang fa. January 2004 (has links)
Sin Ka Yu = 抽取臉孔特徵及辨認臉孔的統計學方法 / 冼家裕. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (leaves 86-90). / Text in English; abstracts in English and Chinese. / Sin Ka Yu = Chou qu lian kong te zheng ji bian ren lian kong de tong ji xue fang fa / Xian Jiayu. / Chapter Chapter 1. --- Introduction --- p.1 / Chapter 1.1. --- Motivation --- p.1 / Chapter 1.2. --- Objectives --- p.4 / Chapter 1.3. --- Organization of the thesis --- p.4 / Chapter Chapter 2. --- Facial Feature Extraction --- p.6 / Chapter 2.1. --- Introduction --- p.6 / Chapter 2.2. --- Reviews of Statistical Approach --- p.8 / Chapter 2.2.1. --- Eigenfaces --- p.8 / Chapter 2.2.1.1. --- Eigenfeatures / Chapter 2.2.3. --- Singular Value Decomposition --- p.14 / Chapter 2.2.4. --- Summary --- p.15 / Chapter 2.3. --- Review of fiducial point localization methods --- p.16 / Chapter 2.3.1. --- Symmetry based Approach --- p.16 / Chapter 2.3.2. --- Color Based Approaches --- p.17 / Chapter 2.3.3. --- Integral Projection --- p.17 / Chapter 2.3.4. --- Deformable Template --- p.20 / Chapter 2.4. --- Corner-based Fiducial Point Localization --- p.22 / Chapter 2.4.1. --- Facial Region Extraction --- p.22 / Chapter 2.4.2. --- Corner Detection --- p.25 / Chapter 2.4.3. --- Corner Selection --- p.27 / Chapter 2.4.3.1. --- Mouth Corner Pairs Detection --- p.27 / Chapter 2.4.3.2. --- Iris Detection --- p.27 / Chapter 2.5. --- Experimental Results --- p.30 / Chapter 2.6. --- Conclusions --- p.30 / Chapter 2.7. --- Notes on Publications --- p.30 / Chapter Chapter 3. --- Fiducial Point Extraction with Shape Constraint --- p.32 / Chapter 3.1. --- Introduction --- p.32 / Chapter 3.2. --- Statistical Theory of Shape --- p.33 / Chapter 3.2.1. --- Shape Space --- p.33 / Chapter 3.2.2. --- Shape Distribution --- p.34 / Chapter 3.3. --- Shape Guided Fiducial Point Localization --- p.38 / Chapter 3.3.1. --- Shape Constraints --- p.38 / Chapter 3.3.2. 
--- Intelligent Search --- p.40 / Chapter 3.4. --- Experimental Results --- p.40 / Chapter 3.5. --- Conclusions --- p.42 / Chapter 3.6. --- Notes on Publications --- p.42 / Chapter Chapter 4. --- Statistical Pattern Recognition --- p.43 / Chapter 4.1. --- Introduction --- p.43 / Chapter 4.2. --- Bayes Decision Rule --- p.44 / Chapter 4.3. --- Gaussian Maximum Probability Classifier --- p.46 / Chapter 4.4. --- Maximum Likelihood Estimation of Mean and Covariance Matrix --- p.48 / Chapter 4.5. --- Small Sample Size Problem --- p.50 / Chapter 4.5.1. --- Dispersed Eigenvalues --- p.50 / Chapter 4.5.2. --- Distorted Classification Rule --- p.55 / Chapter 4.6. --- Review of Methods Handling the Small Sample Size Problem --- p.57 / Chapter 4.6.1. --- Linear Discriminant Classifier --- p.57 / Chapter 4.6.2. --- Regularized Discriminant Analysis --- p.59 / Chapter 4.6.3. --- Leave-one-out Likelihood Method --- p.63 / Chapter 4.6.4. --- Bayesian Leave-one-out Likelihood method --- p.65 / Chapter 4.7. --- Proposed Method --- p.68 / Chapter 4.7.1. --- A New Covariance Estimator --- p.70 / Chapter 4.7.2. --- Model Selection --- p.75 / Chapter 4.7.3. --- The Mixture Parameter --- p.76 / Chapter 4.8. --- Experimental results --- p.77 / Chapter 4.8.1. --- Implementation --- p.77 / Chapter 4.8.2. --- Results --- p.79 / Chapter 4.9. --- Conclusion --- p.81 / Chapter 4.10. --- Notes on Publications --- p.82 / Chapter Chapter 5. --- Conclusions and Future works --- p.83 / Chapter 5.1. --- Conclusions and Contributions --- p.83 / Chapter 5.2. --- Future Works --- p.84
|
24 |
Audio-guided video based face recognition. / CUHK electronic theses & dissertations collection. January 2006 (has links)
Face recognition is one of the most challenging computer vision research topics, since the same person's face appears different due to expression, pose, lighting, occlusion and many other confounding factors in real life. During the past thirty years, a number of face recognition techniques have been proposed. However, all of these methods focus exclusively on image-based face recognition, which uses a still image as input data. One problem with image-based face recognition is that a pre-recorded face photo can fool a camera into accepting it as a live subject. A second problem is that image-based recognition accuracy is still too low for some practical applications compared to other high-accuracy biometric technologies. To alleviate these problems, video-based face recognition has recently been proposed. One of its major advantages is that it prevents fraudulent system penetration by pre-recorded facial images: forging a video sequence in front of a live video camera is possible but very difficult, which helps ensure that the biometric data come from the user at the time of authentication. Another key advantage of the video-based method is that more information is available in a video sequence than in a single image. If this additional information can be properly extracted, recognition accuracy can be further increased. / In this thesis, we develop a new video-to-video face recognition algorithm [86]. The major advantage of the video-based method is that more information is available in a video sequence than in a single image. In order to exploit the large amount of information in the video sequence while overcoming the processing-speed and data-size problems, we develop several new techniques, including temporal and spatial frame synchronization, multi-level subspace analysis, and multi-classifier integration for video sequence processing. 
An aligned video sequence for each person is first obtained by applying temporal and spatial synchronization, which effectively establishes the face correspondence using both audio and video information; multi-level subspace analysis or multi-classifier integration is then employed for further analysis of the synchronized sequence. The method preserves all the temporal-spatial information contained in a video sequence. Near-perfect classification results are obtained on the largest available face video database, XM2VTS. In addition, using a similar framework, two much-improved still-image-based face recognition algorithms [93][94] are developed by incorporating the Gabor representation, a nonparametric feature extraction method, and multiple-classifier integration techniques. Extensive experiments on two well-known face databases (the XM2VTS and Purdue databases) clearly show the superiority of our new algorithms. / by Li Zhifeng. / "March 2006." / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6621. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 105-114). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
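The multi-classifier integration idea mentioned in this abstract can be illustrated with a minimal sum-rule fusion example. The matchers and scores below are entirely hypothetical, invented for demonstration, not the thesis's classifiers or data.

```python
# Similarity scores (higher = better match) that three hypothetical matchers
# assign to four gallery identities for one probe; identity 2 is the true
# match. The first two matchers narrowly pick the wrong identity, but simple
# sum-rule fusion of the three score vectors recovers the correct one.
c1 = [0.60, 0.10, 0.55, 0.20]   # matcher 1 narrowly prefers identity 0
c2 = [0.20, 0.65, 0.60, 0.15]   # matcher 2 narrowly prefers identity 1
c3 = [0.10, 0.20, 0.70, 0.30]   # matcher 3 prefers identity 2

fused = [a + b + c for a, b, c in zip(c1, c2, c3)]
best = max(range(len(fused)), key=fused.__getitem__)
print(best)  # → 2
```

Sum-rule fusion works here because the true identity receives moderately high scores from every matcher, while each wrong pick is supported by only one.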
|
25 |
Facial feature extraction and its applications =: 臉部特徵之擷取及其應用. / 臉部特徵之擷取及其應用 / Facial feature extraction and its applications =: Lian bu te zheng zhi xie qu ji qi ying yong. / Lian bu te zheng zhi xie qu ji qi ying yong. January 2001 (has links)
Lau Chun Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 173-177). / Text in English; abstracts in English and Chinese. / Lau Chun Man. / Acknowledgement --- p.ii / Abstract --- p.iii / Contents --- p.vi / List of Tables --- p.x / List of Figures --- p.xi / Notations --- p.xv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Facial features --- p.1 / Chapter 1.1.1 --- Face region --- p.2 / Chapter 1.1.2 --- Contours and locations of facial organs --- p.2 / Chapter 1.1.3 --- Fiducial points --- p.3 / Chapter 1.1.4 --- Features from Principal Components Analysis --- p.5 / Chapter 1.1.5 --- Relationships between facial features --- p.7 / Chapter 1.2 --- Facial feature extraction --- p.8 / Chapter 1.2.1 --- Extraction of contours and locations of facial organs --- p.9 / Chapter 1.2.2 --- Extraction of fiducial points --- p.9 / Chapter 1.3 --- Face recognition --- p.10 / Chapter 1.4 --- Face animation --- p.11 / Chapter 1.5 --- Thesis outline --- p.12 / Chapter 2 --- Extraction of contours and locations of facial organs --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- Deformable template model --- p.21 / Chapter 2.2.1 --- Introduction --- p.21 / Chapter 2.2.2 --- Segmentation of facial organs --- p.21 / Chapter 2.2.3 --- Estimation of iris location --- p.22 / Chapter 2.2.4 --- Eye template model --- p.23 / Chapter 2.2.5 --- Eye contour extraction --- p.24 / Chapter 2.2.6 --- Experimental results --- p.26 / Chapter 2.3 --- Integral projection method --- p.28 / Chapter 2.3.1 --- Introduction --- p.28 / Chapter 2.3.2 --- Pre-processing of the intensity map --- p.28 / Chapter 2.3.3 --- Processing of facial mask --- p.28 / Chapter 2.3.4 --- Integral projection --- p.29 / Chapter 2.3.5 --- Extraction of the irises --- p.30 / Chapter 2.3.6 --- Experimental results --- p.30 / Chapter 2.4 --- Active contour model (Snake) --- p.32 / Chapter 2.4.1 --- Introduction --- p.32 / Chapter 2.4.2 --- 
--- Forces on active contour model --- p.33 / Chapter 2.4.3 --- Mathematical representation of Snake --- p.33 / Chapter 2.4.4 --- Internal energy --- p.33 / Chapter 2.4.5 --- Image energy --- p.35 / Chapter 2.4.6 --- External energy --- p.36 / Chapter 2.4.7 --- Energy minimization --- p.36 / Chapter 2.4.8 --- Experimental results --- p.36 / Chapter 2.5 --- Summary --- p.38 / Chapter 3 --- Extraction of fiducial points --- p.39 / Chapter 3.1 --- Introduction --- p.39 / Chapter 3.2 --- Theory --- p.42 / Chapter 3.2.1 --- Face region extraction --- p.42 / Chapter 3.2.2 --- Iris detection and energy function --- p.44 / Chapter 3.2.3 --- Extraction of fiducial points --- p.53 / Chapter 3.2.4 --- Optimization of energy functions --- p.54 / Chapter 3.3 --- Experimental results --- p.55 / Chapter 3.4 --- Geometric features --- p.61 / Chapter 3.4.1 --- Definition of geometric features --- p.61 / Chapter 3.4.2 --- Selection of geometric features for face recognition --- p.63 / Chapter 3.4.3 --- Discussion --- p.73 / Chapter 3.5 --- Gabor features --- p.75 / Chapter 3.5.1 --- Introduction --- p.75 / Chapter 3.5.2 --- Properties of Gabor wavelets --- p.82 / Chapter 3.5.3 --- Gabor features for face recognition --- p.85 / Chapter 3.6 --- Summary --- p.89 / Chapter 4 --- The use of fiducial points for face recognition --- p.90 / Chapter 4.1 --- Introduction --- p.90 / Chapter 4.1.1 --- Problem of face recognition --- p.92 / Chapter 4.1.2 --- Face recognition process --- p.93 / Chapter 4.1.3 --- Features for face recognition --- p.94 / Chapter 4.1.4 --- Distance measure --- p.95 / Chapter 4.1.5 --- Interpretation of recognition results --- p.96 / Chapter 4.2 --- Face recognition by Principal Components Analysis (PCA) --- p.98 / Chapter 4.2.1 --- Introduction --- p.98 / Chapter 4.2.2 --- PCA recognition system overview --- p.101 / Chapter 4.2.3 --- Face database --- p.103 / Chapter 4.2.4 --- Experimental results and analysis --- p.103 / Chapter 4.3 --- Face recognition by geometric 
features --- p.105 / Chapter 4.3.1 --- System overview --- p.105 / Chapter 4.3.2 --- Face database --- p.107 / Chapter 4.3.3 --- Experimental results and analysis --- p.107 / Chapter 4.3.4 --- Summary --- p.109 / Chapter 4.4 --- Face recognition by Gabor features --- p.110 / Chapter 4.4.1 --- System overview --- p.110 / Chapter 4.4.2 --- Face database --- p.112 / Chapter 4.4.3 --- Experimental results and analysis --- p.112 / Chapter 4.4.4 --- Comparison of recognition rate --- p.123 / Chapter 4.4.5 --- Summary --- p.124 / Chapter 4.5 --- Summary --- p.125 / Chapter 5 --- The use of fiducial points for face animation --- p.126 / Chapter 5.1 --- Introduction --- p.126 / Chapter 5.2 --- Wire-frame model --- p.129 / Chapter 5.2.1 --- Wire-frame model I --- p.129 / Chapter 5.2.2 --- Wire-frame model II --- p.132 / Chapter 5.3 --- Construction of individualized 3-D face model --- p.133 / Chapter 5.3.1 --- Wire-frame fitting --- p.133 / Chapter 5.3.2 --- Texture mapping --- p.136 / Chapter 5.3.3 --- Experimental results --- p.142 / Chapter 5.4 --- Face definition and animation in MPEG4 --- p.144 / Chapter 5.4.1 --- Introduction --- p.144 / Chapter 5.4.2 --- Correspondences between fiducial points and FDPs --- p.146 / Chapter 5.4.3 --- Automatic generation of FDPs --- p.148 / Chapter 5.4.4 --- Generation of expressions by FAPs --- p.148 / Chapter 5.5 --- Summary --- p.152 / Chapter 6 --- Discussions and Conclusions --- p.153 / Chapter 6.1 --- Discussions --- p.153 / Chapter 6.1.1 --- Extraction of contours and locations of facial organs --- p.154 / Chapter 6.1.2 --- Extraction of fiducial points --- p.155 / Chapter 6.1.3 --- The use of fiducial points for face recognition --- p.156 / Chapter 6.1.4 --- The use of fiducial points for face animation --- p.157 / Chapter 6.2 --- Conclusions --- p.160 / Chapter A --- Mathematical derivation of Principal Components Analysis --- p.160 / Chapter B --- Face database --- p.173 / Bibliography
|
26 |
A design framework for ISFAR (an intelligent surveillance system with face recognition). January 2008 (has links)
Chan, Fai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 104-108). / Abstracts in English and Chinese. / Chapter 1. --- Introduction --- p.14 / Chapter 1.1. --- Background --- p.14 / Chapter 1.1.1. --- Introduction to Intelligent Surveillance System (ISS) --- p.14 / Chapter 1.1.2. --- Typical architecture of Surveillance System --- p.17 / Chapter 1.1.3. --- Single-camera vs Multi-camera Surveillance System --- p.17 / Chapter 1.1.4. --- Intelligent Surveillance System with Face Recognition (ISFAR) --- p.20 / Chapter 1.1.5. --- Minimal requirements for automatic Face Recognition --- p.21 / Chapter 1.2. --- Motivation --- p.22 / Chapter 1.3. --- Major Contributions --- p.26 / Chapter 1.3.1. --- A unified design framework for ISFAR --- p.26 / Chapter 1.3.2. --- Prototyping of ISFAR (ISFARO) --- p.29 / Chapter 1.3.3. --- Evaluation of ISFARO --- p.29 / Chapter 1.4. --- Thesis Organization --- p.30 / Chapter 2. --- Related Works --- p.31 / Chapter 2.1. --- Distant Human Identification (DHID) --- p.31 / Chapter 2.2. --- Distant Targets Identification System --- p.33 / Chapter 2.3. --- Virtual Vision System with Camera Scheduling --- p.35 / Chapter 3. --- A unified design framework for ISFAR --- p.37 / Chapter 3.1. --- Camera system modeling --- p.40 / Chapter 3.1.1. --- Stereo Triangulation (Human face location estimation) --- p.40 / Chapter 3.1.2. --- Camera system calibration --- p.42 / Chapter 3.2. --- Human face detection --- p.44 / Chapter 3.3. --- Human face tracking --- p.46 / Chapter 3.4. --- Human face correspondence --- p.50 / Chapter 3.4.1. --- Information consistency in stereo triangulation --- p.51 / Chapter 3.4.2. --- Proposed object correspondence algorithm --- p.52 / Chapter 3.5. --- Human face location and velocity estimation --- p.57 / Chapter 3.6. --- Human-Camera Synchronization --- p.58 / Chapter 3.6.1. --- Controlling a PTZ Camera for capturing human facial images --- p.60 / Chapter 3.6.2. 
--- Mathematical Formulation of the Human Face Capturing Problem --- p.61 / Chapter 4. --- Prototyping of ISFAR (ISFARO) --- p.64 / Chapter 4.1. --- Experiment Setup --- p.64 / Chapter 4.2. --- Speed of the PTZ camera - AXIS 213 PTZ --- p.67 / Chapter 4.3. --- Performance of human face detection and tracking --- p.68 / Chapter 4.4. --- Performance of human face correspondence --- p.72 / Chapter 4.5. --- Performance of human face location estimation --- p.74 / Chapter 4.6. --- Stability test of the Human-Camera Synchronization model --- p.75 / Chapter 4.7. --- Performance of ISFARO in capturing human facial images --- p.76 / Chapter 4.8. --- System Profiling of ISFARO --- p.79 / Chapter 4.9. --- Summary --- p.79 / Chapter 5. --- Improvements to ISFARO --- p.80 / Chapter 5.1. --- System Dynamics of ISFAR --- p.80 / Chapter 5.2. --- Proposed improvements to ISFARO --- p.82 / Chapter 5.2.1. --- Semi-automatic camera system calibration --- p.82 / Chapter 5.2.2. --- Velocity estimation using Kalman filter --- p.83 / Chapter 5.2.3. --- Reduction in PTZ camera delay --- p.87 / Chapter 5.2.4. --- Compensation of image blurriness due to motion from human --- p.89 / Chapter 5.3. --- Experiment Setup --- p.91 / Chapter 5.4. --- Performance of human face location estimation --- p.91 / Chapter 5.5. --- Speed of the PTZ Camera - SONY SNC RX-570 --- p.93 / Chapter 5.6. --- Performance of human face velocity estimation --- p.95 / Chapter 5.7. --- Performance of improved ISFARO in capturing human facial images --- p.99 / Chapter 6. --- Conclusions --- p.101 / Chapter 7. --- Bibliography --- p.104
|
27 |
Visual Attention in Brains and Computers. Hurlbert, Anya; Poggio, Tomaso. 01 September 1986 (has links)
Existing computer programs designed to perform visual recognition of objects suffer from a basic weakness: the inability to spotlight regions in the image that potentially correspond to objects of interest. The brain's mechanisms of visual attention, elucidated by psychophysicists and neurophysiologists, may suggest a solution to the computer's problem of object recognition.
|
28 |
Face Representation in Cortex: Studies Using a Simple and Not So Special Model. Rosen, Ezra. 05 June 2003 (has links)
The face inversion effect has been widely documented as a consequence of the uniqueness of face processing. Using a computational model, we show that the face inversion effect is a byproduct of expertise with respect to the face object class. In simulations using HMAX, a hierarchical, shape-based model, we show that the magnitude of the inversion effect is a function of the specificity of the representation. Using many, sharply tuned units, an "expert" has a large inversion effect. On the other hand, if fewer, broadly tuned units are used, the expertise is lost, and this "novice" has a small inversion effect. As the size of the inversion effect is a product of the representation, not the object class, given the right training we can create experts and novices in any object class. Using the same representations as with faces, we create experts and novices for cars. We also measure the feasibility of a view-based model for recognition of rotated objects using HMAX. Using faces, we show that transfer of learning to novel views is possible. Given only one training view, the view-based model can recognize a face at a new orientation via interpolation from the views to which it had been tuned. Although the model generalizes well to upright faces, inverted faces yield poor performance because the features change differently under rotation.
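The tuning-width argument, that sharp tuning yields a large inversion effect and broad tuning a small one, can be reproduced with a one-dimensional Gaussian-tuned unit. The distances and tuning widths below are arbitrary stand-ins for positions in a face feature space, not HMAX parameters.

```python
import math

def unit_response(x, center, sigma):
    """Gaussian-tuned unit: response falls off with the distance between
    the input and the unit's preferred (trained) pattern."""
    return math.exp(-(x - center) ** 2 / (2 * sigma ** 2))

center = 0.0                  # the trained (upright) appearance, as a 1-D stand-in
upright, inverted = 0.1, 1.0  # inversion moves the input far from the training views

# "Inversion effect" here = drop in response from upright to inverted input.
drops = {}
for label, sigma in [("expert", 0.3), ("novice", 1.5)]:
    drops[label] = (unit_response(upright, center, sigma)
                    - unit_response(inverted, center, sigma))
print(drops)  # the sharply tuned "expert" shows the much larger drop
```

With sharp tuning (small sigma) the inverted input falls far outside the unit's response range, so performance collapses; with broad tuning both inputs still drive the unit, so the effect shrinks.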
|
29 |
Role of color in face recognition. Yip, Andrew; Sinha, Pawan. 13 December 2001 (has links)
One of the key challenges in face perception lies in determining the contribution of different cues to face identification. In this study, we focus on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Here we report experimental results suggesting that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than that with grayscale images. Our experimental results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.
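The claim that chromatic differences can remain informative when luminance-based shape cues are degraded can be illustrated numerically: two colors with nearly equal gray levels are indistinguishable after grayscale conversion yet clearly distinct in RGB. The two "skin" colors below are invented for demonstration, not measured data from the study.

```python
def luminance(rgb):
    """Approximate perceived gray level (Rec. 601 luma weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two illustrative colors that differ mainly in hue, not in luminance.
skin_a = (200, 160, 130)
skin_b = (170, 175, 140)

gray_diff = abs(luminance(skin_a) - luminance(skin_b))
color_diff = sum(abs(x - y) for x, y in zip(skin_a, skin_b))
print(gray_diff, color_diff)  # tiny grayscale difference, large color difference
```

In grayscale the two patches collapse to almost the same value, mirroring the finding that color's contribution becomes evident once other cues are weakened.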
|
30 |
Automated face tracking and recognition. Hesher, Matthew Curtis. Erlebacher, Gordon. January 2003 (has links)
Thesis (M.S.)--Florida State University, 2003. / Advisor: Dr. Gordon Erlebacher, Florida State University, College of Arts and Sciences, Dept. of Computer Science. Title and description from dissertation home page (viewed Apr. 06, 2004). Includes bibliographical references.
|