31

Statistical approaches for facial feature extraction and face recognition. / 抽取臉孔特徵及辨認臉孔的統計學方法 / Chou qu lian kong te zheng ji bian ren lian kong de tong ji xue fang fa

January 2004
Sin Ka Yu = 抽取臉孔特徵及辨認臉孔的統計學方法 / 冼家裕. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (leaves 86-90). / Text in English; abstracts in English and Chinese. / Sin Ka Yu = Chou qu lian kong te zheng ji bian ren lian kong de tong ji xue fang fa / Xian Jiayu. / Chapter Chapter 1. --- Introduction --- p.1 / Chapter 1.1. --- Motivation --- p.1 / Chapter 1.2. --- Objectives --- p.4 / Chapter 1.3. --- Organization of the thesis --- p.4 / Chapter Chapter 2. --- Facial Feature Extraction --- p.6 / Chapter 2.1. --- Introduction --- p.6 / Chapter 2.2. --- Reviews of Statistical Approach --- p.8 / Chapter 2.2.1. --- Eigenfaces --- p.8 / Chapter 2.2.1.1. --- Eigenfeatures / Chapter 2.2.3. --- Singular Value Decomposition --- p.14 / Chapter 2.2.4. --- Summary --- p.15 / Chapter 2.3. --- Review of fiducial point localization methods --- p.16 / Chapter 2.3.1. --- Symmetry based Approach --- p.16 / Chapter 2.3.2. --- Color Based Approaches --- p.17 / Chapter 2.3.3. --- Integral Projection --- p.17 / Chapter 2.3.4. --- Deformable Template --- p.20 / Chapter 2.4. --- Corner-based Fiducial Point Localization --- p.22 / Chapter 2.4.1. --- Facial Region Extraction --- p.22 / Chapter 2.4.2. --- Corner Detection --- p.25 / Chapter 2.4.3. --- Corner Selection --- p.27 / Chapter 2.4.3.1. --- Mouth Corner Pairs Detection --- p.27 / Chapter 2.4.3.2. --- Iris Detection --- p.27 / Chapter 2.5. --- Experimental Results --- p.30 / Chapter 2.6. --- Conclusions --- p.30 / Chapter 2.7. --- Notes on Publications --- p.30 / Chapter Chapter 3. --- Fiducial Point Extraction with Shape Constraint --- p.32 / Chapter 3.1. --- Introduction --- p.32 / Chapter 3.2. --- Statistical Theory of Shape --- p.33 / Chapter 3.2.1. --- Shape Space --- p.33 / Chapter 3.2.2. --- Shape Distribution --- p.34 / Chapter 3.3. --- Shape Guided Fiducial Point Localization --- p.38 / Chapter 3.3.1. --- Shape Constraints --- p.38 / Chapter 3.3.2. 
--- Intelligent Search --- p.40 / Chapter 3.4. --- Experimental Results --- p.40 / Chapter 3.5. --- Conclusions --- p.42 / Chapter 3.6. --- Notes on Publications --- p.42 / Chapter Chapter 4. --- Statistical Pattern Recognition --- p.43 / Chapter 4.1. --- Introduction --- p.43 / Chapter 4.2. --- Bayes Decision Rule --- p.44 / Chapter 4.3. --- Gaussian Maximum Probability Classifier --- p.46 / Chapter 4.4. --- Maximum Likelihood Estimation of Mean and Covariance Matrix --- p.48 / Chapter 4.5. --- Small Sample Size Problem --- p.50 / Chapter 4.5.1. --- Dispersed Eigenvalues --- p.50 / Chapter 4.5.2. --- Distorted Classification Rule --- p.55 / Chapter 4.6. --- Review of Methods Handling the Small Sample Size Problem --- p.57 / Chapter 4.6.1. --- Linear Discriminant Classifier --- p.57 / Chapter 4.6.2. --- Regularized Discriminant Analysis --- p.59 / Chapter 4.6.3. --- Leave-one-out Likelihood Method --- p.63 / Chapter 4.6.4. --- Bayesian Leave-one-out Likelihood method --- p.65 / Chapter 4.7. --- Proposed Method --- p.68 / Chapter 4.7.1. --- A New Covariance Estimator --- p.70 / Chapter 4.7.2. --- Model Selection --- p.75 / Chapter 4.7.3. --- The Mixture Parameter --- p.76 / Chapter 4.8. --- Experimental results --- p.77 / Chapter 4.8.1. --- Implementation --- p.77 / Chapter 4.8.2. --- Results --- p.79 / Chapter 4.9. --- Conclusion --- p.81 / Chapter 4.10. --- Notes on Publications --- p.82 / Chapter Chapter 5. --- Conclusions and Future works --- p.83 / Chapter 5.1. --- Conclusions and Contributions --- p.83 / Chapter 5.2. --- Future Works --- p.84
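Chapter 4 of this record's thesis centres on the small sample size problem: with fewer training images than feature dimensions, the maximum likelihood covariance estimate is singular and the Gaussian maximum probability classifier breaks down. The thesis's own covariance estimator and model selection are specific to the work, but the general family it reviews (regularized discriminant analysis, shrinking the sample covariance toward a scaled identity via a mixture parameter) can be sketched as follows; the fixed `alpha` and the random data are illustrative assumptions, not the thesis's method:

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.5):
    """Blend the sample covariance with a scaled identity matrix.

    X: (n_samples, n_features) data matrix; alpha in [0, 1] is the
    mixture parameter (a stand-in for the thesis's model selection).
    """
    S = np.cov(X, rowvar=False, bias=True)            # ML sample covariance
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1 - alpha) * S + alpha * target

# With fewer samples than features the sample covariance is singular,
# but the shrunk estimate remains invertible.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))                          # n < d: singular S
S_hat = shrinkage_covariance(X, alpha=0.3)
print(np.linalg.matrix_rank(S_hat))                   # 10: full rank despite n < d
```

With `alpha = 0` this reduces to the plain ML estimate; in practice the mixture parameter would be chosen from data, e.g. by the leave-one-out likelihood methods reviewed in Chapter 4.6.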
32

Audio-guided video based face recognition. / CUHK electronic theses & dissertations collection

January 2006
Face recognition is one of the most challenging computer vision research topics since faces appear differently even for the same person due to expression, pose, lighting, occlusion and many other confounding factors in real life. During the past thirty years, a number of face recognition techniques have been proposed. However, all of these methods focus exclusively on image-based face recognition that uses a still image as input data. One problem with image-based face recognition is that it is possible to use a pre-recorded face photo to fool a camera into accepting it as a live subject. The second problem is that image-based recognition accuracy is still too low for some practical applications compared to other high-accuracy biometric technologies. To alleviate these problems, video-based face recognition has been proposed recently. One of the major advantages of video-based face recognition is that it prevents fraudulent system penetration by pre-recorded facial images. The great difficulty of forging a video sequence in front of a live video camera (possible, but very hard) helps ensure that the biometric data come from the user at the time of authentication. Another key advantage of the video-based method is that more information is available in a video sequence than in a single image. If the additional information can be properly extracted, we can further increase the recognition accuracy. / In this thesis, we develop a new video-to-video face recognition algorithm [86]. The major advantage of the video-based method is that more information is available in a video sequence than in a single image. In order to take advantage of the large amount of information in the video sequence, and at the same time overcome the processing speed and data size problems, we develop several new techniques including temporal and spatial frame synchronization, multi-level subspace analysis, and multi-classifier integration for video sequence processing.
An aligned video sequence for each person is first obtained by applying temporal and spatial synchronization, which effectively establishes the face correspondence using both audio and video information; then multi-level subspace analysis or multi-classifier integration is employed for further analysis based on the synchronized sequence. The method preserves all the temporal-spatial information contained in a video sequence. Near-perfect classification results are obtained on the largest available XM2VTS face video database. In addition, using a similar framework, two kinds of much-improved still-image-based face recognition algorithms [93][94] are developed by incorporating the Gabor representation, nonparametric feature extraction method, and multiple classifier integration techniques. Extensive experiments on two well-known face databases (the XM2VTS database and the Purdue database) clearly show the superiority of our new algorithms. / by Li Zhifeng. / "March 2006." / Adviser: Xiaoou Tang. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6621. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 105-114). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
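The synchronization and multi-level subspace analysis above are specific to the cited algorithm [86], but the underlying idea of combining a subspace projection (in the eigenface tradition) with per-frame matching and voting over a video sequence can be illustrated generically. Everything below (the toy gallery, dimensions, and noise level) is an assumption for illustration, not the thesis's implementation:

```python
import numpy as np

def pca_subspace(train, k):
    """Fit a k-dimensional PCA subspace to row-vector face images."""
    mean = train.mean(axis=0)
    U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, Vt[:k]                   # top-k principal directions

def classify_sequence(frames, mean, basis, gallery, labels):
    """Project every frame, match by nearest gallery face, majority-vote."""
    proj = (frames - mean) @ basis.T
    gal = (gallery - mean) @ basis.T
    votes = []
    for p in proj:
        d = np.linalg.norm(gal - p, axis=1)
        votes.append(labels[int(np.argmin(d))])
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(1)
gallery = rng.normal(size=(4, 64))        # four identities, 64-dim "faces"
labels = [0, 1, 2, 3]
mean, basis = pca_subspace(gallery, k=3)
frames = gallery[2] + 0.05 * rng.normal(size=(10, 64))  # noisy video of person 2
print(classify_sequence(frames, mean, basis, gallery, labels))  # 2
```

Voting over frames is the simplest form of multi-classifier integration: a single badly degraded frame cannot flip the sequence-level decision.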
33

Facial feature extraction and its applications = 臉部特徵之擷取及其應用 / Lian bu te zheng zhi xie qu ji qi ying yong

January 2001
Lau Chun Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 173-177). / Text in English; abstracts in English and Chinese. / Lau Chun Man. / Acknowledgement --- p.ii / Abstract --- p.iii / Contents --- p.vi / List of Tables --- p.x / List of Figures --- p.xi / Notations --- p.xv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Facial features --- p.1 / Chapter 1.1.1 --- Face region --- p.2 / Chapter 1.1.2 --- Contours and locations of facial organs --- p.2 / Chapter 1.1.3 --- Fiducial points --- p.3 / Chapter 1.1.4 --- Features from Principal Components Analysis --- p.5 / Chapter 1.1.5 --- Relationships between facial features --- p.7 / Chapter 1.2 --- Facial feature extraction --- p.8 / Chapter 1.2.1 --- Extraction of contours and locations of facial organs --- p.9 / Chapter 1.2.2 --- Extraction of fiducial points --- p.9 / Chapter 1.3 --- Face recognition --- p.10 / Chapter 1.4 --- Face animation --- p.11 / Chapter 1.5 --- Thesis outline --- p.12 / Chapter 2 --- Extraction of contours and locations of facial organs --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- Deformable template model --- p.21 / Chapter 2.2.1 --- Introduction --- p.21 / Chapter 2.2.2 --- Segmentation of facial organs --- p.21 / Chapter 2.2.3 --- Estimation of iris location --- p.22 / Chapter 2.2.4 --- Eye template model --- p.23 / Chapter 2.2.5 --- Eye contour extraction --- p.24 / Chapter 2.2.6 --- Experimental results --- p.26 / Chapter 2.3 --- Integral projection method --- p.28 / Chapter 2.3.1 --- Introduction --- p.28 / Chapter 2.3.2 --- Pre-processing of the intensity map --- p.28 / Chapter 2.3.3 --- Processing of facial mask --- p.28 / Chapter 2.3.4 --- Integral projection --- p.29 / Chapter 2.3.5 --- Extraction of the irises --- p.30 / Chapter 2.3.6 --- Experimental results --- p.30 / Chapter 2.4 --- Active contour model (Snake) --- p.32 / Chapter 2.4.1 --- Introduction --- p.32 / Chapter 2.4.2 --- 
Forces on active contour model --- p.33 / Chapter 2.4.3 --- Mathematical representation of Snake --- p.33 / Chapter 2.4.4 --- Internal energy --- p.33 / Chapter 2.4.5 --- Image energy --- p.35 / Chapter 2.4.6 --- External energy --- p.36 / Chapter 2.4.7 --- Energy minimization --- p.36 / Chapter 2.4.8 --- Experimental results --- p.36 / Chapter 2.5 --- Summary --- p.38 / Chapter 3 --- Extraction of fiducial points --- p.39 / Chapter 3.1 --- Introduction --- p.39 / Chapter 3.2 --- Theory --- p.42 / Chapter 3.2.1 --- Face region extraction --- p.42 / Chapter 3.2.2 --- Iris detection and energy function --- p.44 / Chapter 3.2.3 --- Extraction of fiducial points --- p.53 / Chapter 3.2.4 --- Optimization of energy functions --- p.54 / Chapter 3.3 --- Experimental results --- p.55 / Chapter 3.4 --- Geometric features --- p.61 / Chapter 3.4.1 --- Definition of geometric features --- p.61 / Chapter 3.4.2 --- Selection of geometric features for face recognition --- p.63 / Chapter 3.4.3 --- Discussion --- p.73 / Chapter 3.5 --- Gabor features --- p.75 / Chapter 3.5.1 --- Introduction --- p.75 / Chapter 3.5.2 --- Properties of Gabor wavelets --- p.82 / Chapter 3.5.3 --- Gabor features for face recognition --- p.85 / Chapter 3.6 --- Summary --- p.89 / Chapter 4 --- The use of fiducial points for face recognition --- p.90 / Chapter 4.1 --- Introduction --- p.90 / Chapter 4.1.1 --- Problem of face recognition --- p.92 / Chapter 4.1.2 --- Face recognition process --- p.93 / Chapter 4.1.3 --- Features for face recognition --- p.94 / Chapter 4.1.4 --- Distance measure --- p.95 / Chapter 4.1.5 --- Interpretation of recognition results --- p.96 / Chapter 4.2 --- Face recognition by Principal Components Analysis (PCA) --- p.98 / Chapter 4.2.1 --- Introduction --- p.98 / Chapter 4.2.2 --- PCA recognition system overview --- p.101 / Chapter 4.2.3 --- Face database --- p.103 / Chapter 4.2.4 --- Experimental results and analysis --- p.103 / Chapter 4.3 --- Face recognition by geometric 
features --- p.105 / Chapter 4.3.1 --- System overview --- p.105 / Chapter 4.3.2 --- Face database --- p.107 / Chapter 4.3.3 --- Experimental results and analysis --- p.107 / Chapter 4.3.4 --- Summary --- p.109 / Chapter 4.4 --- Face recognition by Gabor features --- p.110 / Chapter 4.4.1 --- System overview --- p.110 / Chapter 4.4.2 --- Face database --- p.112 / Chapter 4.4.3 --- Experimental results and analysis --- p.112 / Chapter 4.4.4 --- Comparison of recognition rate --- p.123 / Chapter 4.4.5 --- Summary --- p.124 / Chapter 4.5 --- Summary --- p.125 / Chapter 5 --- The use of fiducial points for face animation --- p.126 / Chapter 5.1 --- Introduction --- p.126 / Chapter 5.2 --- Wire-frame model --- p.129 / Chapter 5.2.1 --- Wire-frame model I --- p.129 / Chapter 5.2.2 --- Wire-frame model II --- p.132 / Chapter 5.3 --- Construction of individualized 3-D face model --- p.133 / Chapter 5.3.1 --- Wire-frame fitting --- p.133 / Chapter 5.3.2 --- Texture mapping --- p.136 / Chapter 5.3.3 --- Experimental results --- p.142 / Chapter 5.4 --- Face definition and animation in MPEG4 --- p.144 / Chapter 5.4.1 --- Introduction --- p.144 / Chapter 5.4.2 --- Correspondences between fiducial points and FDPs --- p.146 / Chapter 5.4.3 --- Automatic generation of FDPs --- p.148 / Chapter 5.4.4 --- Generation of expressions by FAPs --- p.148 / Chapter 5.5 --- Summary --- p.152 / Chapter 6 --- Discussions and Conclusions --- p.153 / Chapter 6.1 --- Discussions --- p.153 / Chapter 6.1.1 --- Extraction of contours and locations of facial organs --- p.154 / Chapter 6.1.2 --- Extraction of fiducial points --- p.155 / Chapter 6.1.3 --- The use of fiducial points for face recognition --- p.156 / Chapter 6.1.4 --- The use of fiducial points for face animation --- p.157 / Chapter 6.2 --- Conclusions --- p.160 / Chapter A --- Mathematical derivation of Principal Components Analysis --- p.160 / Chapter B --- Face database --- p.173 / Bibliography
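Chapter 3.5 of this record's thesis extracts Gabor features at fiducial points for recognition. A minimal sketch of a Gabor filter bank and its response vector ("jet") at a single point follows; the kernel size, wavelength, and orientations are illustrative assumptions, and practical systems typically use several scales and complex-valued filters:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a plane wave under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated wave coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def jet_at(image, px, py, kernels):
    """Response magnitudes of a filter bank at one fiducial point."""
    half = kernels[0].shape[0] // 2
    patch = image[py - half:py + half + 1, px - half:px + half + 1]
    return np.array([abs((patch * k).sum()) for k in kernels])

# A bank of 4 orientations, evaluated at the centre of a toy image.
bank = [gabor_kernel(15, wavelength=6, theta=t, sigma=3)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
img = np.zeros((31, 31))
img[:, 15] = 1.0                                   # a vertical line
jet = jet_at(img, 15, 15, bank)
print(int(np.argmax(jet)))                         # 0: the theta=0 unit fires most
```

Comparing jets at corresponding fiducial points of two faces (e.g. by normalized dot product) is the usual way such features feed a recognition stage.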
34

A design framework for ISFAR: (an intelligent surveillance system with face recognition).

January 2008
Chan, Fai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 104-108). / Abstracts in English and Chinese. / Chapter 1. --- Introduction --- p.14 / Chapter 1.1. --- Background --- p.14 / Chapter 1.1.1. --- Introduction to Intelligent Surveillance System (ISS) --- p.14 / Chapter 1.1.2. --- Typical architecture of Surveillance System --- p.17 / Chapter 1.1.3. --- Single-camera vs Multi-camera Surveillance System --- p.17 / Chapter 1.1.4. --- Intelligent Surveillance System with Face Recognition (ISFAR) --- p.20 / Chapter 1.1.5. --- Minimal requirements for automatic Face Recognition --- p.21 / Chapter 1.2. --- Motivation --- p.22 / Chapter 1.3. --- Major Contributions --- p.26 / Chapter 1.3.1. --- A unified design framework for ISFAR --- p.26 / Chapter 1.3.2. --- Prototyping of ISFAR (ISFARO) --- p.29 / Chapter 1.3.3. --- Evaluation of ISFARO --- p.29 / Chapter 1.4. --- Thesis Organization --- p.30 / Chapter 2. --- Related Works --- p.31 / Chapter 2.1. --- Distant Human Identification (DHID) --- p.31 / Chapter 2.2. --- Distant Targets Identification System --- p.33 / Chapter 2.3. --- Virtual Vision System with Camera Scheduling --- p.35 / Chapter 3. --- A unified design framework for ISFAR --- p.37 / Chapter 3.1. --- Camera system modeling --- p.40 / Chapter 3.1.1. --- Stereo Triangulation (Human face location estimation) --- p.40 / Chapter 3.1.2. --- Camera system calibration --- p.42 / Chapter 3.2. --- Human face detection --- p.44 / Chapter 3.3. --- Human face tracking --- p.46 / Chapter 3.4. --- Human face correspondence --- p.50 / Chapter 3.4.1. --- Information consistency in stereo triangulation --- p.51 / Chapter 3.4.2. --- Proposed object correspondent algorithm --- p.52 / Chapter 3.5. --- Human face location and velocity estimation --- p.57 / Chapter 3.6. --- Human-Camera Synchronization --- p.58 / Chapter 3.6.1. --- Controlling a PTZ Camera for capturing human facial images --- p.60 / Chapter 3.6.2. 
--- Mathematical Formulation of the Human Face Capturing Problem --- p.61 / Chapter 4. --- Prototyping of ISFAR (ISFARO) --- p.64 / Chapter 4.1. --- Experiment Setup --- p.64 / Chapter 4.2. --- Speed of the PTZ camera - AXIS 213 PTZ --- p.67 / Chapter 4.3. --- Performance of human face detection and tracking --- p.68 / Chapter 4.4. --- Performance of human face correspondence --- p.72 / Chapter 4.5. --- Performance of human face location estimation --- p.74 / Chapter 4.6. --- Stability test of the Human-Camera Synchronization model --- p.75 / Chapter 4.7. --- Performance of ISFARO in capturing human facial images --- p.76 / Chapter 4.8. --- System Profiling of ISFARO --- p.79 / Chapter 4.9. --- Summary --- p.79 / Chapter 5. --- Improvements to ISFARO --- p.80 / Chapter 5.1. --- System Dynamics of ISFAR --- p.80 / Chapter 5.2. --- Proposed improvements to ISFARO --- p.82 / Chapter 5.2.1. --- Semi-automatic camera system calibration --- p.82 / Chapter 5.2.2. --- Velocity estimation using Kalman filter --- p.83 / Chapter 5.2.3. --- Reduction in PTZ camera delay --- p.87 / Chapter 5.2.4. --- Compensation of image blurriness due to motion from human --- p.89 / Chapter 5.3. --- Experiment Setup --- p.91 / Chapter 5.4. --- Performance of human face location estimation --- p.91 / Chapter 5.5. --- Speed of the PTZ Camera - SONY SNC RX-570 --- p.93 / Chapter 5.6. --- Performance of human face velocity estimation --- p.95 / Chapter 5.7. --- Performance of improved ISFARO in capturing human facial images --- p.99 / Chapter 6. --- Conclusions --- p.101 / Chapter 7. --- Bibliography --- p.104
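Chapter 3.1.1 of this record's thesis estimates human face location by stereo triangulation from a calibrated camera pair. A common textbook formulation (not necessarily the one used in the thesis) is the midpoint method, which finds the point closest to both viewing rays; the camera placement and face position below are toy values:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    c1, c2: camera centres; d1, d2: unit ray directions toward the face.
    """
    # Solve for the ray parameters minimising |(c1 + t1*d1) - (c2 + t2*d2)|.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return (p1 + p2) / 2

# Two cameras 1 m apart, both looking at a face at (0.5, 0.2, 3.0).
face = np.array([0.5, 0.2, 3.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = (face - c1) / np.linalg.norm(face - c1)
d2 = (face - c2) / np.linalg.norm(face - c2)
print(triangulate_midpoint(c1, d1, c2, d2))   # recovers (0.5, 0.2, 3.0)
```

With noisy detections the two rays no longer intersect, which is why the midpoint (rather than an exact intersection) is used; the estimated 3-D location then drives the PTZ camera pointing described in Chapter 3.6.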
35

Visual Attention in Brains and Computers

Hurlbert, Anya, Poggio, Tomaso 01 September 1986
Existing computer programs designed to perform visual recognition of objects suffer from a basic weakness: the inability to spotlight regions in the image that potentially correspond to objects of interest. The brain's mechanisms of visual attention, elucidated by psychophysicists and neurophysiologists, may suggest a solution to the computer's problem of object recognition.
36

Face Representation in Cortex: Studies Using a Simple and Not So Special Model

Rosen, Ezra 05 June 2003
The face inversion effect has been widely documented as an effect of the uniqueness of face processing. Using a computational model, we show that the face inversion effect is a byproduct of expertise with respect to the face object class. In simulations using HMAX, a hierarchical, shape-based model, we show that the magnitude of the inversion effect is a function of the specificity of the representation. Using many, sharply tuned units, an "expert" has a large inversion effect. On the other hand, if fewer, broadly tuned units are used, the expertise is lost, and this "novice" has a small inversion effect. As the size of the inversion effect is a product of the representation, not the object class, given the right training we can create experts and novices in any object class. Using the same representations as with faces, we create experts and novices for cars. We also measure the feasibility of a view-based model for recognition of rotated objects using HMAX. Using faces, we show that transfer of learning to novel views is possible. Given only one training view, the view-based model can recognize a face at a new orientation via interpolation from the views to which it had been tuned. Although the model can generalize well to upright faces, inverted faces yield poor performance because the features change differently under rotation.
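The claim that inversion-effect magnitude tracks tuning specificity can be caricatured with Gaussian-tuned units, in the spirit of (but far simpler than) the HMAX simulations described: sharply tuned "expert" units lose almost all response when the stimulus moves far from the learned views, while broadly tuned "novice" units degrade gracefully. All dimensions, tuning widths, and perturbation sizes below are illustrative assumptions:

```python
import numpy as np

def unit_responses(stimulus, prototypes, sigma):
    """Gaussian-tuned view units: response decays with distance to prototype."""
    d2 = ((prototypes - stimulus) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(2)
prototypes = rng.normal(size=(20, 8))            # units tuned to upright faces
upright = prototypes[0] + 0.05 * rng.normal(size=8)
inverted = upright + 3.0 * rng.normal(size=8)    # inversion: large feature change

def inversion_effect(sigma):
    up = unit_responses(upright, prototypes, sigma).max()
    inv = unit_responses(inverted, prototypes, sigma).max()
    return up - inv                              # drop in the best response

expert_drop = inversion_effect(sigma=0.5)        # sharply tuned units
novice_drop = inversion_effect(sigma=6.0)        # broadly tuned units
print(expert_drop > novice_drop)                 # True: the expert loses more
```

The point of the sketch is only the ordering, not the magnitudes: sharpening the tuning width is enough to produce a larger upright-minus-inverted gap, with no property unique to faces involved.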
37

Role of color in face recognition

Yip, Andrew, Sinha, Pawan 13 December 2001
One of the key challenges in face perception lies in determining the contribution of different cues to face identification. In this study, we focus on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Here we report experimental results suggesting that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than that with grayscale images. Our experimental results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.
38

The Identity Myth: Constructing the Face in Technologies of Citizenship

Ferenbok, Joseph 13 April 2010
Over the last century, images of faces have become integral components of many institutional identification systems. A driver’s licence, a passport and often even a health care card all usually prominently feature images representing the face of their bearer as part of the mechanism for linking real-world bodies to institutional records. Increasingly, the production, distribution and inspection of these documents are becoming computer-mediated. As photo ID documents become ‘enhanced’ by computerization, the design challenges and compromises become increasingly coded in the hierarchy of gazes aimed at individual faces and their technologically mediated surrogates. In Western visual culture, representations of faces have been incorporated into identity documents since the 15th century when Renaissance portraits were first used to visually and legally establish the social and institutional positions of particular individuals. However, it was not until the 20th century that official identity documents and infrastructures began to include photographic representations of individual faces. This work explores photo ID documents within the context of “the face,” a theoretical model for understanding relationships of power coded using representations of particular human faces as tokens of identity. “The face” is a product of mythology for linking ideas of stable identity with images of particular human beings. This thesis extends the panoptic model of the body and contributes to the understanding of changes posed by computerization to the norms of constructing institutional identity and interaction based on surrogates of faces. The exploration is guided by four key research questions: What is “the face”? How does it work? What are its origins (or mythologies)? And how is “the face” being transformed through digitization? 
To address these questions this thesis weaves ideas from theorists including Foucault, Deleuze and Lyon to explore the rise of “the face” as a strategy for governing, sorting, and classifying members of constituent populations. The work re-examines the techno-political value of captured faces as identity data and by tracing the cultural and techno-political genealogies tying faces to ideas of stable institutional identities this thesis demonstrates face-based identity practices are being improvised and reconfigured by computerization and why these practices are significant for understanding the changing norms of interaction between individuals and institutions.
40

Towards the Development of Training Tools for Face Recognition

Rodriguez, Jobany May 2011
Distinctiveness plays an important role in the recognition of faces, i.e., a distinctive face is usually easier to remember than a typical face in a recognition task. This distinctiveness effect explains why caricatures are recognized faster and more accurately than unexaggerated (i.e., veridical) faces. Furthermore, using caricatures during training can facilitate recognition of a person’s face at a later time. The objective of this thesis is to determine the extent to which photorealistic computer-generated caricatures may be used in training tools to improve recognition of faces by humans. To pursue this objective, we developed a caricaturization procedure for three-dimensional (3D) face models, and characterized face recognition performance (by humans) through a series of perceptual studies. The first study focused on 3D shape information without texture. Namely, we tested whether exposure to caricatures during an initial familiarization phase would aid in the recognition of their veridical counterparts at a later time. We examined whether this effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces during the recognition phase. Results indicate that, even under these difficult training conditions, people are more accurate at recognizing unaltered faces if they are first familiarized with caricatures of the faces, rather than with the unaltered faces. These preliminary findings support the use of caricatures in new training methods to improve face recognition. In the second study, we incorporated texture into our 3D models, which allowed us to generate photorealistic renderings. In this study, we sought to determine the extent to which familiarization with caricaturized faces could also be used to reduce other-race effects (e.g., the phenomenon whereby faces from other races appear less distinct than faces from our own race). 
Using an old/new face recognition paradigm, Caucasian participants were first familiarized with a set of faces from multiple races, and then asked to recognize those faces among a set of confounders. Participants who were familiarized with and then asked to recognize veridical versions of the faces showed a significant other-race effect on Indian faces. In contrast, participants who were familiarized with caricaturized versions of the same faces, and then asked to recognize their veridical versions, showed no other-race effects on Indian faces. This result suggests that caricaturization may be used to help individuals focus their attention to features that are useful for recognition of other-race faces. The third and final experiment investigated the practical application of our earlier results. Since 3D facial scans are not generally available, here we also sought to determine whether 3D reconstructions from 2D frontal images could be used for the same purpose. Using the same old/new face recognition paradigm, participants who were familiarized with reconstructed faces and then asked to recognize the ground truth versions of the faces showed a significant reduction in performance compared to the previous study. In addition, participants who were familiarized with caricatures of reconstructed versions, and then asked to recognize their corresponding ground truth versions, showed a larger reduction in performance. Our results suggest that, despite the high level of photographic realism achieved by current 3D facial reconstruction methods, additional research is needed in order to reduce reconstruction errors and capture the distinctive facial traits of an individual. These results are critical for the development of training tools based on computer-generated photorealistic caricatures from “mug shot” images.
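The caricaturization procedure described in this thesis operates on 3-D face models; the standard idea it builds on, exaggerating a face's deviation from an average face, can be sketched in a few lines. The landmark coordinates and the `strength` value below are toy assumptions, not the thesis's procedure:

```python
import numpy as np

def caricature(shape, mean_shape, strength=1.5):
    """Exaggerate a face by scaling its deviation from the average face.

    shape, mean_shape: (n_landmarks, 3) arrays of 3-D vertex positions.
    strength > 1 exaggerates; strength = 1 returns the veridical face.
    """
    return mean_shape + strength * (shape - mean_shape)

# Toy example: distinctive landmarks pushed further from the norm.
mean_shape = np.zeros((4, 3))
face = np.array([[0.0, 0.0, 0.0],
                 [0.1, 0.0, 0.0],     # slightly wide jaw
                 [0.0, 0.2, 0.0],     # prominent nose
                 [0.0, 0.0, 0.0]])
print(caricature(face, mean_shape, strength=2.0)[2])   # nose offset doubled
```

Because the transform is linear in the deviation, typical landmarks barely move while distinctive ones move a lot, which is exactly the distinctiveness effect the training studies exploit.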
