1. Names and faces: the role of name labels in the formation of face representations. Gordon, Iris. 31 May 2011.
Although previous research in event-related potentials (ERPs) has focused on the conditions under which faces are recognized, less attention has been paid to the process by which face representations are acquired and maintained. In Experiment 1, participants were required to monitor for a target "Joe" face shown amongst a series of distractor "Other" faces. At the half-way point, participants were instructed to switch targets from the Joe face to a previous distractor face now labeled "Bob". The ERP analysis focused on the posterior N250 component, known to index face familiarity, and the P300 component, associated with context updating and response decision. Results showed that the N250 was more negative to the target Joe face than to the Bob face and a designated Other face. In the second half of the experiment, a more negative N250 was produced to the now-target Bob face compared to the Other face. Critically, the more negative N250 to the Joe face was maintained even though Joe was no longer the target. The P300 component followed a similar pattern: the Joe face elicited a significantly larger P300 amplitude than the Other and Bob faces. In the Bob half of the experiment, the Bob face elicited a reliably larger P300 than the Other faces, and the heightened P300 to the Joe face was sustained. In Experiment 2, we examined whether the increased N250 negativity and enhanced P300 to Joe were due to simple naming effects. Participants were introduced to both the Joe and Bob faces and names at the beginning of the experiment. During the first half of the experiment, participants were to monitor for the Joe face, and at the half-way point they were instructed to switch targets to the Bob face. Findings show that N250 negativity significantly increased to the Joe face relative to the Bob and Other faces in the first half of the experiment, and an increased N250 negativity was found for the target Bob face and the non-target Joe face in the second half. An increased P300 amplitude was demonstrated to the target Joe and Bob faces in the first and second halves of the experiment, respectively. Importantly, the P300 amplitude elicited by the Joe face equaled the P300 amplitude to the Bob face even though Joe was no longer the target face. The findings from Experiments 1 and 2 suggest that the N250 component is not solely determined by name labeling, exposure, or task relevance; rather, it is the combination of these factors that contributes to the acquisition of enduring face representations.
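ERP effects of the kind reported above are typically quantified as mean amplitudes within a fixed post-stimulus time window at a subset of electrodes. As a rough illustration of that style of analysis, the sketch below computes per-condition mean amplitudes from an epoched EEG array; the sampling rate, window boundaries, channel indices, and condition labels are illustrative assumptions, not the parameters used in the thesis.

```python
import numpy as np

# Assumed data layout (illustrative only):
#   epochs: (n_trials, n_channels, n_samples), baseline-corrected, in microvolts
#   labels: (n_trials,) numpy array of condition strings, e.g. "joe", "bob", "other"
FS = 250                       # sampling rate in Hz (assumed)
EPOCH_START = -0.2             # epoch starts 200 ms before stimulus onset (assumed)
N250_WINDOW = (0.23, 0.33)     # posterior N250 window in seconds (assumed)
P300_WINDOW = (0.30, 0.60)     # P300 window in seconds (assumed)
POSTERIOR_CHANNELS = [14, 15]  # hypothetical indices of posterior sites
PARIETAL_CHANNELS = [10, 11]   # hypothetical indices of parietal sites


def window_to_samples(window, fs=FS, epoch_start=EPOCH_START):
    """Convert a (start, end) latency window in seconds to sample indices."""
    start = int(round((window[0] - epoch_start) * fs))
    end = int(round((window[1] - epoch_start) * fs))
    return start, end


def mean_amplitude(epochs, labels, condition, window, channels):
    """Mean amplitude for one condition, averaged over trials, channels, and time."""
    lo, hi = window_to_samples(window)
    trials = epochs[labels == condition]
    return trials[:, channels, lo:hi].mean()


# Example comparison mirroring the design: target vs. non-target faces.
# for cond in ("joe", "bob", "other"):
#     n250 = mean_amplitude(epochs, labels, cond, N250_WINDOW, POSTERIOR_CHANNELS)
#     p300 = mean_amplitude(epochs, labels, cond, P300_WINDOW, PARIETAL_CHANNELS)
#     print(f"{cond}: N250={n250:.2f} uV, P300={p300:.2f} uV")
```

In this kind of summary, a more negative value in the N250 window for the Joe face than for the Other faces would correspond to the familiarity effect described above.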
2. Dynamic face models: construction and application. Li, Yongmin. January 2001.
No description available.
3. Facilitation and inhibition of person identification. Brennen, Tim. January 1989.
No description available.
4. Event-related brain potential correlates of familiar face and name processing. Pickering, Esther. January 2002.
No description available.
5. A generic neural network architecture for deformation invariant object recognition. Banarse, D. S. January 1997.
No description available.
6. The role of emotion in face recognition. Bate, Sarah. January 2008.
This thesis examines the role of emotion in face recognition, using measures of the visual scanpath as indicators of recognition. There are two key influences of emotion in face recognition: the emotional expression displayed upon a face, and the emotional feelings evoked within a perceiver in response to a familiar person. An initial set of studies examined these processes in healthy participants. First, positive emotional expressions were found to facilitate the processing of famous faces, and negative expressions facilitated the processing of novel faces. A second set of studies examined the role of emotional feelings in recognition. Positive feelings towards a face were also found to facilitate processing, in both an experimental study using newly learned faces and in the recognition of famous faces. A third set of studies using healthy participants examined the relative influences of emotional expression and emotional feelings in face recognition. For newly learned faces, positive expressions and positive feelings had a similar influence in recognition, with no presiding role of either dimension. However, emotional feelings had an influence over and above that of expression in the recognition of famous faces. A final study examined whether emotional valence could influence covert recognition in developmental prosopagnosia, and results suggested the patients process faces according to emotional valence rather than familiarity per se. Specifically, processing was facilitated for studied-positive faces compared to studied-neutral and novel faces, but impeded for studied-negative faces. This pattern of findings extends existing reports of a positive-facilitation effect in face recognition, and suggests there may be a closer relationship between facial familiarity and emotional valence than previously envisaged. The implications of these findings are discussed in relation to models of normal face recognition and theories of covert recognition in prosopagnosia.
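The scanpath measures used here as indicators of recognition are usually simple summaries of fixation behaviour, such as the number of fixations and the total distance the eyes travel over a face. The sketch below computes two such summaries from a list of fixation records; the data format and the specific measures are generic illustrations, not the exact measures used in the thesis.

```python
import numpy as np

def scanpath_summary(fixations):
    """Summarize a scanpath given fixations as (x, y, duration_ms) tuples.

    Returns the fixation count, total fixation duration, and scanpath length
    (summed Euclidean distance between successive fixations, in pixels).
    """
    fixations = np.asarray(fixations, dtype=float)
    xy = fixations[:, :2]
    durations = fixations[:, 2]
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return {
        "n_fixations": len(fixations),
        "total_duration_ms": durations.sum(),
        "scanpath_length_px": steps.sum(),
    }

# Fewer fixations and shorter scanpaths on a face are commonly taken to
# indicate more efficient (e.g. familiar) processing.
example = scanpath_summary([(120, 140, 210), (180, 150, 310), (150, 200, 250)])
```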
7. Online Face Recognition Game. Qu, Yawe; Yang, Mingxi. January 2006.
The purpose of this project is to test and improve people's ability to recognize faces. Although there are tests on the internet with the same purpose, the problem is that people may feel bored and give up before finishing them; consequently they may benefit from neither the testing nor the training. To solve this problem, this project combines face recognition with an online game. The game is intended to provide entertainment while people play, so that more people will take the test and improve their face recognition abilities. In the game design, the action takes place in an imaginary face recognition lab. The player takes the main role in the game and is asked to solve a number of problems; several scenarios await the player, most of which call on face recognition skills. At the end, the player receives an evaluation of her/his face recognition skills.
8. Image-based face recognition under varying pose and illumination conditions. Du, Shan. 05 1900.
Image-based face recognition has found wide application over the past decades in commerce and law enforcement, for example mug shot database matching, identity authentication, and access control. Existing face recognition techniques (e.g., Eigenface, Fisherface, and Elastic Bunch Graph Matching), however, do not perform well in a situation that inevitably arises in practice: because of variations in imaging conditions, such as pose and illumination changes, face images of the same person often have different appearances, and these variations make face recognition much more challenging. With this concern in mind, the objective of my research is to develop face recognition techniques that are robust to such variations.
This thesis addresses the two main variation problems in face recognition: pose and illumination. To improve the performance of face recognition systems, the following methods are proposed: (1) a face feature extraction and representation method using non-uniformly selected Gabor convolution features; (2) an illumination normalization method using adaptive region-based image enhancement for face recognition under variable illumination conditions; (3) an eye detection method for gray-scale face images under various illumination conditions; and (4) a virtual pose generation method for pose-invariant face recognition. The details of these proposed methods are explained in this thesis. In addition, we conduct a comprehensive survey of existing face recognition methods and point out future research directions.
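Of the existing techniques named above, the Eigenface approach is the simplest to illustrate: flattened face images are projected onto a small number of principal components ("eigenfaces") and matched by nearest neighbour in that subspace. The sketch below is a minimal, generic Eigenface baseline using scikit-learn; it is not an implementation of the thesis's proposed Gabor-based or pose-generation methods, and the array shapes and parameter values are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_eigenface(train_images, train_ids, n_components=50):
    """Fit a basic Eigenface model.

    train_images: (n_samples, height*width) array of flattened, aligned
    grey-level face images; train_ids: (n_samples,) identity labels.
    n_components must not exceed the number of training samples.
    """
    pca = PCA(n_components=n_components, whiten=True)
    features = pca.fit_transform(train_images)       # project onto eigenfaces
    matcher = KNeighborsClassifier(n_neighbors=1)     # nearest-neighbour matching
    matcher.fit(features, train_ids)
    return pca, matcher

def identify(pca, matcher, probe_images):
    """Return the predicted identity for each flattened probe image."""
    return matcher.predict(pca.transform(probe_images))

# A pipeline like this degrades sharply under pose and illumination changes,
# which is precisely the limitation the thesis targets.
```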
9. Computer extraction of human faces. Low, Boon Kee. January 1999.
Due to recent advances in visual communication and face recognition technologies, automatic face detection has attracted a great deal of research interest. Being a diverse problem, face detection research has drawn contributions from researchers in many fields of science. This thesis examines the fundamentals of the face detection techniques implemented since the early 1970s. Two groups of techniques are identified based on how they apply a priori face knowledge: feature-based and image-based. One of the problems faced by current feature-based techniques is the lack of cost-effective segmentation algorithms that can deal with issues such as background and illumination variations. A novel facial feature segmentation algorithm is therefore proposed in this thesis. The algorithm aims to combine spatial and temporal information using low-cost techniques. To achieve this, an existing motion detection technique is analysed and implemented together with a novel spatial filter, which is itself shown to be robust for segmenting features under varying illumination conditions. Through spatio-temporal information fusion, the algorithm effectively addresses the background and illumination problems in several head-and-shoulder sequences. Comparisons of the algorithm with existing motion-based and spatial techniques establish the efficacy of the combined approach.
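The general idea of fusing a temporal cue (motion between frames) with a spatial cue that tolerates illumination changes can be illustrated in generic form: frame differencing supplies the temporal evidence, a normalised-RGB skin-colour test supplies the spatial evidence, and their intersection yields candidate face regions. The sketch below is only an illustration of that fusion idea under assumed thresholds and colour bounds; it is not the spatial filter or fusion scheme proposed in the thesis.

```python
import cv2
import numpy as np

# Illustrative thresholds; the thesis's actual spatial filter and fusion
# rule are not reproduced here.
MOTION_THRESHOLD = 25          # grey-level difference treated as motion (assumed)
SKIN_R_RANGE = (0.36, 0.46)    # normalised-red bounds for skin (assumed)
SKIN_G_RANGE = (0.28, 0.36)    # normalised-green bounds for skin (assumed)

def motion_mask(prev_grey, curr_grey):
    """Temporal cue: pixels whose grey level changed between frames."""
    diff = cv2.absdiff(prev_grey, curr_grey)
    return diff > MOTION_THRESHOLD

def skin_mask(bgr_frame):
    """Spatial cue: normalised-RGB skin test, less sensitive to brightness."""
    frame = bgr_frame.astype(np.float32) + 1e-6
    total = frame.sum(axis=2)
    r = frame[:, :, 2] / total
    g = frame[:, :, 1] / total
    return ((r > SKIN_R_RANGE[0]) & (r < SKIN_R_RANGE[1]) &
            (g > SKIN_G_RANGE[0]) & (g < SKIN_G_RANGE[1]))

def candidate_face_mask(prev_frame, curr_frame):
    """Fuse temporal and spatial cues: moving, skin-coloured pixels."""
    prev_grey = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_grey = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    fused = motion_mask(prev_grey, curr_grey) & skin_mask(curr_frame)
    # Morphological opening removes isolated noisy pixels from the fused mask.
    return cv2.morphologyEx(fused.astype(np.uint8) * 255,
                            cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```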