171 |
Face Recognition: Study and Comparison of PCA and EBGM Algorithms / Katadound, Sachin 01 January 2004 (has links)
Face recognition is a complex and difficult process due to factors such as variability of illumination, occlusion, and face-specific characteristics like hair, glasses, and beards, problems that affect many computer vision tasks. A system that offers robust and consistent face recognition results would allow applications such as law-enforcement identification, secure system access, and human-computer interaction to be automated successfully. Different methods exist to solve the face recognition problem. Principal component analysis, independent component analysis, and linear discriminant analysis are among the statistical techniques commonly applied to it; genetic algorithms, elastic bunch graph matching, and artificial neural networks are among the other techniques that have been proposed and implemented.
The objective of this thesis is to provide insight into the different methods available for face recognition and to explore those that offer an efficient and feasible solution. Factors affecting face recognition results, and the preprocessing steps that eliminate such abnormalities, are also discussed briefly. Principal Component Analysis (PCA) has been the most efficient and reliable method known for at least the past eight years; elastic bunch graph matching (EBGM) is one of the promising techniques studied in this thesis. We found better results with the EBGM method than with PCA: although EBGM took much longer than PCA to train and to generate distance measures for the given gallery images, it achieved better cumulative match score (CMS) results. We therefore recommend a hybrid technique involving the EBGM algorithm to obtain better results. Other promising techniques that could be explored in separate work include genetic-algorithm-based methods, mixtures of principal components, and Gabor wavelet techniques.
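As a rough illustration of the PCA (eigenfaces) baseline that the thesis compares EBGM against, the sketch below projects flattened gallery images into a principal subspace and matches a probe to its nearest gallery neighbour. The array shapes, component count, and synthetic data are assumptions made for the example, not the thesis's configuration.

```python
import numpy as np

# Minimal eigenfaces-style PCA sketch (an illustration, not the thesis code).
# Training images are flattened to vectors; recognition projects a probe into
# the principal subspace and picks the nearest gallery projection.

def train_pca(gallery, n_components):
    """gallery: (n_images, n_pixels) array of flattened face images."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # SVD of the centered data yields the principal components (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]          # (n_components, n_pixels)
    projections = centered @ components.T   # gallery coordinates in subspace
    return mean, components, projections

def match(probe, mean, components, projections):
    """Return the index of the nearest gallery image in the PCA subspace."""
    coords = (probe - mean) @ components.T
    dists = np.linalg.norm(projections - coords, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 64))         # 10 tiny synthetic "images"
mean, comps, projs = train_pca(gallery, n_components=5)
# A probe that is gallery image 3 plus slight noise should match index 3.
probe = gallery[3] + 0.01 * rng.normal(size=64)
print(match(probe, mean, comps, projs))
```

A rank-1 cumulative match score would then be the fraction of probes whose correct gallery index is returned by `match`.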
|
172 |
Långsammare igenkänning av emotioner i ansiktsuttryck hos individer med utmattningssyndrom : En pilotstudie [Slower recognition of emotions in facial expressions in individuals with exhaustion disorder: a pilot study] / Löfdahl, Tomas, Wretman, Mattias January 2012 (has links)
The aim of this pilot study was to generate hypotheses about whether and how exhaustion disorder (utmattningssyndrom) affects the ability to recognize emotions in facial expressions. A group of patients with exhaustion disorder was compared with a matched healthy control group (N = 14). The groups were examined with a computer-based test consisting of color photographs of authentic facial expressions that changed gradually, in steps of 10%, from a neutral expression into one of the five basic emotions: anger, disgust, fear, happiness, and sadness. Performance was measured in terms of recognition accuracy and response speed. The results showed that the patient group responded significantly more slowly than the control group across all emotions in the test. No emotion-specific differences, and no differences in recognition accuracy, were found between the groups. The causes of the discrepancy in response speed were discussed in terms of four possible explanatory domains: face-perceptual function, visual attention, self-focused attention, and conscientiousness/worry. Recommendations were made for future research to explore these areas further.
|
173 |
Human-IntoFace.net : May 6th, 2003 /Bennett, Troy. January 2003 (has links)
Thesis (M.F.A.)--Rochester Institute of Technology, 2003. / Typescript. Includes bibliographical references (leaves 21-23).
|
174 |
Facial expression analysis with graphical models / Shang, Lifeng., 尚利峰. January 2012 (has links)
Facial expression recognition has become an active research topic in recent years due to its applications in human-computer interfaces and data-driven animation. In this thesis, we focus on the problem of how to effectively use domain, temporal and categorical information of facial expressions to help computers understand human emotions. Over the past decades, many techniques (such as neural networks, Gaussian processes, support vector machines, etc.) have been applied to facial expression analysis. Recently graphical models have emerged as a general framework for applying probabilistic models. They provide a natural framework for describing the generative process of facial expressions. However, these models often suffer from too many latent variables or too complex model structures, which makes learning and inference difficult. In this thesis, we analyze the deformation of facial expressions by introducing some recently developed graphical models (e.g. the latent topic model) and by improving the recognition ability of some already widely used models (e.g. the HMM).
We develop three different graphical models with different representational assumptions: categories represented by prototypes, by sets of exemplars, and by topics in between. Our first model incorporates exemplar-based representation into graphical models. To further improve the computational efficiency of the proposed model, we build it in a local linear subspace constructed by principal component analysis. The second model extends the recently developed topic model by introducing temporal and categorical information into the Latent Dirichlet Allocation model. In our discriminative temporal topic model (DTTM), temporal information is integrated by placing an asymmetric Dirichlet prior over document-topic distributions, and discriminative ability is improved by a supervised term-weighting scheme. We describe the resulting DTTM in detail and show how it can be applied to facial expression recognition. Our third model is a nonparametric discriminative variation of the HMM. An HMM can be viewed as a prototype model, with the transition parameters acting as the prototype for one category. To increase the discrimination ability of the HMM at both the class level and the state level, we introduce linear interpolation with maximum entropy (LIME) and membership coefficients to the HMM. Furthermore, we present a general formula for output probability estimation, which provides a way to develop new HMMs. Experimental results show that the performance of some existing HMMs can be improved by integrating the proposed nonparametric kernel method and parameter-adaptation formula.
In conclusion, this thesis develops three different graphical models by (i) combining exemplar-based models with graphical models, (ii) introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) topic model, and (iii) increasing the discrimination ability of the HMM at both the hidden state level and the class level. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
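The effect of an asymmetric Dirichlet prior over document-topic distributions, as used in the DTTM, can be illustrated with a minimal sketch. This is illustrative only: the topic count and concentration values are invented, and the thesis's actual estimator is not reproduced; the point is simply that favouring one topic in the prior shifts the expected topic mass toward it.

```python
import numpy as np

# Illustrative sketch (not the thesis implementation): an asymmetric Dirichlet
# prior concentrates expected topic mass on favoured topics, which is one way
# categorical/temporal structure can be injected into a topic model's prior.

rng = np.random.default_rng(0)
symmetric_alpha = np.ones(4)                        # no topic favoured a priori
asymmetric_alpha = np.array([4.0, 1.0, 1.0, 1.0])   # topic 0 favoured

# Empirical mean of document-topic distributions drawn from each prior.
sym = rng.dirichlet(symmetric_alpha, size=5000).mean(axis=0)
asym = rng.dirichlet(asymmetric_alpha, size=5000).mean(axis=0)

print(sym.round(2))    # roughly uniform over the 4 topics (~0.25 each)
print(asym.round(2))   # mass shifted toward topic 0 (expected 4/7 ~ 0.57)
```

Each draw is a valid document-topic distribution (it sums to one); only the prior's expectation differs between the two cases.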
|
175 |
Infants' neural processing of facial attractiveness / Jankowitsch, Jessica Michelle 16 February 2015 (has links)
The relationship between infants’ neural processing of and visual preferences for attractive and unattractive faces was investigated through the integration of event-related potential and preferential-looking methods. Six-month-olds viewed color images of female faces previously rated by adults for attractiveness. The faces were presented in contrasting pairs of attractiveness (attractive/unattractive) for 1.5-second durations. The results showed that compared to attractive faces, unattractive faces elicited larger N290 amplitudes at a left-hemisphere electrode site (PO9) and smaller P400 amplitudes at electrode sites across both hemispheres (PO9 and PO10). There were no significant differences in infants’ overall looking times based on attractiveness; however, a significant relationship was found between amplitude and trial looking time: larger N290 amplitudes were associated with longer trial looking times. The results suggest that compared to attractive faces, unattractive faces require greater cognitive resources and longer initial attention for visual processing. / text
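Component amplitudes such as the N290 and P400 are conventionally measured as the mean voltage within a latency window of the averaged ERP. The following is a generic sketch of that measure; the window bounds and the synthetic waveform are chosen for illustration and are not taken from the study.

```python
import numpy as np

# Generic sketch of ERP component amplitude measurement: mean voltage within
# a latency window. Window bounds here are illustrative, not the study's.

def mean_amplitude(erp, times, window):
    """erp: voltage samples (uV); times: timestamps (ms); window: (start, end) ms."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

times = np.arange(-100, 600)                  # 1 ms resolution epoch
erp = np.zeros_like(times, dtype=float)
erp[(times >= 250) & (times <= 330)] = -5.0   # a negative deflection ("N290")

print(mean_amplitude(erp, times, (250, 330)))  # -5.0
```

Comparing such window means across conditions (attractive vs. unattractive) is the kind of contrast the reported amplitude differences rest on.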
|
176 |
Effect of the Muslim headscarf on face perception : a series of psychological experiments looking at how the Muslim headscarf influences the perception of (South Asian) faces / Toseeb, Mohammed Umar January 2012 (has links)
The Muslim headscarf conceals the hair and other external features of a face, and for this reason it may have implications for the recognition of such faces. The experiments reported in this thesis aimed to investigate anecdotal reports suggesting that headscarf-wearing females are more difficult to recognise. This was done through a series of experiments involving a yes/no recognition task. The stimuli were images of South Asian females photographed wearing a Muslim headscarf (HS), with their own hair visible (H), and in a third set with their external features cropped (CR). Most importantly, participants took part either in the condition in which the state of the external features remained the same between the learning and test stages (Same) or in the condition in which they were switched between the two stages (Switch). In one experiment participants completed a Social Contact Questionnaire. Surprisingly, in the Same condition, there was no difference in the recognition rates of faces presented with hair, with a headscarf, or with cropped external features. However, participants in the Switch condition performed significantly worse than those in the Same condition. It was also found that there was no difference in the percentage of fixations to the external features between the Same and Switch conditions, which implied that the drop in performance between the two conditions was not mediated by eye movements. These results suggest that the internal and external features of a face are processed interactively and that, although the external features were not fixated on, a manipulation to them caused a drop in performance. This was confirmed in a separate experiment in which participants were unable to ignore the external features when asked to judge the similarity of the internal features of pairs of faces: pairs of headscarf faces were rated as more similar than pairs of faces with hair.
Finally, for one group of participants it was found that contact with headscarf-wearing females was positively correlated with the recognition of headscarf-wearing faces. It was concluded that the headscarf per se did not impair face recognition and that there is enough information in the internal features of a face for optimal recognition, however, performance was disrupted when the presence or absence of the headscarf was manipulated.
|
177 |
A Cognitive Neuroscience of Social Groups / Contreras, Juan Manuel 30 September 2013 (has links)
We used functional magnetic resonance imaging to investigate how the human brain processes information about social groups in three domains. Study 1: Semantic knowledge. Participants were scanned while they answered questions about their knowledge of both social categories and non-social categories like object groups and species of nonhuman animals. Brain regions previously identified in processing semantic information are more robustly engaged by nonsocial semantics than stereotypes. In contrast, stereotypes elicit greater activity in brain regions implicated in social cognition. These results suggest that stereotypes should be considered distinct from other forms of semantic knowledge. Study 2: Theory of mind. Participants were scanned while they answered questions about the mental states and physical attributes of individual people and groups. Regions previously associated with mentalizing about individuals were also robustly responsive to judgments of groups. However, multivariate searchlight analysis revealed that several of these regions showed distinct multivoxel patterns of response to groups and individual people. These findings suggest that perceivers mentalize about groups in a manner qualitatively similar to mentalizing about individual people, but that the brain nevertheless maintains important distinctions between the representations of such entities. Study 3: Social categorization. Participants were scanned while they categorized the sex and race of unfamiliar Black men, Black women, White men, and White women. Multivariate pattern analysis revealed that multivoxel patterns in the FFA (but not other face-selective brain regions, other category-selective brain regions, or early visual cortex) differentiated faces by sex and race. Specifically, patterns of voxel-based responses were more similar between individuals of the same sex than between men and women, and between individuals of the same race than between Black and White individuals.
These results suggest that FFA represents the sex and race of faces. Together, these three studies contribute to a growing cognitive neuroscience of social groups. / Psychology
|
178 |
Predictive eyes precede retrieval : visual recognition as hypothesis testing / Holm, Linus January 2007 (has links)
Does visual recognition entail verifying an idea about what is perceived? This question was addressed in the three studies of this thesis. The main hypothesis underlying the investigation was that visual recognition is an active process involving hypothesis testing. Recognition of faces (Study 1), scenes (Study 2) and objects (Study 3) was investigated using eye movement registration as a window on the recognition process. In Study 1, a functional relationship between eye movements and face recognition was established: restricting the eye movements reduced recognition performance. In addition, perceptual reinstatement, as indicated by eye movement consistency across study and test, was related to recollective experience at test. Specifically, explicit recollection was related to higher eye movement consistency than familiarity-based recognition and false rejections (Studies 1-2). Furthermore, valid expectations about a forthcoming stimulus scene produced eye movements that were more similar to those of an earlier study episode than invalid expectations did (Study 2). In Study 3, participants recognized fragmented objects embedded in nonsense fragments. Around 8 seconds prior to explicit recognition, participants began to fixate the object region rather than a similar control region in the stimulus pictures. Before participants indicated awareness of the object, they fixated it with an average of 9 consecutive fixations. Hence, participants were looking at the object as if they had recognized it before they became aware of its identity. Furthermore, prior object information affected eye movement sampling of the stimulus, suggesting that semantic memory was involved in guiding the eyes during object recognition even before participants were aware of the object's presence.
Collectively, the studies support the view that gaze control is instrumental to visual recognition performance and that visual recognition is an interactive process between memory representation and information sampling.
|
179 |
An investigation of young infants’ ability to match phonetic and gender information in dynamic faces and voice / Patterson, Michelle Louise 11 1900 (has links)
This dissertation explores the nature and ontogeny of infants' ability to match phonetic information, in comparison to non-speech information, in the face and voice. Previous research shows that infants' ability to match phonetic information in face and voice is robust at 4.5 months of age (e.g., Kuhl & Meltzoff, 1982; 1984; 1988; Patterson & Werker, 1999). These findings support claims that young infants can perceive structural correspondences between audio and visual aspects of phonetic input and that speech is represented amodally. It remains unclear, however, specifically what factors allow speech to be perceived amodally and whether the intermodal perception of other aspects of face and voice is like that of speech. Gender is another biologically significant cue that is available in both the face and voice. In this dissertation, nine experiments examine infants' ability to match phonetic and gender information with dynamic faces and voices.
Infants were seated in front of two side-by-side video monitors which displayed filmed images of a female or male face, each articulating a vowel sound (/a/ or /i/) in synchrony. The sound was played through a central speaker and corresponded with one of the displays but was synchronous with both. In Experiment 1, 4.5-month-old infants did not look preferentially at the face that matched the gender of the heard voice when presented with the same stimuli that had produced a robust phonetic matching effect. In Experiments 2 through 4, vowel and gender information were placed in conflict to determine the relative contribution of each to infants' ability to match bimodal information in the face and voice. The age at which infants do match gender information with my stimuli was determined in Experiments 5 and 6. To explore whether matching phonetic information in face and voice is based on featural or configural information, two experiments examined infants' ability to match phonetic information using inverted faces (Experiment 7) and upright faces with inverted mouths (Experiment 8). Finally, Experiment 9 extended the phonetic matching effect to 2-month-old infants.
The experiments in this dissertation provide evidence that, at 4.5 months of age, infants are more likely to attend to phonetic information in the face and voice than to gender information. Phonetic information may have a special salience and/or unity that is not apparent in similar but non-phonetic events. The findings are discussed in relation to key theories of perceptual development.
|
180 |
The Effect of Training on Haptic Classification of Facial Expressions of Emotion in 2D Displays by Sighted and Blind Observers / ABRAMOWICZ, ANETA 23 October 2009 (has links)
Abstract
The current study evaluated the effects of training on the haptic classification of culturally universal facial expressions of emotion as depicted in simple 2D raised-line drawings. Blindfolded sighted participants (N = 60) and blind participants (N = 4) took part in Experiments 1 and 3, respectively. A small vision control study (N = 12) was also conducted (Experiment 2) to compare haptic versus visual learning patterns. A hybrid learning paradigm consisting of pre/post- and old/new-training procedures was used to address the nature of the underlying learning process in terms of token-specific learning and/or generalization. During the Pre-Training phase, participants were tested on their ability to classify facial expressions of emotion using the set with which they would subsequently be trained. During the Post-Training phase, they were tested with the training set (Old) intermixed with a completely novel set (New). For sighted observers, visual classification was more accurate than haptic classification; in addition, two of the three adventitiously blind individuals tended to be at least as accurate as the sighted haptic group. All three groups showed similar learning patterns across the learning stages of the experiment: accuracy improved substantially with training; however, while classification accuracy for the Old set remained high during the Post-Training test stage, learning effects for novel (New) drawings were reduced, if present at all. These results imply that learning by the sighted was largely token-specific for both haptic and visual classification. Additional results from a limited number of blind subjects tentatively suggest that the accuracy with which facial expressions of emotion are classified is not impaired when visual loss occurs later in life. / Thesis (Master, Neuroscience Studies) -- Queen's University, 2009-10-23 12:04:41.133
|