141 |
Techniques for Facial Expression Recognition Using the Kinect. Aly, Sherin Fathy Mohammed Gaber, 02 November 2016.
Facial expressions convey non-verbal cues. Humans use facial expressions to show emotions, which play an important role in interpersonal relations and are useful in many applications, including psychology, human-computer interaction, health care, and e-commerce. Although humans recognize facial expressions in a scene with little or no effort, reliable expression recognition by machines is still a challenging problem.
Automatic facial expression recognition (FER) involves several related problems: face detection, face representation, extraction of facial expression information, and classification of expressions, particularly under conditions of input-data variability such as illumination and pose variation. A system that performs these operations accurately and in real time would be a major step forward in achieving human-like interaction between man and machine.
This document introduces novel approaches for the automatic recognition of the basic facial expressions, namely happiness, surprise, sadness, fear, disgust, anger, and neutral, using a relatively low-resolution, noisy sensor such as the Microsoft Kinect. Such sensors are capable of fast data collection, but the low-resolution, noisy data present unique challenges when identifying subtle changes in appearance. This dissertation presents the work done to address these challenges and the corresponding results. The lack of Kinect-based FER datasets motivated this work to build two Kinect-based RGBD+time FER datasets that include facial expressions of adults and children. To the best of our knowledge, they are the first FER-oriented datasets that include children. The availability of data from children is important for research focused on children (e.g., psychology studies on facial expressions of children with autism), and it also allows researchers to conduct deeper studies on automatic FER by analyzing possible differences between data from adults and children.
The key contributions of this dissertation are both empirical and theoretical. The empirical contributions include the design and successful testing of three FER systems that outperform existing FER systems either when tested on public datasets or in real time. One proposed approach automatically tunes itself to the given 3D data by identifying the distance metric that maximizes the system's accuracy. Compared to traditional approaches, in which a fixed distance metric is employed for all classes, the presented adaptive approach achieved better recognition accuracy, especially for non-frontal poses. Another proposed system combines high-dimensional feature vectors extracted from 2D and 3D modalities via a novel fusion technique. This system achieved 80% accuracy, outperforming the state of the art on the public VT-KFER dataset by more than 13%. The third proposed system was designed and successfully tested to recognize the six basic expressions plus neutral in real time using only 3D data captured by the Kinect. When tested on a public FER dataset, it achieved 67% in multi-class mode (7% higher than other 3D-based FER systems) and 89% in binary mode (9% higher than the state of the art). When tested in real time on 20 children, it achieved over 73% on a reduced set of expressions. To the best of our knowledge, this is the first system known to have been tested in real time on a relatively large dataset of children. The theoretical contributions include 1) the development of a novel feature selection approach that ranks features based on their class separability, and 2) the development of the Dual Kernel Discriminant Analysis (DKDA) feature fusion algorithm. This latter approach addresses the problem of fusing high-dimensional, noisy data with highly nonlinear distributions. / PHD / One of the most expressive ways humans display emotions is through facial expressions. The recognition of facial expressions is considered one of the primary tools used to understand the feelings and intentions of others. Humans detect and interpret faces and facial expressions in a scene with little or no effort, in a way that has been argued to be universal. However, developing an automated system that accurately accomplishes facial expression recognition is far more challenging and remains an open problem. It is not difficult to understand why: human faces are capable of expressing a wide array of emotions, and recognizing even a small set of expressions, say happiness, surprise, anger, disgust, fear, and sadness, is difficult because of the wide variation of the same expression across different people. In working toward automatic Facial Expression Recognition (FER), psychologists and engineers alike have tried to analyze and characterize facial expressions in an attempt to understand and categorize them. Several researchers have considered the development of systems that perform FER automatically using 2D images or videos. However, these systems inherently impose constraints on illumination, image resolution, and head orientation. Some of these constraints can be relaxed through the use of three-dimensional (3D) sensing systems. Among existing 3D sensing systems, the Microsoft Kinect is notable because of its low cost; it is also a relatively fast sensor that has proven effective in real-time applications.
However, the Kinect imposes significant limitations on building effective FER systems, mainly because of its relatively low resolution compared to other 3D sensing techniques and the noisy data it produces. Therefore, very few researchers have considered the Kinect for the purpose of FER. This dissertation considers new, comprehensive systems for automatic facial expression recognition that can accommodate the low-resolution data from the Kinect sensor. Moreover, through collaboration with psychology researchers, we built the first facial expression recognition dataset that includes spontaneous and acted facial expressions recorded from 32 subjects, including children. With the availability of data from children, deeper studies focused on children can be conducted (e.g., psychology studies on facial expressions of children with autism).
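To illustrate the kind of class-separability feature ranking mentioned among the theoretical contributions above, the following sketch ranks features by a simple Fisher score. It is a generic illustration only, not the dissertation's feature-selection or DKDA algorithm; the function names, data shapes, and the NumPy-based implementation are assumptions.

```python
# Sketch of ranking features by class separability (Fisher score).
# This illustrates the general idea only; it is not the dissertation's
# feature-selection or DKDA algorithm, and all names here are hypothetical.
import numpy as np

def fisher_scores(X, y):
    """Score each feature by between-class vs. within-class variance.

    X: (n_samples, n_features) feature matrix (e.g., 3D facial features)
    y: (n_samples,) integer expression labels
    Returns one separability score per feature (higher = more separable).
    """
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += Xc.shape[0] * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

# Example: keep the 50 most class-separable features.
# X, y = load_expression_features()          # hypothetical loader
# ranking = np.argsort(fisher_scores(X, y))[::-1]
# X_selected = X[:, ranking[:50]]
```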
|
142 |
Automatic 3D facial modelling with deformable models. Xiang, Guofu, January 2012.
Facial modelling and animation has been an active research subject in computer graphics since the 1970s. Owing to the extremely complex biomechanical structure of human faces and people's visual familiarity with them, modelling and animating realistic human faces remains one of the greatest challenges in computer graphics. Because we are so familiar with human faces and very sensitive to unnatural subtle changes in them, creating a convincing facial model and animation usually requires a tremendous amount of artistry and manual work. There is a clear need to develop automatic techniques for facial modelling in order to reduce this manual labour. To obtain a realistic facial model of an individual, it is now common to use 3D scanners to capture range scans of the individual and then fit a template to the range scans. However, most existing template-fitting methods require manually selected landmarks to warp the template to the range scans, and selecting landmarks by hand over a large set of range scans is tedious. Another way to reduce repeated work is synthesis by reusing existing data. One example is expression cloning, which copies facial expressions from one face to another instead of creating them from scratch. The aim of this study is to develop a fully automatic framework for template-based facial modelling, facial expression transfer, and facial expression tracking from range scans. In this thesis, the author developed an extension of the iterative closest point (ICP) algorithm, which is able to match a template with range scans at different scales, and a deformable model, which can be used to recover the shapes of range scans and to establish correspondences between facial models. With the registration method and the deformable model, the author proposed a fully automatic approach to reconstructing facial models and textures from range scans without requiring any manual intervention. In order to reuse existing data for facial modelling, the author formulated and solved the problem of facial expression transfer in the framework of discrete differential geometry. The author also applied his methods to face tracking for 4D range scans. The results demonstrated the robustness of the registration method and the capabilities of the deformable model. A number of possible directions for future work were pointed out.
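The template-to-scan registration described above builds on the iterative closest point (ICP) algorithm. The sketch below shows one generic ICP iteration with a scale term (closest-point correspondences followed by a similarity-transform estimate); it is not the author's multi-scale extension, and the helper names, data shapes, and SciPy/NumPy usage are assumptions.

```python
# Generic sketch of one ICP iteration with a scale term, illustrating the
# kind of template-to-scan registration discussed above. Not the thesis's
# multi-scale ICP extension; names and library choices are assumed.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(template, scan):
    """One iteration: find the closest scan point for each template vertex,
    then estimate the similarity transform (scale s, rotation R, translation t)
    that best aligns the template to those correspondences (Umeyama's method).
    template, scan: (n, 3) and (m, 3) arrays of vertex coordinates."""
    _, idx = cKDTree(scan).query(template)   # closest-point correspondences
    target = scan[idx]

    mu_p, mu_q = template.mean(0), target.mean(0)
    P, Q = template - mu_p, target - mu_q
    U, S, Vt = np.linalg.svd(Q.T @ P / len(P))   # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))     # avoid reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (P ** 2).sum() * len(P)
    t = mu_q - s * (R @ mu_p)
    return s, R, t

# Repeated application (with outlier rejection, coarse-to-fine scales, etc.)
# would register the template mesh vertices to the range scan:
# for _ in range(30):
#     s, R, t = icp_step(template, scan)
#     template = s * (template @ R.T) + t
```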
|
143 |
Physics based facial modeling and animation. January 2002.
by Leung Hoi-Chau. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 70-71). Abstracts in English and Chinese.
Contents:
Chapter 1. Introduction (p.1)
Chapter 2. Previous Works (p.2)
  2.1. Facial animations and facial surgery simulations
  2.2. Facial Action Coding System (FACS)
  2.3. The Boundary Element Method (BEM) in Computer Graphics
Chapter 3. The Facial Expression System (p.7)
  3.1. Input to the system
    3.1.1. Orientation requirements for the input mesh
    3.1.2. Topology requirements for the input mesh
    3.1.3. Type of the polygons of the facial mesh
  3.2. Facial Modeling and Feature Recognition
  3.3. User Control
  3.4. Output of the system
Chapter 4. Boundary Element Method (BEM) (p.12)
  4.1. Numerical integration of the kernels
    4.1.1. P and Q are different
    4.1.2. P and Q are identical
      4.1.2.1. Evaluation of the Singular Traction Kernel
      4.1.2.2. Evaluation of the Singular Displacement Kernel
  4.2. Assemble the stiffness matrix
Chapter 5. Facial Modeling (p.18)
  5.1. Offset of facial mesh
  5.2. Thickening of Face Contour
Chapter 6. Facial Feature Recognition (p.22)
  6.1. Extract all contour edges from the facial mesh
  6.2. Separate different holes from the contour edges
  6.3. Locating the bounding boxes of different holes
  6.4. Determine the facial features
    6.4.1. Eye positions
    6.4.2. Mouth position and Face
    6.4.3. Nose position
    6.4.4. Skull position
Chapter 7. Boundary Conditions in the system (p.28)
  7.1. Facial Muscles
  7.2. Skull Bone
  7.3. Facial Muscle recognition
    7.3.1. Locating muscle-definers
    7.3.2. Locating muscles
  7.4. Skull Bone Recognition
  7.5. Refine the bounding regions of the facial features
  7.6. Add/Remove facial muscles
Chapter 8. Muscles Movement (p.40)
  8.1. Muscle contraction
  8.2. Muscle relaxation
  8.3. The Muscle sliders
Chapter 9. Pre-computation (p.44)
  9.1. Changing the Boundary Values
Chapter 10. Implementation (p.46)
  10.1. Data Structure for the facial mesh
  10.2. Implementation of the BEM engine
  10.3. Facial modeling and the facial recognition
Chapter 11. Results (p.48)
  11.1. Example 1 (low polygon man face)
  11.2. Example 2 (girl face)
  11.3. Example 3 (man face)
  11.4. System evaluation
Chapter 12. Conclusions (p.67)
References (p.70)
|
144 |
Is this the face of sadness? Facial expression recognition and context. Diminich, Erica, January 2015.
A long-standing debate in psychological science is whether the face signals specific emotions. Basic emotion theory presupposes that there are coordinated facial musculature movements that individuals can identify as relating to a core set of basic emotions. In opposition to this view, constructionist theory contends that the perception of emotion is a far more intricate process involving semantic knowledge and arousal states. The aim of the current investigation was to explore some of the questions at the crux of this debate. We showed participants video clips of real people in real time, where the face was in motion, much as in everyday life. In Study 1 we directly manipulated the effects of context to determine what influences emotion perception: situational information or the face? In support of the basic emotion view, participants identified displays of happiness, anger, and sadness irrespective of the contextual information provided. Importantly, participants also rated one set of facial movements as more intensely expressing a 'sad' face. Study 1 also demonstrated unique context effects in partial support of the constructionist view, suggesting that for some facial expressions the role of context may be important. In Study 2, we explored the possible effects that language has on the perception of emotion. In the absence of linguistic cues, participants used significantly more 'happy' and 'sad' words to label the basic emotion prototype for happiness and the 'sad' face introduced in Study 1. Overall, findings from these studies suggest that although contextual cues may be important in specific scenarios, the face is dominant for the layperson when inferring the emotional state of another.
|
145 |
The effect of facial expression and identity information on the processing of own and other race faces. Hirose, Yoriko, January 2006.
The central aim of the current thesis was to examine how facial expression and racial identity information affect face processing involving different races. This was addressed by studying several types of face processing tasks, including face recognition, emotion perception/recognition, face perception, and attention to faces. In particular, the effect of facial expression on the differential processing of own- and other-race faces (the so-called own-race bias) was examined from two perspectives: perceptual expertise favouring the processing of own-race faces, and in-group bias influencing face processing along a self-enhancing dimension. Results from the face recognition study indicated a possible similarity between familiar/unfamiliar and own-race/other-race face processing. Studies on facial expression perception and memory showed no indication of in-group bias in face perception and memory, although a common finding throughout was that different-race faces were often associated with different types of facial expressions. The most consistent finding across all studies was that the effect of the own-race bias was more evident amongst European participants. Finally, results from the face attention study showed no signs of preferential visual attention to own-race faces. The current research provides further evidence for the growing body of knowledge regarding the effects of the own-race bias. Based on this knowledge, it is suggested that future studies aiming at a better understanding of the mechanisms underlying the own-race bias would help advance this interesting and ever-evolving area of research.
|
146 |
The Affective PDF Reader. Radits, Markus, January 2010.
The Affective PDF Reader is a PDF reader combined with affect recognition systems. The aim of the project is to research a way to provide the reader of a PDF with real-time visual feedback while reading the text, in order to influence the reading experience in a positive way. The visual feedback is given in accordance with the analyzed emotional state of the person reading the text; this is done by capturing and interpreting affective information with a facial expression recognition system. Further enhancements would also include analysis of voice in the computation, as well as gaze-tracking software so that the point of gaze can be used when rendering the visualizations. The idea of the Affective PDF Reader arose mainly from admitting that the way we read text on computers, mostly with frozen and dozed-off faces, is somehow an unsatisfactory state, or moreover a lonesome process and a poor form of communication. This work is also inspired by the significant progress and efforts in recognizing emotional states from video and audio signals and the new possibilities that arise from them. The prototype system provided visualizations of footprints in different shapes and colours, controlled by captured facial expressions, to enrich the textual content with affective information. The experience showed that visual feedback controlled by facial expressions can bring another dimension to the reading experience if the feedback is given in a frugal and non-intrusive way, and that it can enhance the involvement of users.
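As a purely hypothetical illustration of the kind of mapping the prototype describes (recognized expressions driving frugal, non-intrusive visual feedback), the sketch below maps an expression label and intensity to the colour, size, and opacity of a footprint overlay. Nothing here is taken from the project's implementation; the recognizer output format, palette, and parameter values are all assumptions.

```python
# Hypothetical sketch of driving a visual overlay from recognized expressions,
# in the spirit of the prototype described above. Not the project's code;
# the recognizer interface, colours, and parameters are all assumptions.
from dataclasses import dataclass

@dataclass
class Footprint:
    color: tuple      # RGB components in [0, 1]
    scale: float      # relative footprint size
    opacity: float    # kept low so the feedback stays non-intrusive

# Assumed palette: one colour per recognized affective state.
PALETTE = {
    "happy":    (1.0, 0.8, 0.2),
    "surprise": (0.2, 0.6, 1.0),
    "neutral":  (0.6, 0.6, 0.6),
    "sad":      (0.3, 0.3, 0.7),
}

def footprint_for(expression: str, intensity: float) -> Footprint:
    """Map a recognized expression and its intensity (0-1) to overlay style."""
    color = PALETTE.get(expression, PALETTE["neutral"])
    return Footprint(color=color,
                     scale=0.5 + 0.5 * intensity,
                     opacity=0.15 + 0.25 * intensity)

# e.g. footprint_for("happy", 0.8) -> a bright, slightly larger, still subtle mark
```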
|
147 |
A quadratic deformation model for representing facial expressions. Obaid, Mohammad Hisham Rashid, January 2011.
Techniques for facial expression generation are employed in several applications in computer graphics as well as in the processing of image and video sequences containing faces. Video coding standards such as MPEG-4 support facial expression animation. Existing facial expression representations are often application-dependent or tied to a particular facial animation standard, and most of them require considerable computational effort. We have developed a completely novel and effective method for representing the primary facial expressions using a model-independent set of deformation parameters (derived using rubber-sheet transformations), which can be easily applied to transform facial feature points. The developed mathematical model captures the necessary nonlinear characteristics of deformations of facial muscle regions, producing well-recognizable expressions on images, sketches, and three-dimensional models of faces. To show the effectiveness of the method, we developed a variety of novel applications such as facial expression recognition, expression mapping, facial animation, and caricature generation.
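The quadratic (rubber-sheet) deformation idea can be illustrated by pushing 2D facial feature points through a second-order polynomial mapping. The sketch below is a generic example of such a transform; the coefficient layout and the sample values are assumptions, not the parameterization developed in the thesis.

```python
# Generic sketch of a quadratic (rubber-sheet style) 2D deformation applied
# to facial feature points. The coefficient layout is assumed for illustration
# and is not the specific parameterization developed in the thesis.
import numpy as np

def quadratic_warp(points, a, b):
    """Map each 2D point (x, y) through a second-order polynomial:

        x' = a0 + a1*x + a2*y + a3*x*y + a4*x**2 + a5*y**2
        y' = b0 + b1*x + b2*y + b3*x*y + b4*x**2 + b5*y**2

    points: (n, 2) array of facial feature coordinates
    a, b:   length-6 coefficient vectors for x' and y'
    """
    x, y = points[:, 0], points[:, 1]
    basis = np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=1)
    return np.stack([basis @ np.asarray(a), basis @ np.asarray(b)], axis=1)

# Example: identity in x plus a slight upward bend of the mouth corners
# (hypothetical coefficients, chosen only to show the calling convention).
mouth = np.array([[-1.0, 0.0], [0.0, -0.1], [1.0, 0.0]])
a = [0, 1, 0, 0, 0, 0]        # x' = x
b = [0, 0, 1, 0, 0.05, 0]     # y' = y + 0.05*x**2  (corners lift slightly)
print(quadratic_warp(mouth, a, b))
```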
|
148 |
Computational analysis of facial expressions. Shenoy, A., January 2010.
This PhD work constitutes a series of interdisciplinary studies that use biologically plausible computational techniques and experiments with human subjects to analyze facial expressions. The performance of the computational models and of the human subjects is analyzed in terms of accuracy and response time. The computational models process images in three stages: preprocessing, dimensionality reduction, and classification. The preprocessing of facial expression images includes feature extraction and dimensionality reduction. Gabor filters are used for feature extraction, as they are among the most biologically plausible computational methods. Various dimensionality reduction methods are used: Principal Component Analysis (PCA), Curvilinear Component Analysis (CCA), and Fisher Linear Discriminant (FLD), followed by classification with Support Vector Machines (SVM) and Linear Discriminant Analysis (LDA). The six basic prototypical facial expressions that are universally accepted are used for the analysis: angry, happy, fear, sad, surprise, and disgust. The performance of the computational models in classifying each expression category is compared with that of the human subjects. The effect size and the encoding face enable discrimination of the areas of the face specific to a particular expression; the effect size in particular emphasizes the areas of the face that are involved in the production of an expression. This concept of using effect size on faces has not been reported previously in the literature and has shown very interesting results. The detailed PCA analysis identified the PCA components significant for each of the six basic prototypical expressions. An important observation from this analysis was that with Gabor filtering followed by nonlinear CCA for dimensionality reduction, the dataset vector size could be reduced to a very small number of components, in most cases just five. The hypothesis that the average response time (RT) of the human subjects in classifying the different expressions is analogous to the distance of the data points from the classification hyperplane was verified: the harder a facial expression is for human subjects to classify, the closer it is to the classifier's separating hyperplane. A bivariate correlation analysis of the distance measure and the average RT showed a significant anti-correlation. Signal detection theory (SDT), through the d-prime measure, determined how well the model or the human subjects distinguished an expressive face from a neutral one. By comparison, human subjects are better at classifying the surprise, disgust, fear, and sad expressions, while the RAW computational model is better able to distinguish the angry and happy expressions. To summarize, there seem to be some similarities between the computational models and human subjects in the classification process.
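The three processing stages described above (Gabor feature extraction, dimensionality reduction, classification) can be sketched with off-the-shelf components. The example below uses scikit-image and scikit-learn as stand-ins; the filter-bank parameters, the use of PCA in place of the thesis's CCA/FLD variants, and the dataset-loading helpers are assumptions rather than the actual configuration.

```python
# Sketch of a Gabor -> PCA -> SVM expression-classification pipeline of the
# kind described above. Filter-bank parameters and the use of scikit-image /
# scikit-learn are assumptions, not the thesis's actual configuration.
import numpy as np
from skimage.filters import gabor_kernel
from scipy.ndimage import convolve
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.25), n_orientations=4):
    """Mean and variance of Gabor filter responses over a small filter bank."""
    feats = []
    for theta in np.arange(n_orientations) / n_orientations * np.pi:
        for freq in frequencies:
            kernel = np.real(gabor_kernel(freq, theta=theta))
            response = convolve(image.astype(float), kernel, mode="wrap")
            feats.extend([response.mean(), response.var()])
    return np.array(feats)

# faces: (n_samples, H, W) grayscale face crops; labels: expression classes.
# faces, labels = load_face_dataset()            # hypothetical loader
# X = np.array([gabor_features(f) for f in faces])
# # The abstract reports that very few components (often ~5) sufficed after
# # nonlinear CCA; PCA is used here only as a simpler stand-in.
# clf = make_pipeline(PCA(n_components=5), SVC(kernel="linear"))
# clf.fit(X, labels)
```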
|
149 |
Age-related changes in decoding basic social cues from the eyes. Slessor, Gillian, January 2009.
This thesis explores age differences in the ability to decode basic social cues from the face and, in particular, the eye region. Age-related declines in complex aspects of social perception, such as forced-choice labelling of emotional expressions and theory of mind reasoning, are well documented. However, research to date has not assessed age differences in more basic aspects of social perception, such as eye-gaze detection, joint attention, or more implicit responses to emotional cues. The first two experimental chapters of this thesis report a series of studies investigating age-related changes in gaze processing. Both the ability to detect subtle differences in gaze direction and the ability to subsequently follow the gaze cues given by others were found to decline with age. Age-related changes were also found in the integration of gaze direction with emotional (angry, joyful, and disgusted) facial expressions when making emotion perception and approachability judgements (Chapters 4 and 5). Age differences in responses to happy facial expressions are further investigated in Chapter 6 by assessing sensitivity in discriminating between enjoyment and non-enjoyment smiles. Findings indicated that older adults demonstrated a greater bias towards thinking that any smiling individual was feeling happy. They were also more likely than younger participants to choose to approach an individual displaying a non-enjoyment smile. The final experimental chapter explores whether the age of the face influences age-related changes in gaze following. Age-related declines in gaze following were greatest when following the gaze cues of younger (vs. older) adults, highlighting the importance of closely matching the age of stimulus and participant when investigating age differences in social perception. Perceptual, neuropsychological, and motivational explanations for these results are evaluated, and implications of these research findings for older adults' social functioning are discussed.
|
150 |
The effects of eye gaze and emotional facial expression on the allocation of visual attention. Cooper, Robbie Mathew, January 2006.
This thesis examines the way in which meaningful facial signals (i.e., eye gaze and emotional facial expressions) influence the allocation of visual attention. These signals convey information about the likely imminent behaviour of the sender and are, in turn, potentially relevant to the behaviour of the viewer. It is already well established that different signals influence the allocation of attention in different ways that are consistent with their meaning. For example, direct gaze (i.e., gaze directed at the viewer) is considered both to draw attention to its location and to hold attention when it arrives, whereas observing averted gaze is known to create corresponding shifts in the observer's attention. However, the circumstances under which these effects occur are not yet fully understood. The first two sets of experiments in this thesis tested directly whether direct gaze is particularly difficult to ignore when the task is to ignore it, and whether averted gaze will shift attention when it is not relevant to the task. Results suggest that direct gaze is no more difficult to ignore than closed eyes, and the shifts in attention associated with viewing averted gaze are not evident when the gaze cues are task-irrelevant. This challenges the existing understanding of these effects. The remaining set of experiments investigated the role of gaze direction in the allocation of attention to emotional facial expressions. Without exception, previous work on this issue has measured the allocation of attention to such expressions when gaze is directed at the viewer. Results suggest that while the type of emotional expression (i.e., angry or happy) does influence the allocation of attention, the associated gaze direction does not, even when the participants are divided in terms of anxiety level (a variable known to influence the allocation of attention to emotional expressions). These findings are discussed in terms of how the social meaning of the stimulus can influence preattentive processing. This work also serves to highlight the need for general theories of visual attention to incorporate such data. Not to do so fundamentally risks misrepresenting the nature of attention as it operates outwith the laboratory setting.
|