51 |
A study of the relationship between shyness and recognition of facial expression and emotion in a sample of young adults / Shyness
Graves-O'Haver, Laura M. January 2009 (has links)
Previous research indicates a link between shyness and the ability to recognize facial expressions of emotion, particularly among children. The current study examined college students' facial expression recognition as a potential influence on their levels of self-reported shyness. Three factors related to facial expression recognition were examined: the participants' ability to accurately identify facial expressions, their ratings of the intensity of the faces, and their tendency to make positive or negative interpretation errors. Demographic variables, introversion, self-esteem, and mood were also examined for their ability to predict shyness. The results indicated a weak relationship between facial expression recognition and shyness. Possible limitations and future directions for research are addressed in light of these findings. / Access to thesis permanently restricted to Ball State community only / Department of Psychological Science
|
52 |
Síntese de expressões faciais em fotografias para representação de emoções / Facial expression synthesis in photographs for emotion representation
Testa, Rafael Luiz 04 December 2018 (has links)
The ability to process and identify facial expressions of emotion is essential for social interaction. Some psychiatric disorders can limit an individual's ability to recognize emotions in facial expressions. This problem can be addressed with computational techniques that support tools for diagnosis, evaluation, and training in the recognition of such expressions. With this motivation, the objective of this work is to define, implement, and evaluate a method to synthesize realistic facial expressions that represent emotions in images of real people. The main idea in the studies found in the literature is that the facial expression in an image of one person can be reenacted in an image of another person. This study differs from the approaches presented in the literature by proposing a technique that considers the similarity between facial images to choose the one to be used as the source for reenactment, with the aim of increasing the realism of the synthesized images. Besides searching an image database for the most similar faces, the proposed approach deforms the facial components and maps the illumination differences onto the target image. The realism of the generated images was measured objectively and subjectively using images from public image databases. A visual analysis showed that images synthesized from similar faces presented an adequate degree of realism, especially when compared with images synthesized from random faces. The study contributes to the generation of images for tools that support the diagnosis and therapy of psychiatric disorders, and also contributes to Computer Science through the proposition of new techniques for facial expression synthesis.
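As an illustration (not taken from the thesis), a minimal sketch of the similarity-based choice of a source face described above, assuming each face is represented by a set of 2D landmark coordinates; landmark extraction, reenactment, deformation, and illumination mapping are only hinted at in comments, and all names and data are illustrative.

```python
import numpy as np

def normalize_landmarks(pts):
    """Translate to the centroid and scale to unit RMS size, so similarity
    is not dominated by face position or image scale."""
    pts = np.asarray(pts, dtype=float)
    centered = pts - pts.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum() / len(centered))
    return centered / scale

def most_similar_face(target_pts, candidate_db):
    """Return the key of the candidate whose normalized landmarks are closest
    to the target's, measured by mean Euclidean distance per landmark."""
    t = normalize_landmarks(target_pts)
    best_key, best_dist = None, np.inf
    for key, pts in candidate_db.items():
        d = np.linalg.norm(normalize_landmarks(pts) - t, axis=1).mean()
        if d < best_dist:
            best_key, best_dist = key, d
    return best_key, best_dist

# Illustrative usage with random 68-point landmark sets (the count produced by
# common landmark detectors). In the thesis, the selected face would then serve
# as the source for expression reenactment, deformation, and illumination mapping.
rng = np.random.default_rng(0)
target = rng.random((68, 2))
database = {f"face_{i}": rng.random((68, 2)) for i in range(5)}
print(most_similar_face(target, database))
```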
|
53 |
Embedded Face Detection and Facial Expression Recognition
Zhou, Yun 30 April 2014 (has links)
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment, and health care. Two main reasons for the extensive attention on this research domain are: 1) the widespread use of security systems creates an obvious and strong need for face recognition, and 2) face recognition is more user-friendly and faster, since it requires almost nothing of the user. The system is based on an ARM Cortex-A8 development board and includes porting the Linux operating system, developing device drivers, and detecting faces using Haar-like features and the Viola-Jones algorithm. In this thesis, the face detection system uses the AdaBoost algorithm to detect human faces in frames captured by the camera, and the pros and cons of several popular image processing algorithms are discussed. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online testing part. An active shape model (ASM) for facial feature point detection, optical flow for face tracking, and a support vector machine (SVM) for classification are applied in this research.
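As an illustration (not from the thesis), a minimal sketch of Viola-Jones (Haar cascade) face detection on a camera frame using OpenCV in Python; the embedded Linux and driver work and the ASM/optical-flow/SVM expression pipeline are outside this snippet, and the parameters shown are illustrative defaults.

```python
import cv2

# Pretrained Haar cascade shipped with OpenCV (a Viola-Jones style detector,
# trained with AdaBoost on Haar-like features).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # default camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:     # one bounding box per detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.png", frame)
cap.release()
```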
|
55 |
Connectionist models of the perception of facial expressions of emotion
Mignault, Alain, 1962- January 1999 (has links)
No description available.
|
56 |
Immediate effects of a relaxation treatment upon subject perception of facial expression of emotion
Whittington, Kathryn Darlene 03 June 2011 (has links)
The purpose of this study was to determine what immediate effects a relaxation treatment had upon subjects' perception of facial expression of emotion with state anxiety held constant. Specifically, this study compared the subsequent perception of facial expression of emotion of subjects who received a 25-minute tape-recorded relaxation treatment with that of subjects who did not receive the treatment. The research hypothesis was stated in the null form.

A review of the relevant literature on facial expression of emotion, relaxation treatment, and training programs designed for therapists supported the need for the study. In addition, the research indicated that techniques for reliably evaluating facial expression of emotion were not extant.

All subjects were graduate-level students enrolled in at least one Guidance and Counseling course offered spring quarter, 1978, at a midwestern university. The university's Research Computing Unit randomly selected 80 subjects from the total population of 167 potential subjects. Randomly selected subjects were then randomly assigned to either the experimental group or the control group, with the sex of the subject controlled for in the random assignment. Each group, experimental and control, consisted of 20 males and 20 females. Experimental group subjects ranged in age from 22 to 40, with a mean age of 29.8. Control group subjects ranged in age from 22 to 57, with a mean age of 30.7. All 80 randomly selected subjects were scheduled to participate in the study at one time.

The Multiple Affect Adjective Check List, Today Form (MAACL) was used to obtain each subject's state anxiety score (the covariate measure). Following the administration of the MAACL, experimental group subjects received a 25-minute tape-recorded relaxation treatment. The Pictures of Facial Affect (PFA) was administered to both groups to measure the subject's perception of facial expression of emotion. The PFA consists of 110 high-quality slides which depict 7 facial expressions of emotion; its 7 subtests are happy, sad, fear, anger, surprise, disgust, and neutral. The PFA was administered to the experimental group following the relaxation treatment. The control group, which received no treatment, was given the PFA following the administration of the MAACL.

Preliminary to the analysis of data, a KR-20 subtest analysis conducted on the PFA resulted in discarding the happy, fear, and surprise subtests, which lacked internal reliability. Further, the null hypothesis of no relation between the covariate (state anxiety as measured by the MAACL) and the set of selected dependent measures of the PFA was not rejected. The revised null hypothesis was tested through a multivariate analysis of variance, with the F test evaluated at the .05 significance level. The results indicated that the revised null hypothesis was not rejected. Under the constraints of the study, the following conclusion was made: no significant differences in subsequent perception of facial expression of emotion, as measured by the PFA, were found between subjects who received the relaxation treatment and subjects who did not. However, an additional finding of the study was a significant difference between men and women in their perception of facial expression of emotion. Suggestions for future research were offered based upon the analysis of data.
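For reference (not part of the abstract), a small sketch of the Kuder-Richardson Formula 20 statistic used in the subtest reliability check above, assuming each subtest is scored as a matrix of dichotomous (0/1) item responses; the data and the cutoff mentioned in the comment are illustrative.

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomously scored items.
    `responses` is an (n_subjects, n_items) array of 0/1 scores."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]                          # number of items
    p = x.mean(axis=0)                      # proportion correct per item
    item_var = (p * (1 - p)).sum()          # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)   # variance of subjects' total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Illustrative: a subtest whose KR-20 falls below a chosen cutoff (e.g. .70)
# would be judged to lack internal reliability and dropped from the analysis.
rng = np.random.default_rng(1)
subtest = rng.integers(0, 2, size=(40, 10))
print(round(kr20(subtest), 3))
```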
|
57 |
Facial Expression Recognition System
Ren, Yuan January 2008 (has links)
A key requirement for developing any innovative system in a computing environment is to integrate a sufficiently friendly interface with the average end user. Accurate design of such a user-centered interface, however, means more than just the ergonomics of the panels and displays. It also requires that designers precisely define what information to use and how, where, and when to use it. Facial expression as a natural, non-intrusive and efficient way of communication has been considered as one of the potential inputs of such interfaces. The work of this thesis aims at designing a robust Facial Expression Recognition (FER) system by combining various techniques from computer vision and pattern recognition.
Expression recognition is closely related to face recognition, where a great deal of research has been done and a vast array of algorithms have been introduced. FER can also be considered a special case of the pattern recognition problem, for which many techniques are available. In designing an FER system, we can take advantage of these resources and use existing algorithms as building blocks of our system, so a major part of this work is to determine the optimal combination of algorithms. To do this, we first divide the system into 3 modules, i.e. Preprocessing, Feature Extraction and Classification; for each of these, several candidate methods are implemented, and the optimal configuration is found by comparing the performance of different combinations.
Another issue of great interest to designers of facial expression recognition systems is the classifier, which is the core of the system. Conventional classification algorithms assume the image is a single-variable function of an underlying class label. However, this is not true in the face recognition area, where the appearance of the face is influenced by multiple factors: identity, expression, illumination and so on. To solve this problem, in this thesis we propose two new algorithms, namely Higher Order Canonical Correlation Analysis and Simple Multifactor Analysis, which model the image as a multivariable function.
The addressed issues are challenging problems and are fundamental to developing a facial expression recognition system.
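As an illustration (not the thesis's actual implementation), a minimal sketch of the module-combination idea: candidate preprocessing, feature-extraction, and classification methods are plugged into a pipeline and the best-scoring combination is kept. The scikit-learn components and the stand-in digits dataset are assumptions made only to keep the example runnable.

```python
from itertools import product
from sklearn.datasets import load_digits          # stand-in data; the thesis uses face images
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

preprocessors = {"standardize": StandardScaler(), "minmax": MinMaxScaler()}
extractors = {"pca": PCA(n_components=30), "lda": LinearDiscriminantAnalysis()}
classifiers = {"svm": SVC(kernel="rbf"), "knn": KNeighborsClassifier(n_neighbors=5)}

best = (None, 0.0)
for (pn, p), (en, e), (cn, c) in product(preprocessors.items(),
                                         extractors.items(),
                                         classifiers.items()):
    # One Preprocessing / Feature Extraction / Classification combination.
    pipe = Pipeline([("pre", p), ("feat", e), ("clf", c)])
    score = cross_val_score(pipe, X, y, cv=5).mean()
    if score > best[1]:
        best = (f"{pn} + {en} + {cn}", score)

print("best combination:", best)
```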
|
59 |
The use of facial features in facial expression discrimination
Neath, Karly January 2012 (has links)
The present four studies are the first to examine the effect of presentation time on accurate facial expression discrimination while concurrently using eye movement monitoring to ensure fixation to specific features during the brief presentation of the entire face. Recent studies using backward masking and evaluating accuracy with signal detection methods (A') have identified a happy-face advantage; however, differences between other facial expressions of emotion have not been reported. In each study, a specific exposure time before the mask (150, 100, 50, or 16.67 ms) and eight different fixation locations were used during the presentation of neutral, disgusted, fearful, happy, and surprised expressions. An effect of emotion was found across all presentation times, such that the greatest performance was seen for happiness, followed by neutral, disgust, and surprise, with the lowest performance seen for fear. Fixation to facial features specific to an emotion did not improve performance and did not account for the differences in accuracy between emotions. Rather, the results suggest that accuracy depends on the integration of facial features, and that this integration varies across emotions and with presentation time.
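For readers unfamiliar with the accuracy measure mentioned above (not part of the abstract), a small sketch of the nonparametric sensitivity index A' computed from a hit rate and a false-alarm rate, following Grier's (1971) formula; the example rates are illustrative.

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity index A' (Grier, 1971): 0.5 is chance-level
    discrimination, 1.0 is perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Illustrative rates: many hits, few false alarms -> A' close to 1.
print(round(a_prime(0.90, 0.10), 3))  # 0.944
```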
|
60 |
Facial affect recognition in psychopathic offenders
Kreklewetz, Kimberly. January 2005 (has links)
Thesis (M.A.) - Simon Fraser University, 2005. / Theses (Dept. of Psychology) / Simon Fraser University. Also issued in digital format and available on the World Wide Web.
|