271 |
Reconhecimento automático de expressões faciais por dispositivos móveis
Domingues, Daniel Chinen, January 2014 (has links)
Orientador: Prof. Dr. Guiou Kobayashi / Dissertação (mestrado) - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2014. / A computação atual vem demandando, cada vez mais, formas avançadas de interação com
os computadores. A interface do humano com seus dispositivos móveis carece de métodos
mais avançados, e um recurso automático de reconhecimento de expressões faciais seria
uma maneira de alcançar patamares maiores nessa escala de evolução. A forma como se
dá o reconhecimento de emoções humanas e o que as expressões faciais representam em
uma comunicação face a face vêm sendo referência no desenvolvimento desses sistemas
computacionais e com isso, pode-se elencar três grandes desafios para implementar o
algoritmo de análise de expressões: localizar o rosto na imagem, extrair os elementos
faciais relevantes e classificar os estados de emoções. O melhor método de resolução de
cada um desses sub-desafios, que se relacionam fortemente, determinará a viabilidade, a
eficiência e a relevância de um novo sistema de análise de expressões embarcada nos
dispositivos portáteis. Este estudo tem como objetivo avaliar a viabilidade da implantação de
um sistema automático de reconhecimento de expressões faciais por imagens, em
dispositivo móvel, utilizando a plataforma iOS da Apple, integrada com a biblioteca de código
aberto e muito utilizada na comunidade da computação visual, o OpenCV. O algoritmo Local
Binary Pattern, implementado pelo OpenCV, foi escolhido como lógica de rastreamento da
face. Os algoritmos AdaBoost e Eigenface foram, respectivamente, adotados para extração
e classificação da emoção e ambos são também suportados pela mencionada biblioteca. O
Módulo de Classificação Eigenface demandou um treinamento adicional em um ambiente de
maior capacidade de processamento e externo à plataforma móvel; posteriormente, apenas
o arquivo de treino foi exportado e consumido pelo aplicativo modelo. O estudo permitiu
concluir que o Local Binary Pattern é muito robusto a variações de iluminação e muito
eficiente no rastreamento da face; o Adaboost e Eigenface produziram eficiência de
aproximadamente 65% na classificação da emoção, quando utilizadas apenas as imagens de
pico no treino do módulo, condição essa, necessária para manutenção do arquivo de treino
em um tamanho compatível com o armazenamento disponível nos dispositivos dessa
categoria. / Modern computing increasingly demands more advanced forms of interaction with computers. The human interface with mobile devices lacks more advanced methods, and an automatic facial expression recognition capability would be one way to reach higher levels on this scale of evolution. The way human emotions are recognized and what facial expressions represent in face-to-face communication have guided the development of these computer systems, and from this three major challenges can be listed for an expression analysis algorithm: locating the face in the image, extracting the relevant facial features, and classifying the emotional states. The best method for solving each of these strongly related sub-challenges determines the feasibility, the efficiency, and the relevance of a new expression analysis system embedded in portable devices. To evaluate the feasibility of developing automatic recognition of facial expressions in images, we implemented a mobile system model on the Apple iOS platform, integrated with OpenCV, an open-source library widely used in the computer vision community. The Local Binary Pattern algorithm implemented by OpenCV was chosen as the face tracking logic; AdaBoost and Eigenface were adopted, respectively, for feature extraction and emotion classification, and both are also supported by the library. The Eigenface Classification Module was trained in an environment with greater processing capacity, external to the mobile platform; subsequently, only the training file was exported and consumed by the model application. The experiment showed that Local Binary Pattern is very robust to lighting variations and very efficient at tracking the face; AdaBoost and Eigenface achieved approximately 65% accuracy in emotion classification when only peak-expression images were used to train the module, a condition necessary to keep the training file at a size compatible with the storage available on devices of this category.
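For illustration only, a minimal Python/OpenCV sketch of the kind of pipeline the abstract describes is given below: an LBP cascade localizes the face and a previously trained Eigenface model classifies the cropped region. The file names, the 128x128 crop size, and the emotion label map are assumptions, not the assets used in the dissertation.

# Minimal sketch (not the dissertation's actual code): LBP cascade face
# detection followed by Eigenface emotion classification with OpenCV.
# File names, the crop size, and the label map are illustrative assumptions.
import cv2

face_cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")  # LBP face detector
recognizer = cv2.face.EigenFaceRecognizer_create()                  # requires opencv-contrib-python
recognizer.read("eigenface_emotions.yml")  # model trained offline on peak-expression images

EMOTIONS = {0: "neutral", 1: "happiness", 2: "sadness", 3: "surprise"}  # hypothetical labels

def classify_emotions(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # Eigenfaces expects every sample at the same fixed size used in training
        roi = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
        label, distance = recognizer.predict(roi)
        results.append((EMOTIONS.get(label, "unknown"), distance))
    return results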
|
272 |
Estudo de associação entre déficits de reconhecimento de emoções em faces, flexibilidade mental e adequação social, em pacientes com transtorno bipolar do tipo I eutímicos, comparados com controles normais / Association between facial emotion recognition, mental flexibility and social adjustment deficits in bipolar disorder type I euthymic patients compared to normal controls
Denise Petresco David, 14 March 2016 (has links)
Introdução: O objetivo do estudo foi investigar se há associação entre déficits na capacidade de reconhecimento de emoções faciais e déficits na flexibilidade mental e na adequação social em pacientes com Transtorno Bipolar do tipo I eutímicos quando comparados a sujeitos controles sem transtorno mental. Métodos: 65 pacientes com Transtorno Bipolar do tipo I eutímicos e 95 controles sem transtorno mental foram avaliados no reconhecimento de emoções faciais, na flexibilidade mental e na adequação social através de avaliações clínicas e neuropsicológicas. Os sintomas afetivos foram avaliados através da Escala de Depressão de Hamilton e da Escala de Mania de Young, o reconhecimento de emoções faciais através da Facial Expressions of Emotion: Stimuli and Tests, a flexibilidade mental através do Wisconsin Card Sorting Test e a adequação social através da Escala de Auto-Avaliação de Adequação Social. Resultados: Pacientes com Transtorno Bipolar do tipo I eutímicos apresentam uma associação de maior intensidade, comparativamente aos controles, entre o reconhecimento de emoções faciais e a flexibilidade mental, indicando que quanto mais preservada a flexibilidade mental, melhor será a habilidade para reconhecer emoções faciais. Neste grupo, as correlações de todas as emoções são positivas com o total de acertos e as categorias e são negativas com as respostas perseverativas, total de erros, erros perseverativos e erros não perseverativos. Não houve uma correlação entre o reconhecimento de emoções faciais e a adequação social, apesar de os pacientes com Transtorno Bipolar do tipo I eutímicos apresentarem uma pior adequação social, sinalizando que a pior adequação social não parece ser devida a uma dificuldade em reconhecer e interpretar adequadamente as expressões faciais. Os pacientes com Transtorno Bipolar do tipo I eutímicos não apresentam diferenças significativas no reconhecimento de emoções faciais em relação aos controles, entretanto no subteste surpresa (p=0,080) as diferenças estão no limite da significância estatística, indicando que portadores de transtorno bipolar do tipo I eutímicos tendem a apresentar um pior desempenho no reconhecimento da emoção surpresa em relação aos controles. Conclusão: Nossos resultados reforçam a hipótese de que existe uma associação entre o reconhecimento de emoções faciais e a preservação do funcionamento executivo, mais precisamente a flexibilidade mental, indicando que quanto maior a flexibilidade mental, melhor será a habilidade para reconhecer emoções faciais e melhor o desempenho funcional do paciente. Pacientes bipolares do tipo I eutímicos apresentam uma pior adequação social quando comparados aos controles, o que pode ser uma consequência do Transtorno Bipolar, o que ratifica a necessidade de uma intervenção terapêutica rápida e eficaz nestes pacientes. / Introduction: The aim of this study was to investigate whether there is an association between deficits in the ability to recognize facial emotions and deficits in mental flexibility and social adjustment in bipolar disorder type I euthymic patients compared to control subjects without mental disorder. Methods: 65 bipolar disorder type I euthymic patients and 95 controls without mental disorder were evaluated for recognition of facial emotions, mental flexibility and social adjustment through clinical and neuropsychological evaluations. 
Affective symptoms were assessed using the Hamilton Depression Rating Scale and Young Mania Rating Scale, recognition of facial emotions through the Facial Expressions of Emotion: Stimuli and Tests, mental flexibility using the Wisconsin Card Sorting Test, and social adjustment through the Social Adjustment Scale - Self Report. Results: Bipolar disorder type I euthymic patients show a stronger association between recognition of facial emotions and mental flexibility than controls, indicating that the more preserved the mental flexibility, the better the ability to recognize facial emotions. In this group, the correlations of all emotions are positive with the total of correct answers and with the categories, and are negative with perseverative responses, total errors, perseverative errors and non-perseverative errors. There was no correlation between the recognition of facial emotions and social adjustment, although bipolar disorder type I euthymic patients present worse social adjustment, indicating that the worse social adjustment does not seem to be due to a difficulty in recognizing and properly interpreting facial expressions. Bipolar disorder type I euthymic patients showed no significant differences in recognition of facial emotions compared to controls; however, in the surprise subtest (p = 0.080) the differences are at the limit of statistical significance, indicating that bipolar disorder type I euthymic people tend to have a worse performance in the recognition of the surprise emotion compared to controls. Conclusion: Our results support the hypothesis that there is an association between the recognition of facial emotions and preservation of executive functioning, specifically mental flexibility, indicating that the greater the mental flexibility, the better the ability to recognize facial emotions and the better the functional performance of the patient. Euthymic bipolar type I patients have worse social adjustment compared to controls, which may be a consequence of bipolar disorder and confirms the need for rapid and effective therapeutic intervention in these patients.
|
273 |
Modulation émotionnelle de la perception de l’action motrice d’autrui / Emotional modulation of perception of others’ motor action
Prigent, Elise, 15 November 2012 (has links)
L’être humain est un être social amené à comprendre les comportements moteurs d’autrui. Selon la littérature, nous disposons de mécanismes cognitifs spécifiques, d’une part à la perception d’un corps humain (qu’il soit statique ou en mouvement), et d’autre part à la perception des expressions faciales émotionnelles. Ce travail de thèse vise à comprendre dans quelle mesure l'émotion véhiculée par le visage d'une personne, peut moduler notre perception de son action motrice. Les résultats de l’étude 1 ont montré que l’estimation de l’équilibre statique d’autrui pouvait être modulée par l’expression faciale émotionnelle (de sourire ou de crispation) exprimée par celui-ci. L’étude 2, a porté sur l’estimation de l’effort physique développé par une personne uniquement à partir de son expression faciale de douleur. Les résultats ont montré que les participants, dans ce type de tâche, utilisent deux mécanismes perceptifs automatiques. Le premier, mis en évidence par mesure fonctionnelle, facilite l’estimation de l’intensité de douleur à l’effort ressentie par autrui. Le second, démontré par la mesure d’un biais de mémorisation, entraîne une anticipation automatique de la suite de l’évolution de l’expression faciale de douleur à l’effort présentée. L’étude 3 a montré que l’estimation de l’effort physique développé par une personne atteinte de paraplégie réalisant un mouvement de transfert, est modulée par deux comportements de douleur (l’auto-protection et l’expression faciale de douleur). Toutefois, cette modulation diffère selon la familiarité des participants avec le monde médical et la paraplégie. En conclusion, ce travail de recherche propose que la modulation émotionnelle de la perception de l’action motrice d’autrui est en premier lieu sous-tendue par un processus automatique et implicite de contagion émotionnelle (bottom-up). Toutefois, cette dernière peut être inhibée par un processus explicite (top-down) qui dépendrait d’une part du type d’inférence à effectuer sur autrui (estimer l’équilibre postural ou l’effort physique développé), et d’autre part de la familiarité de l’observateur avec l’action motrice et les expressions faciales présentées. / Understanding others’ motor behaviour is part and parcel of Humans’ social experience. According to scientific literature, we rely on specific mechanisms for perceiving human bodies (whether static or moving) on the one hand, and processing emotional facial expressions on the other hand. This thesis aims to understand to what extent the emotion conveyed by a person’s face can modulate one’s perception of her/his motor action. Results of study 1 showed that our estimation of an individual’s static equilibrium is modulated by the observed individual’s emotional facial expression (smiling or tensed). Study 2 focused on perceptual estimation of the physical effort developed by a person on the basis of his facial expression of pain alone. Results revealed that participants adopt two automatic perceptual mechanisms. The first, highlighted via functional measurement, facilitates estimating the intensity of effort pain felt by others. The second, evidenced by measuring memory bias, leads to an automatic anticipation of the subsequent changes in the intensity of pain-related facial expressions. Study 3 showed that the estimation of physical effort developed by a paraplegic individual performing a transfer movement is modulated by two pain behaviours (guarding and facial expression of pain). 
Interestingly, this modulation varies with participants’ familiarity with both the medical domain and paraplegia. The conclusion of this research suggests that the emotional modulation of the perception of others’ motor action is primarily underpinned by an automatic, implicit (bottom-up) process of emotional contagion. However, the latter can be inhibited by an explicit (top-down) process which may depend on (1) the type of inference made about others (estimating their postural balance or the physical effort they develop), and (2) the observer’s familiarity with the motor action and facial expressions presented.
|
274 |
Affective Empathy in Children: Measurement and Correlates
Hunter, Kirsten, January 2004 (has links)
Empathy is a construct that plays a pivotal role in the development of interpersonal relationships, and thus one's ability to function socially and often professionally. The development of empathy in children is therefore of particular interest to allow for further understanding of normative and atypical developmental trajectories. This thesis investigated the assessment of affective empathy in children aged 5-12, through the development and comparison of a multimethod assessment approach. Furthermore, this thesis evaluated the differential relationships between affective empathy and global behavioural problems in children versus the presence of early psychopathic traits, such as callous-unemotional traits. The first component of this study incorporated: a measure of facial expression of affective empathy and self-reported experience of affective empathy, as measured by the newly designed Griffith Empathy Measure - Video Observation (GEM-VO) and the Griffith Empathy Measure - Self Report (GEM-SR); Bryant's Index of Empathy for Children and Adolescents (1982), which is a traditional child self-report measure; and a newly designed parent report of child affective empathy (Griffith Empathy Measure - Parent Report; GEM-PR). Using a normative community sample of 211 children from grades 1, 3, 5, and 7 (aged 5-6, 7-8, 9-10, & 11-12, respectively), the GEM-PR and the Bryant were found to have moderate to strong internal consistency. As a measure of concurrent validity, strong positive correlations were found between the mother and father reports (GEM-PR) of their child's affective empathy, for grades 5 and 7, and for girls of all age groups. Using a convenience sample of 31 parents and children aged 5 to 12, the GEM-PR and the Bryant demonstrated strong test-retest reliability. The reliability of the GEM-VO and the GEM-SR was assessed using a convenience sample of 20 children aged 5 to 12. These measures involve the assessment of children's facial and verbal responses to emotionally evocative videotape vignettes. Children were unobtrusively videotaped while they watched the vignettes and their facial expressions were coded. Children were then interviewed to determine the emotions they attributed to stimulus persons and to themselves whilst viewing the material. Adequate to strong test-retest reliability was found for both measures. Using 30% of the larger sample of 211 participants (N=60), the GEM-VO also demonstrated robust inter-rater reliability. This multimethod approach to assessing child affective empathy produced differing age and gender trends. Facial affect as reported by the GEM-VO decreased with age. Similarly, the matching of child facial emotion to the vignette protagonist's facial emotion was higher in the younger grades. These findings suggest that measures that assess the matching of facial affect (i.e., GEM-VO) may be more appropriate for younger age groups who have not yet learnt to conceal their facial expression of emotion. Data from the GEM-SR suggest that older children are more verbally expressive of negative emotions than younger children, with older girls found to be the most verbally expressive of feeling the same emotion as the vignette character, a role more complementary to female gender socialization pressures. These findings are also indicative of the increase in emotional vocabulary and self-awareness in older children, supporting the validity of child self-report measures (based on observational stimuli) with older children. 
In comparing data from the GEM-VO and GEM-SR, this study found that for negative emotions the consistency between facial emotions coded and emotions verbally reported increased with age. This consistency across gender and amongst the older age groups provides encouraging concurrent validity, suggesting the results of one measure could be inferred through the exclusive use of the alternate measurement approach. In contrast, affective empathy as measured by the two measures (the accurate matching of the participant's and the vignette character's facial expression, GEM-VO, and the accurate matching of the self-reported and the vignette character's emotion, GEM-SR) was not found to converge. This finding is consistent with prior research and questions the assumption that facially expressed and self-appraised indexes of affective empathy are different aspects of a complex unified process. When evaluating the convergence of all four measures of affective empathy, negative correlations were found between the Bryant and the GEM-PR; these two measures were also found not to converge with the GEM-VO and GEM-SR in a consistent and predictable way. These findings pose the question of whether different aspects of the complex phenomenon of affective empathy are being assessed. Furthermore, the validity of the exclusive use of a child self-report measure such as the Bryant, which is the standard assessment in the literature, is questioned. The possibility that callous-unemotional traits (CU; a unique subgroup identified in the child psychopathy literature) may account for the mixed findings throughout research regarding the assumption that deficiencies in empathy underlie conduct problems in children was examined using regression analysis. Using the previous sample of 211 children aged 5-12, conduct problems (CP) were measured using the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1999), and the CU subscale was used from the Antisocial Process Screening Device (APSD; Caputo, Frick, & Brodsky, 1999). Affective empathy, when measured by the GEM-PR and the Bryant, showed differing patterns in the relationship between affective empathy, CU traits and CP. While the GEM-Father report indicated that neither age, CU traits nor CP accounted for affective empathy variance, the GEM-Mother report supported that affective empathy was no longer associated with CP once CU traits had been partialled out. In contrast, the Bryant indicated that, for girls, CU traits did not have an underlying correlational relationship. It can be argued from the GEM-Mother data only that it was the unmeasured variance of CU traits that was accounting for the relationship between CP and affective empathy found in the literature. Furthermore, the comparison of an altered CU subscale with all possible empathy items removed suggests that the constructs of CU traits and affective empathy are not synonymous or overlapping in nature, but rather are two independent constructs. This multimethod approach highlights the complexity of this research area, exemplifying the significant influence of the source of the reports, and suggesting that affective empathy consists of multiple components that are assessed to differing degrees by the different measurement approaches.
|
275 |
Social Agent: Facial Expression Driver for an e-Nose
Widmark, Jörgen, January 2003 (has links)
This thesis shows that it is possible to drive synthetic emotions of an interface agent with an electronic nose system developed at AASS. The e-Nose can be used for quality control, and the detected distortion from a known smell sensation prototype is interpreted as a 3D representation of emotional states, which in turn points to a set of pre-defined muscle contractions. This extension of a rule-based motivation system, which we call the Facial Expression Driver, is incorporated into a model for sensor fusion with active perception, to provide a general design for a more complex system with additional senses. To be consistent with the biologically inspired sensor fusion model, a muscle-based animated facial model was chosen as a test bed for the expression of the current emotion. The social agent’s facial expressions demonstrate its tolerance to the detected distortion in order to manipulate the user to restore the system to functional balance. Only a few of the known projects use chemically based sensing to drive a face in real time, whether they are virtual characters or animatronics. This work may inspire a future android implementation of a head with electroactive polymers as synthetic facial muscles.
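A hypothetical sketch of the driver's mapping is given below, purely to illustrate the idea of turning a distortion from a known smell prototype into a 3D emotional state and then into muscle contractions; the dimension names, formulas, and muscle weights are invented for the example and are not taken from the thesis.

# Invented illustration of an e-Nose-driven Facial Expression Driver:
# distortion from a smell prototype -> 3D emotional state -> muscle weights.
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float   # displeasure .. pleasure
    arousal: float   # calm .. excited
    stance: float    # closed .. open

def distortion_to_emotion(distortion: float) -> EmotionState:
    """Larger deviation from the smell prototype lowers valence and raises arousal (assumed rule)."""
    d = max(0.0, min(1.0, distortion))
    return EmotionState(valence=1.0 - 2.0 * d, arousal=d, stance=1.0 - d)

def emotion_to_muscles(state: EmotionState) -> dict:
    """Map the 3D state to contraction weights of a muscle-based facial model (hypothetical mapping)."""
    return {
        "zygomaticus_major": max(0.0, state.valence),        # smile
        "corrugator_supercilii": max(0.0, -state.valence),   # frown
        "levator_palpebrae": 0.5 + 0.5 * state.arousal,      # eye opening
    }

# Example: a strongly distorted smell produces a tense, frowning expression.
print(emotion_to_muscles(distortion_to_emotion(0.8)))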
|
276 |
Robust recognition of facial expressions on noise degraded facial images
Sheikh, Munaf, January 2011 (has links)
We investigate the use of noise-degraded facial images in the application of facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expressions in images with various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt-and-pepper noise, and speckle noise to noiseless facial images. Classifiers were trained with images without noise and then tested on the images with noise. Next, the classifiers were trained using images with noise and then tested on both images that had noise and images that were noiseless. Finally, classifiers were tested on images while increasing the level of salt-and-pepper noise in the test set. Our results reflected distinct degradation of recognition accuracy. We also discovered that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates to levels greater than would be achieved with normal, noiseless images. We attribute this effect to the Gaussian envelope component of Gabor filters being sympathetic to Gaussian-like noise, which is similar in variance to that of the Gabor filters. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to suggest how recognition rates would degrade further should more noise be added to the images.
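A minimal sketch of the kind of Gabor feature extraction and noise injection described here is shown below, using OpenCV and scikit-learn; the filter-bank parameters, image handling, and noise level are illustrative assumptions rather than the settings used in the thesis.

# Minimal sketch (assumed parameters): Gabor filter-bank features plus an SVM,
# with Gaussian noise injected into test images.
import numpy as np
import cv2
from sklearn.svm import SVC

def gabor_features(gray, ksize=21, scales=(4, 8, 16), n_orient=8):
    """Concatenate mean/std responses of a small Gabor filter bank."""
    feats = []
    for lam in scales:
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            # args: kernel size, sigma, theta, wavelength, spatial aspect ratio
            kern = cv2.getGaborKernel((ksize, ksize), lam / 2.0, theta, lam, 0.5)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

def add_gaussian_noise(gray, sigma=10.0):
    noisy = gray.astype(np.float32) + np.random.normal(0.0, sigma, gray.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Hypothetical usage, assuming X_imgs is a list of grayscale face crops and y the labels:
# clf = SVC(kernel="linear").fit([gabor_features(im) for im in X_imgs], y)
# acc = clf.score([gabor_features(add_gaussian_noise(im)) for im in X_test], y_test)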
|
277 |
Toward Understanding Human Expression in Human-Robot Interaction
Miners, William Ben, January 2006 (has links)
Intelligent devices are quickly becoming necessities to support our activities during both work and play. We are already bound in a symbiotic relationship with these devices. An unfortunate effect of the pervasiveness of intelligent devices is the substantial investment of our time and effort to communicate intent. Even though our increasing reliance on these intelligent devices is inevitable, the limits of conventional methods for devices to perceive human expression hinder communication efficiency. These constraints restrict the usefulness of intelligent devices to support our activities. Our communication time and effort must be minimized to leverage the benefits of intelligent devices and seamlessly integrate them into society. Minimizing the time and effort needed to communicate our intent will allow us to concentrate on tasks in which we excel, including creative thought and problem solving.

An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard- and mouse-based interfaces.

Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.

This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.

The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain-specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
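As a toy illustration of the fusion idea (not the knowledge-based, closed-loop architecture developed in the thesis), the sketch below combines confidence scores from a gesture recognizer and a facial expression recognizer into a single intent estimate and exposes the size of the disagreement, which a closed-loop system could use to ask for clarification. The trust weights and intent labels are invented for the example.

# Toy confidence-weighted fusion of two modalities; invented weights and labels.
from collections import defaultdict

def fuse_intents(gesture_scores, face_scores, gesture_trust=0.6, face_trust=0.4):
    """Combine per-intent confidence scores; disagreement on the winner signals a conflict."""
    fused = defaultdict(float)
    for intent, p in gesture_scores.items():
        fused[intent] += gesture_trust * p
    for intent, p in face_scores.items():
        fused[intent] += face_trust * p
    best = max(fused, key=fused.get)
    # A large gap between the modalities on the winning intent is a conflict that a
    # closed-loop system could resolve by requesting clarification from the user.
    conflict = abs(gesture_scores.get(best, 0.0) - face_scores.get(best, 0.0))
    return best, fused[best], conflict

# Example: the gesture strongly suggests "stop" while the face mildly disagrees.
print(fuse_intents({"stop": 0.8, "go": 0.2}, {"stop": 0.4, "go": 0.6}))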
|
279 |
Age-related differences in deceit detection: The role of emotion recognition
Tehan, Jennifer R., 17 April 2006
This study investigated whether age differences in deceit detection are related to impairments in emotion recognition. Key cues to deceit are facial expressions of emotion (Frank and Ekman, 1997). The aging literature has shown an age-related decline in decoding emotions (e.g., Malatesta, Izard, Culver, and Nicolich, 1987). In the present study, 354 participants were presented with 20 interviews and asked to decide whether each man was lying or telling the truth. Ten interviews involved a crime and ten a social opinion. Each participant was in one of three presentation conditions: 1) visual only, 2) audio only, or 3) audio-visual. For crime interviews, age-related impairments in emotion recognition hindered older adults in the visual only condition. In the opinion topic interviews, older adults exhibited a truth bias which rendered them worse at detecting deceit than young adults. Cognitive and dispositional variables did not help to explain the age differences in the ability to detect deceit.
|
280 |
Recognition Of Human Face Expressions
Ener, Emrah, 01 September 2006 (has links) (PDF)
In this study a fully automatic and scale-invariant feature extractor which does not require manual initialization or special equipment is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size; then upper and lower facial templates are used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and the neutral expression image are used for expression classification. Performances of different classifiers are evaluated. Performance of the proposed feature extractor is also tested on sample video sequences. Facial features are extracted in the first frame and a KLT tracker is used for tracking the extracted features. Lost features are detected using face geometry rules and are relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method which analyzes the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations. Filtered images are combined to form Gabor jets. The dimensionality of the Gabor jets is reduced using Principal Component Analysis. Performances of different classifiers on low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
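The video-tracking stage described above can be approximated with OpenCV's pyramidal Lucas-Kanade (KLT) tracker; the sketch below, with illustrative parameter values, follows feature points from frame to frame and reports which points were lost so that a feature extractor could relocate them.

# Sketch of KLT feature tracking with OpenCV; parameter values are assumptions.
import numpy as np
import cv2

lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

def track_features(prev_gray, next_gray, points):
    """Track facial feature points between frames; return kept points and indices of lost points."""
    pts = np.float32(points).reshape(-1, 1, 2)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None, **lk_params)
    kept = [tuple(p.ravel()) for p, ok in zip(new_pts, status) if ok]
    lost = [i for i, ok in enumerate(status.ravel()) if not ok]
    return kept, lost  # lost points would be re-detected by the feature extractor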
|