61 |
Facial emotion expression, recognition and production of varying intensity in the typical population and on the autism spectrum. Wingenbach, Tanja. January 2016 (has links)
The current research project aimed to investigate facial emotion processing using specially developed and validated video stimuli of facial emotional expressions at varying levels of intensity. Videos were developed showing real people expressing emotions in real time (anger, disgust, fear, sadness, surprise, happiness, contempt, embarrassment, and neutral) at different expression intensity levels (low, intermediate, high), called the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV). The ADFES-BIV was validated on all its emotion and intensity categories. Sex differences in facial emotion recognition were investigated, and a female advantage was found. This demonstrates that the ADFES-BIV is suitable for investigating group comparisons in facial emotion recognition in the general population. Facial emotion recognition from the ADFES-BIV was further investigated in a sample characterised by deficits in social functioning: individuals with an Autism Spectrum Disorder (ASD). A deficit in facial emotion recognition was found in ASD compared to controls, and error analysis revealed emotion-specific deficits in detecting emotional content from faces (sensitivity) alongside deficits in differentiating between emotions from faces (specificity). The ADFES-BIV was combined with facial electromyography (EMG) to investigate facial mimicry and the effects of proprioceptive feedback (from explicit imitation and blocked facial mimicry) on facial emotion recognition. Based on the reverse simulation model, it was predicted that facial mimicry would be an active component of the facial emotion recognition process. Experimental manipulations of face movements did not reveal an advantage of facial mimicry over the blocked facial mimicry condition.
While no support was found for the reverse simulation model, enhanced proprioceptive feedback can facilitate or hinder recognition of facial emotions, in line with embodied cognition accounts.
|
62 |
Análise de sinais de voz para reconhecimento de emoções / Analysis of speech signals for emotion recognition. Iriya, Rafael. 07 July 2014 (has links)
Esta pesquisa é motivada pela crescente importância do reconhecimento automático de emoções, em especial através de sinais de voz, e suas aplicações em sistemas para interação homem-máquina. Neste contexto, são estudadas as emoções Felicidade, Medo, Nojo, Raiva, Tédio e Tristeza, além do estado Neutro, que são emoções geralmente consideradas como essenciais para um conjunto básico de emoções. São investigadas diversas questões relacionadas à análise de voz para reconhecimento de emoções, explorando vários parâmetros do sinal de voz, como por exemplo frequência fundamental (pitch), energia de curto prazo, formantes, coeficientes cepstrais e são testadas diferentes técnicas para a classificação, envolvendo reconhecimento de padrões e métodos estatísticos, como K-vizinhos mais próximos (KNN), Máquinas de Vetores de Suporte (SVM), Modelos de Misturas de Gaussianas (GMM) e Modelos Ocultos de Markov (HMM), destacando-se o uso de GMM como principal técnica utilizada por seu custo computacional e desempenho. Neste trabalho é desenvolvido um sistema de identificação em estágio único obtendo-se resultados superiores a diversos sistemas na literatura, com uma taxa de reconhecimento de até 74,86%. Além disso, recorre-se à psicologia e à teoria de emoções para incorporar-se a noção do espaço de emoções e suas dimensões a fim de desenvolver-se um sistema de classificação sequencial em três estágios, que passa por classificações nas dimensões Ativação, Avaliação e Domínio. Este sistema apresenta uma taxa de reconhecimento superior ao do sistema de único estágio, com até 82,41%, ao mesmo tempo em que é identificado um ponto de atenção no sistema de três estágios, que pode apresentar dificuldades na identificação de emoções que possuem baixo índice de reconhecimento em um dos estágios.
Uma vez que existem poucos sistemas estado da arte que tratam o problema de verificação de emoções, um sistema também é desenvolvido para esta tarefa, obtendo-se um reconhecimento perfeito para as emoções Raiva, Neutro, Tédio e Tristeza. Por fim, é desenvolvido um sistema híbrido para tratar os problemas de verificação e de identificação em sequência, que tenta resolver o problema do classificador de três estágios e obtém uma taxa de reconhecimento de até 83%. / This work is motivated by the growing importance of automatic emotion recognition, especially through speech signals, and its applications in human-machine interaction systems. In this context, the emotions Happiness, Fear, Disgust, Anger, Boredom, and Sadness, plus the Neutral state, are selected for this study, as they are usually considered essential for a basic set of emotions. Several topics related to emotion recognition from speech are investigated, including speech features such as pitch, short-term energy, formants, and MFCCs, as well as different classification algorithms involving pattern recognition and statistical modelling: K-Nearest Neighbours (KNN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Hidden Markov Models (HMM), with GMM selected as the main technique for its computational cost and performance. A single-stage identification system is developed that outperforms several systems in the literature, with a recognition rate of up to 74.86%. In addition, the notion of emotional space and its dimensions from psychology and emotion theory is used to develop a sequential three-stage classification system that passes through classifications on the Activation, Evaluation, and Dominance dimensions.
This system outperforms the single-stage classifier, with a recognition rate of up to 82.41%; at the same time, a point of attention is identified, as this kind of system may have difficulty identifying emotions that show low recognition rates at a specific stage. Since few state-of-the-art systems handle emotion verification, a system for this task is also developed, achieving perfect recognition for the Anger, Neutral, Boredom, and Sadness emotions. Finally, a hybrid system is proposed to handle the verification and identification tasks in sequence, which attempts to solve the three-stage classifier's problem and achieves a recognition rate of up to 83%.
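The single-stage GMM identification scheme described in this abstract can be illustrated with a minimal sketch: one Gaussian mixture model is fit per emotion class, and a test sample is assigned to the class whose model yields the highest log-likelihood. This is not the thesis implementation; the real feature front end (pitch, short-term energy, formants, MFCCs) is replaced here by synthetic two-dimensional data, and the class names and parameters are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "feature frames" standing in for per-utterance MFCC frames.
train = {
    "anger":   rng.normal(loc=(0.0, 0.0), scale=0.5, size=(200, 2)),
    "sadness": rng.normal(loc=(3.0, 3.0), scale=0.5, size=(200, 2)),
}

# Fit one GMM per emotion class on that class's training frames.
models = {
    emotion: GaussianMixture(n_components=2, random_state=0).fit(frames)
    for emotion, frames in train.items()
}

def classify(frames):
    """Pick the emotion whose GMM gives the highest mean log-likelihood."""
    scores = {emotion: model.score(frames) for emotion, model in models.items()}
    return max(scores, key=scores.get)

# Frames drawn near the "sadness" cluster should be labelled accordingly.
test_frames = rng.normal(loc=(3.0, 3.0), scale=0.5, size=(50, 2))
print(classify(test_frames))  # prints: sadness
```

The three-stage variant described above would chain three such classifiers, one per dimension (Activation, Evaluation, Dominance), each narrowing the candidate emotion set.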
|
64 |
Structural and functional neural networks underlying facial affect recognition impairment following traumatic brain injury. Rigon, Arianna. 01 August 2017 (has links)
Psychosocial problems are exceedingly common following moderate-to-severe traumatic brain injury (TBI), and are thought to be the major predictor of long-term functional outcome. However, current rehabilitation protocols have shown little success in improving interpersonal and social abilities of individuals with TBI, revealing a critical need for new and more effective treatments. Recent research has shown that neuro-modulatory treatments (e.g., non-invasive brain stimulation, lifestyle interventions) targeting the functionality of specific brain systems—as opposed to focusing on re-teaching individuals with TBI the impaired behaviors—hold the potential to succeed where past behavioral protocols have failed. However, in order to implement such treatments it is crucial to gain a better knowledge of the neural systems underlying impaired social functioning secondary to TBI.
It is well established that in TBI populations the inability to identify and interpret social cues, and in particular to recognize facial affect, is one of the factors driving impaired social functioning following TBI. The aims of the work described here were threefold: (1) to determine the degree of impairment in individuals with moderate-to-severe TBI on tasks measuring different sub-types of facial affect recognition skills, (2) to determine the relationship between white matter integrity and different facial affect recognition abilities in individuals with TBI by using diffusion tensor imaging, and (3) to determine the patterns of brain activation associated with facial affect recognition ability in individuals with TBI by using task-related functional magnetic resonance imaging (fMRI).
Our results revealed that individuals with TBI are impaired on both perceptual and verbal-categorization facial affect recognition tasks, although they are significantly more impaired on the latter. Moreover, performance on tasks tapping different types of emotion recognition ability showed distinct white-matter correlates: individuals with TBI with more extensive damage in the left inferior fronto-occipital fasciculus, uncinate fasciculus, and inferior longitudinal fasciculus were more likely to perform poorly on verbal categorization tasks. Lastly, our fMRI study suggests involvement of left dorsolateral prefrontal regions in the disruption of more perceptual emotion recognition skills, and involvement of the fusiform gyrus and ventromedial prefrontal cortex in more interpretative facial affect recognition deficits.
The findings presented here further our understanding of the neurobiological mechanisms underlying facial affect recognition impairment following TBI, and have the potential to inform the development of new and more effective treatments.
|
65 |
The Influence of Aging, Gaze Direction, and Context on Emotion Discrimination Performance. Minton, Alyssa Renee. 01 April 2019 (has links)
This study examined how younger and older adults differ in their ability to discriminate between pairs of emotions of varying degrees of similarity when presented with an averted or direct gaze in either a neutral, congruent, or incongruent emotional context. For Task 1, participants were presented with three blocks of emotion pairs (i.e., anger/disgust, sadness/disgust, and fear/disgust) and were asked to indicate which emotion was being expressed. The actors' gaze direction was manipulated such that emotional facial expressions were depicted with a direct gaze or an averted gaze. For Task 2, the same stimuli were placed into emotional contexts (e.g., evocative backgrounds and expressive body posture) that were either congruent or incongruent with the emotional facial expression. Participants made emotion discrimination judgments for two emotion pairings: anger/disgust (High Similarity condition) and fear/disgust (Low Similarity condition). Discrimination performance varied as a function of age, gaze direction, degree of similarity of emotion pairs, and the congruence of the context. Across tasks, performance was best when evaluating less similar emotion pairs and worst when evaluating more similar emotion pairs. In addition, evaluating emotion in stimuli with an averted gaze generally led to poorer performance than in stimuli with a direct gaze. These outcomes held for both age groups. When participants observed emotional facial expressions in the presence of congruent or incongruent emotional contexts, age differences in discrimination performance were most pronounced when the context did not support one's estimation of the emotion expressed by the actors.
|
66 |
Emotion Recognition from Eye Region Signals using Local Binary Patterns. Jain, Gaurav. 08 December 2011 (has links)
Automated facial expression analysis for Emotion Recognition (ER) is an active research area towards creating socially intelligent systems. The eye region, often considered integral for ER by psychologists and neuroscientists, has received very little attention in engineering and computer science. Using the eye region as an input signal presents several benefits for low-cost, non-intrusive ER applications.
This work proposes two frameworks for ER from eye region images. The first framework uses Local Binary Patterns (LBP) as the feature extractor on grayscale eye region images. The results validate the eye region as a significant contributor to communicating emotion in the face, achieving high person-dependent accuracy. The system also generalizes well across different environment conditions.
In the second proposed framework, a color-based approach to ER from the eye region is explored using Local Color Vector Binary Patterns (LCVBP). LCVBP extend the traditional LBP by incorporating color information, extracting a rich and highly discriminative feature set and thereby providing promising results.
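The basic LBP operator underlying the first framework can be sketched as follows: each interior pixel is compared with its eight neighbours, the comparisons form an 8-bit code, and the histogram of codes over the image serves as the texture feature vector. This is a minimal illustration under assumptions, not the thesis implementation, which would additionally involve eye-region detection, spatial gridding, and a classifier; the input here is a random fake crop.

```python
import numpy as np

def lbp_histogram(image):
    """Compute the basic 3x3 LBP code for each interior pixel and
    return the 256-bin histogram of codes."""
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left; each neighbour that is
    # >= the centre pixel contributes one bit to the 8-bit pattern.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy : img.shape[0] - 1 + dy,
                        1 + dx : img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist

eye = np.random.default_rng(1).integers(0, 256, size=(24, 48))  # fake eye-region crop
h = lbp_histogram(eye)
print(h.sum())  # one code per interior pixel: 22 * 46 = 1012
```

The LCVBP variant would apply a related coding to colour vectors rather than grayscale intensities, which is what lets it capture the extra discriminative information reported above.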
|
68 |
Facial Expression Discrimination in Adults Experiencing Posttraumatic Stress Symptoms. Lee, Brian N. 01 December 2011 (has links)
The present study examined the impact of posttraumatic stress symptoms (PTSS) on adults' ability to discriminate between various facial expressions of emotions. Additionally, the study examined whether individuals reporting PTSS exhibited an attentional bias toward threat-related facial expressions of emotions. The research design was a 2 (expression intensity) x 3 (emotional pairing) x 2 (PTSS group) mixed-model factorial design. Participants for the study were 89 undergraduates recruited from psychology courses at Western Kentucky University. Participants completed the Traumatic Stress Schedule to assess for prior exposure to traumatic events. A median split was used to divide the sample into two groups (i.e., low and high PTSS). Additionally, participants also completed a demographics questionnaire, the Impact of Events Scale-Revised, the Center for Epidemiological Studies Depression Scale, and the Depression Anxiety Stress Scales to assess for possible covariates. Then, participants completed the discrimination of facial expressions task and the dot probe position task. Results indicate that individuals experiencing high levels of PTSS have difficulty discriminating between threatening and non-threatening facial expressions of emotions, and this difficulty is exacerbated by comorbid anxiety symptoms. Furthermore, results suggest these individuals focus attention on threatening facial expressions while avoiding expressions that may activate memories associated with the prior trauma. These findings have significant clinical implications, as clinicians could focus treatment on correcting these difficulties, which should help promote more beneficial social interactions for individuals experiencing high levels of PTSS. Additionally, these behavioral measures could be used to assess the effectiveness of treatment.
Effective treatment should help alleviate these difficulties, which could be measured by improved performance on the discrimination of facial expressions task and the dot probe position task from baseline to post-treatment.
|
69 |
Långsammare igenkänning av emotioner i ansiktsuttryck hos individer med utmattningssyndrom: En pilotstudie (Slower recognition of emotions in facial expressions in individuals with exhaustion disorder: A pilot study). Löfdahl, Tomas; Wretman, Mattias. January 2012 (has links)
The aim of this pilot study was to generate hypotheses about whether and how exhaustion disorder affects the ability to recognize emotions in facial expressions. A group of patients with exhaustion disorder was compared with a matched healthy control group (N = 14). The groups were examined with a computer-based test consisting of color photographs of authentic facial expressions that changed gradually, in steps of 10%, from a neutral expression to one of the five basic emotions anger, disgust, fear, happiness, and sadness. Performance was measured in terms of recognition accuracy and response speed. The results showed that the patient group responded significantly more slowly than the control group across all emotions in the test. No emotion-specific differences, nor any differences in recognition accuracy, could be demonstrated between the groups. The reasons for the discrepancy in response speed were discussed in terms of four possible explanatory domains: face-perception function, visual attention, self-focused attention, and conscientiousness/worry. Future research was recommended to explore these areas further.
|
70 |
Perceived Parenting Styles, Emotion Recognition, and Emotion Regulation in Relation to Psychological Well-Being: Symptoms of Depression, Obsessive-Compulsive Disorder, and Social Anxiety. Aka, Turkuler B. 01 June 2011 (has links)
The purpose of the current study was to examine the paths from perceived parenting styles, emotion recognition, and emotion regulation to psychological well-being in terms of symptoms of depression, obsessive-compulsive disorder, and social anxiety. For this purpose, 530 adults (402 female, 128 male) between the ages of 18 and 36 (M = 22.09, SD = 2.78) participated in the study. The data were collected with a questionnaire battery including a Demographic Category Sheet, the Short-EMBU (Egna Minnen Beträffande Uppfostran, "My Memories of Upbringing"), the "Reading the Mind in the Eyes" Test (Revised), the Emotion Regulation Questionnaire, the Emotion Regulation Processes measure, the Beck Depression Inventory, the Liebowitz Social Anxiety Scale, the Maudsley Obsessive Compulsive Inventory, the White Bear Suppression Inventory, the Thought-Action Fusion Scale, and the Emotional Approach Coping Scale. The psychometric properties of the Emotion Regulation Questionnaire and the Emotion Regulation Processes measure were investigated and found to show good validity and reliability. Three sets of hierarchical multiple regression analyses were conducted to reveal the significant associates of psychological well-being. As expected, the results revealed that perceived parenting styles and different emotion regulation strategies and processes were associated with psychological well-being in terms of depression, obsessive-compulsive disorder, and social anxiety symptoms. The findings and their implications, with suggestions for future research and practice, were discussed in light of the relevant literature.
|