41

A Machine Learning Approach to EEG-Based Emotion Recognition

Jamil, Sara January 2018 (has links)
In recent years, emotion classification using electroencephalography (EEG) has attracted much attention with the rapid development of machine learning techniques and various applications of brain-computer interfacing. In this study, a general model for emotion recognition was created using a large dataset of 116 participants' EEG responses to happy and fearful videos. We compared discrete and dimensional emotion models, assessed various popular feature extraction methods, evaluated the efficacy of feature selection algorithms, and examined the performance of two classification algorithms. An average test accuracy of 76% was obtained using higher-order spectral features with a support vector machine for discrete emotion classification, and an accuracy of up to 79% was achieved on the subset of classifiable participants. Finally, the stability of EEG patterns in emotion recognition over time was examined by evaluating consistency across sessions. / Thesis / Master of Science (MSc)
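Below is a minimal sketch (not the thesis code) of the kind of pipeline this abstract describes: precomputed EEG features fed through feature selection into a support vector machine, with cross-validated accuracy. The feature matrix here is random placeholder data standing in for higher-order spectral features.

```python
# Illustrative sketch: classifying precomputed EEG features with an SVM,
# mirroring the pipeline described in the abstract. X and y are random
# placeholders for higher-order-spectral features and binary emotion labels.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # placeholder feature matrix (trials x features)
y = rng.integers(0, 2, size=200)  # placeholder labels (0 = fear, 1 = happy)

clf = make_pipeline(
    StandardScaler(),              # scale features before the SVM
    SelectKBest(f_classif, k=20),  # simple univariate feature selection
    SVC(kernel="rbf", C=1.0),      # RBF-kernel support vector machine
)
scores = cross_val_score(clf, X, y, cv=5)  # estimate test accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```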
42

Eye-Gaze Analysis of Facial Emotion Expression in Adolescents with ASD

Trubanova, Andrea 10 January 2016 (has links)
Prior research has shown that both emotion recognition and expression in children with Autism Spectrum Disorder (ASD) differ from those of typically developing children, and that these differences may contribute to observed social impairment. This study extends prior research in this area with an integrated examination of both expression and recognition of emotion, and an evaluation of the spontaneous generation of emotional expression in response to another person's emotion, a behavior that is characteristically deficient in ASD. The aim of this study was to assess eye-gaze patterns during scripted and spontaneous emotion expression tasks, and to assess the quality of emotional expression in relation to gaze patterns. Youth with ASD fixated less on the eye region of stimuli showing surprise (F(1,19.88) = 4.76, p = .04 for the spontaneous task; F(1,19.88) = 3.93, p = .06 for the recognition task), and they expressed emotion less clearly than the typically developing sample (F(1,35) = 6.38, p = .02) in the spontaneous task, but there was no significant group difference in the scripted task across emotions. Results do not, however, suggest altered eye gaze as a candidate mechanism for decreased ability to express an emotion. Findings from this research inform our understanding of the social difficulties associated with emotion recognition and expression deficits. / Master of Science
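As an illustration of the eye-tracking analysis behind results like these, here is a toy sketch of an area-of-interest (AOI) computation: the proportion of fixation time falling on an eye-region AOI, per participant. The column names and the AOI rectangle are assumptions, not taken from the thesis.

```python
# Toy sketch of an AOI analysis: proportion of fixation time inside an
# eye-region rectangle, per participant. All values are invented.
import pandas as pd

fixations = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "x": [310, 520, 300, 315],      # fixation coordinates in pixels
    "y": [200, 400, 210, 205],
    "duration_ms": [250, 180, 300, 220],
})
EYE_AOI = (250, 150, 400, 250)      # left, top, right, bottom (assumed)

in_aoi = fixations["x"].between(EYE_AOI[0], EYE_AOI[2]) & \
         fixations["y"].between(EYE_AOI[1], EYE_AOI[3])
prop = (fixations["duration_ms"].where(in_aoi, 0)
        .groupby(fixations["participant"]).sum()
        / fixations.groupby("participant")["duration_ms"].sum())
print(prop)  # proportion of fixation time on the eye region
```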
43

Attention Modification to Attenuate Facial Emotion Recognition Deficits in Children with ASD

Wieckowski, Andrea Trubanova 04 February 2019 (has links)
Prior studies have identified diminished attending to faces, and in particular the eye region, in individuals with Autism Spectrum Disorder (ASD), which may contribute to the impairments they experience with emotion recognition and expression. The current study evaluated the acceptability, feasibility, and preliminary effectiveness of an attention modification intervention designed to attenuate deficits in facial emotion recognition and expression in children with ASD. During the 10-session experimental treatment, children watched videos of people expressing different emotions, with the facial features highlighted to guide the children's attention. Eight of the nine children with ASD who began the treatment completed it. On average, the children and their parents rated the treatment as acceptable and helpful. Although treatment efficacy, in terms of improved facial emotion recognition (FER), was not apparent on task-based measures, children and their parents reported slight improvements, and most parents indicated decreased socioemotional problems following treatment. Results of this preliminary trial suggest that further clinical research on visual attention retraining for ASD, within an experimental therapeutic program, may be promising. / PHD / Previous studies have shown that individuals with Autism Spectrum Disorder (ASD) look less at faces, especially the eyes, which may contribute to the difficulties they have recognizing others' emotions and expressing their own. This study examined a new treatment designed to reduce these difficulties in children with ASD, assessing whether the treatment is feasible, acceptable to children and their parents, and successful in improving emotion recognition. During the 10-session treatment, children watched videos of people making different expressions; the actors' faces were highlighted to show the children the important areas to look at. Eight of the nine children who started the treatment completed it. On average, the children and their parents found the treatment acceptable and helpful. While the treatment did not improve the children's ability to recognize emotions on others' faces across several tasks, children and their parents reported slight improvements, and most parents reported fewer problems with social skills and with emotion recognition and expression after the treatment. These results suggest that further clinical research is warranted to evaluate the usefulness of such attention retraining for children with ASD.
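To make the stimulus-manipulation idea concrete, here is a hypothetical OpenCV sketch that overlays a translucent highlight on a facial region in each video frame, in the spirit of the treatment videos described. The file path and region coordinates are placeholders, and this is not the study's own stimulus pipeline.

```python
# Hypothetical sketch: draw a translucent highlight over a facial region
# in each frame of a video to guide a viewer's attention. The path and
# coordinates are placeholders, not the study's materials.
import cv2

cap = cv2.VideoCapture("expression_video.mp4")  # placeholder path
EYE_REGION = (220, 140, 420, 230)               # x1, y1, x2, y2 (assumed)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    overlay = frame.copy()
    x1, y1, x2, y2 = EYE_REGION
    cv2.rectangle(overlay, (x1, y1), (x2, y2), (0, 255, 255), -1)
    # blend the overlay with the frame so the highlight is translucent
    frame = cv2.addWeighted(overlay, 0.3, frame, 0.7, 0)
    cv2.imshow("highlighted", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```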
44

Secrets of a smile? Your gender and perhaps your biometric identity

Ugail, Hassan 11 June 2018 (has links)
No / With its numerous applications, automatic facial emotion recognition has recently become a very active area of research. Yet there has been little detailed study of the dynamic components of facial expressions. This article reviews research that shows gender is encoded in the dynamics of a smile, and how it may be possible to use the dynamic components of facial expressions as a form of biometric.
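As a sketch of what the "dynamic components" of a smile might look like computationally, the snippet below derives simple speed and duration descriptors from a per-frame lip-corner distance series (here synthetic; in practice it would come from a facial landmark tracker). These are illustrative features, not the authors' published measures.

```python
# Sketch of simple dynamic smile features: given per-frame lip-corner
# distances (synthetic here), compute speed and onset-duration descriptors
# of the kind that could feed a gender or biometric classifier.
import numpy as np

fps = 30.0
lip_corner_dist = np.array([40, 40, 41, 44, 48, 53, 57, 59, 60, 60],
                           dtype=float)   # pixels per frame (placeholder)

velocity = np.gradient(lip_corner_dist) * fps  # expansion rate in px/s
onset_frames = np.flatnonzero(velocity > 5)    # frames where the smile grows
features = {
    "max_speed_px_s": velocity.max(),
    "mean_speed_px_s": velocity[onset_frames].mean(),
    "onset_duration_s": len(onset_frames) / fps,
}
print(features)  # dynamic descriptors of the smile's onset
```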
45

Can adults with autism spectrum disorders infer what happened to someone from their emotional response?

Cassidy, S., Ropar, D., Mitchell, Peter, Chapman, P. 04 June 2020 (has links)
Yes / Can adults with autism spectrum disorders (ASD) infer what happened to someone from their emotional response? Millikan has argued that in everyday life, others' emotions are most commonly used to work out the antecedents of behavior, an ability termed retrodictive mindreading. As those with ASD show difficulties interpreting others' emotions, we predicted that these individuals would have difficulty with retrodictive mindreading. Sixteen adults with high-functioning autism or Asperger's syndrome and 19 typically developing adults viewed 21 video clips of people reacting to one of three gifts (chocolate, monopoly money, or a homemade novelty) and then inferred what gift the recipient received and the emotion expressed by that person. Participants' eye movements were recorded while they viewed the videos. Results showed that participants with ASD were only less accurate when inferring who received a chocolate or homemade gift. This difficulty was not due to lack of understanding what emotions were appropriate in response to each gift, as both groups gave consistent gift and emotion inferences significantly above chance (genuine positive for chocolate and feigned positive for homemade). Those with ASD did not look significantly less to the eyes of faces in the videos, and looking to the eyes did not correlate with accuracy on the task. These results suggest that those with ASD are less accurate when retrodicting events involving recognition of genuine and feigned positive emotions, and challenge claims that lack of attention to the eyes causes emotion recognition difficulties in ASD. / University of Nottingham, School of Psychology
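To illustrate the "significantly above chance" comparison, here is a small sketch testing whether a participant's gift inferences exceed the one-in-three chance level with a binomial test. The trial counts are invented; the study's actual statistics may differ.

```python
# Toy sketch of an above-chance test: with three possible gifts, chance
# accuracy is 1/3; a binomial test asks whether the observed number of
# correct inferences exceeds that. Counts are made up for illustration.
from scipy.stats import binomtest

n_trials, n_correct = 21, 14  # 21 video clips, 14 correct (assumed)
result = binomtest(n_correct, n_trials, p=1 / 3, alternative="greater")
print(f"p-value vs. chance: {result.pvalue:.4f}")
```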
46

Adults' and Children's Identification of Faces and Emotions from Isolated Motion Cues

Gonsiorowski, Anna 09 May 2016 (has links)
Faces communicate a wealth of information, including cues to others’ internal emotional states. Face processing is often studied using static stimuli; however, in real life, faces are dynamic. The current project examines face detection and emotion recognition from isolated motion cues. Across two studies, facial motion is presented in point-light displays (PLDs), in which moving white dots against a black screen correspond to dynamic regions of the face. In Study 1, adults were asked to identify the upright facial motion of five basic emotional expressions (e.g., surprise) and five neutral non-rigid movements (e.g., yawning) versus inverted and scrambled distractors. Prior work with static stimuli finds that certain cues, including the addition of motion information, the spatial arrangement of elements, and the emotional significance of stimuli affect face detection. This study found significant effects involving each of these factors using facial PLDs. Notably, face detection was most accurate in response to face-like arrangements, and motion information was useful in response to unusual point configurations. These results suggest that similar processes underlie the processing of static face images and isolated facial motion cues. In Study 2, children and adults were asked to match PLDs of emotional expressions to their corresponding labels (e.g., match a smiling PLD with the word “happy”). Prior work with face images finds that emotion recognition improves with age, but the developmental trajectory depends critically on the emotion to be recognized. Emotion recognition in response to PLDs improved with age, and there were different trajectories across the five emotions tested. Overall, this dissertation contributes to the understanding of the influence of motion information in face processing and emotion recognition, by demonstrating that there are similarities in how people process full-featured static faces and isolated facial motion cues in PLDs (which lack features). The finding that even young children can detect emotions from isolated facial motion indicates that features are not needed for making these types of social judgments. PLD stimuli hold promise for future interventions with atypically developing populations.
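For readers unfamiliar with point-light displays, the following sketch renders a toy PLD: white dots on a black background, animated over time with matplotlib. The dot trajectories are synthetic; real stimuli would be derived from motion capture of a face.

```python
# Minimal sketch of a point-light display (PLD): white dots on black whose
# positions follow motion trajectories. Trajectories here are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

n_dots, n_frames = 12, 60
rng = np.random.default_rng(1)
base = rng.uniform(-1, 1, size=(n_dots, 2))            # resting dot layout
wiggle = 0.05 * np.sin(np.linspace(0, 2 * np.pi, n_frames))

fig, ax = plt.subplots(facecolor="black")
ax.set_facecolor("black")
ax.set_xlim(-1.5, 1.5); ax.set_ylim(-1.5, 1.5); ax.axis("off")
dots, = ax.plot(base[:, 0], base[:, 1], "wo", markersize=6)

def update(i):
    dots.set_data(base[:, 0], base[:, 1] + wiggle[i])  # vertical motion
    return dots,

anim = FuncAnimation(fig, update, frames=n_frames, interval=33, blit=True)
plt.show()
```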
47

inHarmony: a Digital Twin for Emotional Well-being

Albraikan, Amani 24 May 2019 (has links)
A digital twin is an enabling technology that facilitates monitoring, understanding, and providing continuous feedback to improve quality of life and well-being. A digital twin can thus be considered a way to enhance one's mood and thereby improve quality of life and emotional well-being. However, there remains a long road ahead before such systems can be developed and deployed, because many elements and components can guide the design of a digital twin. This thesis provides a general discussion of the central elements of an emotional digital twin, including emotion detection, emotional biofeedback, and emotion-aware recommender systems. In the first part of this thesis, we propose and study emotion detection models and algorithms. For emotions, which are known to be highly user-dependent, improvements to the emotion learning algorithm can significantly boost its predictive power. We aimed to improve classifier accuracy using peripheral physiological signals. Here, we present a hybrid sensor fusion approach based on a stacking model that allows data from multiple sensors and emotion models to be jointly embedded within a user-independent model. In the second part of this thesis, we propose a real-time mobile biofeedback system that uses wearable sensors to depict five basic emotions and provides the user with emotional feedback. These systems apply the concept of live biofeedback through the introduction of an emotion-aware digital twin, and an essential element in them guides users through an emotion-regulation routine. The proposed systems aim to increase self-awareness through visual feedback and provide insight into the future design of digital twins. We focus on workplace environments, and the recommendations are based on human emotions and the regulation of emotion within the construct of emotional intelligence. The objective is to suggest coping techniques to users during emotional, stressful episodes based on their preferences, a history of what has worked well, and appropriateness to the context. The developed solution was evaluated in usability studies and compared extensively with related work. The results show its potential use as an emotional digital twin. In turn, the proposed solution provides significant insights that will guide future development of digital twins across several scenarios and settings.
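As an illustration of the stacking-based sensor fusion described above, here is a minimal scikit-learn sketch: base classifiers trained on jointly embedded features from multiple physiological signals, combined by a logistic-regression meta-learner. The feature arrays, sensor names, and emotion labels are synthetic stand-ins, not the thesis's data or exact architecture.

```python
# Illustrative sketch of stacking-based sensor fusion: base classifiers on
# jointly embedded multi-sensor features, combined by a meta-learner.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
X_hr  = rng.normal(size=(n, 8))   # heart-rate features (placeholder)
X_eda = rng.normal(size=(n, 6))   # electrodermal features (placeholder)
X = np.hstack([X_hr, X_eda])      # joint embedding of both sensors
y = rng.integers(0, 5, size=n)    # five basic emotion labels (placeholder)

fusion = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
)
fusion.fit(X, y)
print(fusion.predict(X[:5]))  # predicted emotion labels
```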
48

Facial emotion expression, recognition and production of varying intensity in the typical population and on the autism spectrum

Wingenbach, Tanja January 2016 (has links)
The current research project aimed to investigate facial emotion processing using specially developed and validated video stimuli of facial emotional expressions at varying levels of intensity. To this end, videos were developed showing real people expressing emotions in real time (anger, disgust, fear, sadness, surprise, happiness, contempt, embarrassment, and neutral) at different expression intensity levels (low, intermediate, high), called the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV). The ADFES-BIV was validated on all of its emotion and intensity categories. Sex differences in facial emotion recognition were investigated, and a female advantage over males was found, demonstrating that the ADFES-BIV is suitable for investigating group comparisons in facial emotion recognition in the general population. Facial emotion recognition from the ADFES-BIV was further investigated in a sample characterised by deficits in social functioning: individuals with an Autism Spectrum Disorder (ASD). A deficit in facial emotion recognition was found in ASD compared to controls, and error analysis revealed emotion-specific deficits in detecting emotional content from faces (sensitivity) alongside deficits in differentiating between emotions from faces (specificity). The ADFES-BIV was combined with facial electromyography (EMG) to investigate facial mimicry and the effects of proprioceptive feedback (from explicit imitation and blocked facial mimicry) on facial emotion recognition. Based on the reverse simulation model, it was predicted that facial mimicry would be an active component of the facial emotion recognition process. Experimental manipulations of face movements did not reveal an advantage of facial mimicry over the blocked facial mimicry condition. While no support was found for the reverse simulation model, enhanced proprioceptive feedback can facilitate or hinder recognition of facial emotions, in line with embodied cognition accounts.
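To clarify the sensitivity/specificity error analysis mentioned above, here is a small sketch computing per-emotion sensitivity (detecting an emotion when it is present) and specificity (not mislabeling other emotions as it) from a confusion matrix. The emotion labels and counts are invented for illustration.

```python
# Sketch of per-emotion error analysis: sensitivity and specificity
# computed from a confusion matrix. Labels and counts are made up.
import numpy as np

emotions = ["anger", "fear", "happiness"]
# rows = true emotion, columns = predicted emotion (toy counts)
cm = np.array([[18, 4, 2],
               [5, 14, 5],
               [1, 2, 21]])

total = cm.sum()
for i, emo in enumerate(emotions):
    tp = cm[i, i]                 # correctly detected
    fn = cm[i].sum() - tp         # missed instances of this emotion
    fp = cm[:, i].sum() - tp      # other emotions mislabeled as this one
    tn = total - tp - fn - fp
    print(f"{emo}: sensitivity={tp / (tp + fn):.2f}, "
          f"specificity={tn / (tn + fp):.2f}")
```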
49

Analysis of speech signals for emotion recognition

Rafael Iriya 07 July 2014 (has links)
This work is motivated by the growing importance of automatic emotion recognition, especially from speech signals, and its applications in human-machine interaction systems. In this context, the emotions Happiness, Fear, Disgust, Anger, Boredom, and Sadness, plus the Neutral state, are selected for study; these are generally considered essential as a basic set of emotions. Several topics related to emotion recognition from speech are investigated, covering speech features such as fundamental frequency (pitch), short-term energy, formants, and cepstral coefficients, as well as classification techniques involving pattern recognition and statistical methods, including K-Nearest Neighbours (KNN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Hidden Markov Models (HMM); GMM is adopted as the main technique for its computational cost and performance. A single-stage identification system is developed that outperforms several systems in the literature, with a recognition rate of up to 74.86%. In addition, drawing on psychology and emotion theory, the notion of an emotion space and its dimensions is incorporated to develop a sequential three-stage classification system that classifies along the Activation, Evaluation, and Dominance dimensions. This system outperforms the single-stage classifier, with a recognition rate of up to 82.41%, although a caveat of the three-stage design is identified: it can have difficulty identifying emotions that show a low recognition rate at any one stage. Since few state-of-the-art systems address emotion verification, a system is also developed for this task, achieving perfect recognition for the Anger, Neutral, Boredom, and Sadness emotions. Finally, a hybrid system is developed to handle verification and identification in sequence; it attempts to resolve the three-stage classifier's weakness and achieves a recognition rate of up to 83%.
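A minimal sketch of the GMM identification approach the abstract describes: fit one Gaussian mixture per emotion on that emotion's feature vectors (e.g., MFCC-style features), then label a new sample by the model with the highest log-likelihood. All data here are synthetic.

```python
# Minimal sketch of GMM-based emotion identification: one mixture model
# per emotion, classification by maximum log-likelihood. Data is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
emotions = ["anger", "happiness", "sadness"]
train = {e: rng.normal(loc=i, size=(200, 13))   # fake MFCC-like features
         for i, e in enumerate(emotions)}

models = {e: GaussianMixture(n_components=4, random_state=0).fit(X)
          for e, X in train.items()}

def identify(x):
    """Return the emotion whose GMM scores the sample highest."""
    scores = {e: m.score(x.reshape(1, -1)) for e, m in models.items()}
    return max(scores, key=scores.get)

print(identify(rng.normal(loc=2, size=13)))  # likely "sadness"
```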
50

Structural and functional neural networks underlying facial affect recognition impairment following traumatic brain injury

Rigon, Arianna 01 August 2017 (has links)
Psychosocial problems are exceedingly common following moderate-to-severe traumatic brain injury (TBI) and are thought to be the major predictor of long-term functional outcome. However, current rehabilitation protocols have shown little success in improving the interpersonal and social abilities of individuals with TBI, revealing a critical need for new and more effective treatments. Recent research has shown that neuro-modulatory treatments (e.g., non-invasive brain stimulation, lifestyle interventions) targeting the functionality of specific brain systems, as opposed to re-teaching individuals with TBI the impaired behaviors, hold the potential to succeed where past behavioral protocols have failed. In order to implement such treatments, however, it is crucial to gain a better understanding of the neural systems underlying social functioning after TBI. It is well established that in TBI populations the inability to identify and interpret social cues, and in particular to recognize facial affect, is one of the factors driving impaired social functioning. The aims of the work described here were threefold: (1) to determine the degree of impairment of individuals with moderate-to-severe TBI on tasks measuring different sub-types of facial affect recognition skill; (2) to determine the relationship between white-matter integrity and facial affect recognition ability in individuals with TBI using diffusion tensor imaging; and (3) to determine the patterns of brain activation associated with facial affect recognition ability in individuals with TBI using task-related functional magnetic resonance imaging (fMRI). Our results revealed that individuals with TBI are impaired on both perceptual and verbal-categorization facial affect recognition tasks, though significantly more so on the latter. Moreover, performance on tasks tapping different types of emotion recognition ability showed distinct white-matter neural correlates: individuals with TBI with more extensive damage in the left inferior fronto-occipital fasciculus, uncinate fasciculus, and inferior longitudinal fasciculus were more likely to perform poorly on verbal categorization tasks. Lastly, our functional MRI study suggests involvement of left dorsolateral prefrontal regions in the disruption of more perceptual emotion recognition skills, and involvement of the fusiform gyrus and the ventromedial prefrontal cortex in more interpretative facial affect recognition deficits. The findings presented here further our understanding of the neurobiological mechanisms underlying facial affect impairment following TBI and have the potential to inform the development of new and more effective treatments.
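As a sketch of the kind of brain-behavior analysis summarized here, the snippet below correlates per-participant fractional anisotropy (FA) in a white-matter tract with task accuracy. Both arrays are synthetic placeholders, not study data.

```python
# Illustrative sketch: correlating white-matter integrity (fractional
# anisotropy, FA) in a tract with facial affect recognition accuracy.
# Values are synthetic stand-ins for DTI-derived measures.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fa = rng.uniform(0.3, 0.6, size=30)  # per-participant tract FA (placeholder)
accuracy = 0.5 + 0.8 * (fa - 0.45) + rng.normal(0, 0.05, size=30)

r, p = pearsonr(fa, accuracy)
print(f"r = {r:.2f}, p = {p:.4f}")   # brain-behavior association
```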
