51. A Machine Learning Approach to EEG-Based Emotion Recognition / Jamil, Sara / January 2018
In recent years, emotion classification using electroencephalography (EEG) has attracted much attention with the rapid development of machine learning techniques and various applications of brain-computer interfacing. In this study, a general model for emotion recognition was created using a large dataset of 116 participants' EEG responses to happy and fearful videos. We compared discrete and dimensional emotion models, assessed various popular feature extraction methods, evaluated the efficacy of feature selection algorithms, and examined the performance of two classification algorithms. An average test accuracy of 76% was obtained using higher-order spectral features with a support vector machine for discrete emotion classification. An accuracy of up to 79% was achieved on the subset of classifiable participants. Finally, the stability of EEG patterns in emotion recognition was examined over time by evaluating consistency across sessions. / Thesis / Master of Science (MSc)
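A minimal sketch of the pipeline this abstract outlines (spectral feature extraction, feature selection, and SVM classification) could look like the following Python; the synthetic data, feature count, and parameter choices are illustrative assumptions, not the thesis's actual configuration.

```python
# Illustrative sketch only: spectral features -> feature selection -> SVM.
# Data, feature count, and parameters are placeholder assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(116, 64))    # 116 participants x 64 spectral features (synthetic)
y = rng.integers(0, 2, size=116)  # 0 = fearful video, 1 = happy video (synthetic)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),  # feature selection step
    SVC(kernel="rbf", C=1.0),      # support vector machine classifier
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With real higher-order spectral features in place of the synthetic matrix, the cross-validated accuracy would correspond to the 76% figure reported above.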

52. Eye-Gaze Analysis of Facial Emotion Expression in Adolescents with ASD / Trubanova, Andrea / 10 January 2016
Prior research has shown that both emotion recognition and expression in children with Autism Spectrum Disorder (ASD) differ from those of typically developing children, and that these differences may contribute to observed social impairment. This study extends prior research in this area with an integrated examination of both expression and recognition of emotion, and an evaluation of the spontaneous generation of emotional expression in response to another person's emotion, a behavior that is characteristically deficient in ASD. The aim of this study was to assess eye gaze patterns during scripted and spontaneous emotion expression tasks, and to assess the quality of emotional expression in relation to gaze patterns. Youth with ASD fixated less on the eye region of stimuli showing surprise (F(1,19.88) = 4.76, p = .04 for the spontaneous task; F(1,19.88) = 3.93, p = .06 for the recognition task), and they expressed emotion less clearly than the typically developing sample (F(1, 35) = 6.38, p = .02) in the spontaneous task, but there was no significant group difference in the scripted task across the emotions. Results do not, however, suggest altered eye gaze as a candidate mechanism for decreased ability to express an emotion. Findings from this research inform our understanding of the social difficulties associated with emotion recognition and expression deficits. / Master of Science

53. Attention Modification to Attenuate Facial Emotion Recognition Deficits in Children with ASD / Wieckowski, Andrea Trubanova / 04 February 2019
Prior studies have identified diminished attending to faces, and in particular the eye region, in individuals with Autism Spectrum Disorder (ASD), which may contribute to the impairments they experience with emotion recognition and expression. The current study evaluated the acceptability, feasibility, and preliminary effectiveness of an attention modification intervention designed to attenuate deficits in facial emotion recognition and expression in children with ASD. During the 10-session experimental treatment, children watched videos of people expressing different emotions with the facial features highlighted to guide children's attention. Eight of the nine children with ASD who began the treatment completed it. On average, the children and their parents rated the treatment as acceptable and helpful. Although treatment efficacy, in terms of improved facial emotion recognition (FER), was not apparent on task-based measures, children and their parents reported slight improvements, and most parents indicated decreased socioemotional problems following treatment. Results of this preliminary trial suggest that further clinical research on visual attention retraining for ASD, within an experimental therapeutic program, may be promising. / PHD / Previous studies have shown that individuals with Autism Spectrum Disorder (ASD) look less at faces, especially the eyes, which may contribute to the difficulties they show in recognizing others' emotions and expressing their own. This study examined a new treatment designed to reduce the difficulties in emotion recognition and expression in children with ASD. The study looked at whether the treatment is feasible, acceptable to children and their parents, and successful in decreasing difficulty with emotion recognition. During the 10-session treatment, children watched videos of people making different expressions. The faces of the actors in the videos were highlighted to show the children the important areas to look at. Eight of the nine children with ASD who started the treatment completed it. On average, the children and their parents said that the treatment was acceptable and helpful. While the treatment did not improve the children's ability to recognize emotions on others' faces across several tasks, children and their parents reported slight improvements. In addition, most parents reported fewer problems with social skills and emotion recognition and expression after the treatment. These results suggest that more clinical research may be needed to evaluate the usefulness of such attention retraining for children with ASD.

54. Determining Correlation Between Video Stimulus and Electrodermal Activity / Tasooji, Reza / 06 August 2018
With the growth of wearable devices capable of measuring physiological signals, affective computing is becoming more popular than ever. One such physiological signal is the electrodermal activity (EDA) signal. We explore how video stimuli that might arouse fear affect the EDA signal. To better understand the EDA signal, two different mediums, a scene from a movie and a scene from a video game, were selected to arouse fear.
We conducted a user study with 20 participants, analyzed the differences between the two mediums, and propose a method capable of detecting the highlights of the stimulus using only EDA signals. The study results show no significant differences between the two mediums, except that users were more engaged with the content of the video game. From the gathered data, we propose a similarity measurement method for clustering users based on how similarly they reacted to different highlights. The results show that for a 300-second stimulus, using a window size of 10 seconds, our approach for detecting highlights of the stimulus achieves a precision of 1.0 for both mediums, and F1 scores of 0.85 and 0.84 for the movie and the video game, respectively. / Master of Science / In this work, we explore different approaches to analyzing and clustering EDA signals. Two different mediums, a scene from a movie and a scene from a video game, were selected to arouse fear.
By conducting a user study with 20 participants, we analyzed the differences between the two mediums and propose a method capable of detecting highlights of the video clip using only EDA signals. The results of the study show no significant differences between the two mediums, except that users were more engaged with the content of the video game. From the gathered data, we propose a similarity measurement method for clustering users based on how similarly they reacted to different highlights.
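Under the stated parameters (a 300-second stimulus and 10-second windows), a highlight detector and user-similarity measure of the kind described might be sketched as follows; the sampling rate, thresholding rule, and Jaccard similarity here are assumptions for illustration, not necessarily the thesis's exact method.

```python
# Sketch: windowed EDA highlight detection plus a user-similarity measure.
# Sampling rate, threshold, and similarity metric are assumed choices.
import numpy as np

FS = 4         # assumed EDA sampling rate in Hz (typical for wearables)
WINDOW_S = 10  # window size from the abstract
STIM_S = 300   # stimulus length from the abstract

def highlight_windows(eda, fs=FS, window_s=WINDOW_S, z=1.0):
    """Return indices of windows whose EDA rise exceeds z std devs above the mean."""
    win = fs * window_s
    n = len(eda) // win
    # per-window increase in EDA level (end of window minus start of window)
    deltas = np.array([eda[(i + 1) * win - 1] - eda[i * win] for i in range(n)])
    return set(np.where(deltas > deltas.mean() + z * deltas.std())[0])

def user_similarity(h_a, h_b):
    """Jaccard overlap of two users' highlight-window sets."""
    if not (h_a | h_b):
        return 0.0
    return len(h_a & h_b) / len(h_a | h_b)

# synthetic signals for two users, each with a phasic rise near t = 150 s
t = np.arange(STIM_S * FS) / FS
rng = np.random.default_rng(1)
eda_a = np.cumsum(rng.normal(0, 0.01, t.size)) + np.exp(-((t - 150) / 10) ** 2)
eda_b = np.cumsum(rng.normal(0, 0.01, t.size)) + np.exp(-((t - 155) / 10) ** 2)
print(user_similarity(highlight_windows(eda_a), highlight_windows(eda_b)))
```

Users whose highlight sets overlap strongly would then be grouped together by any standard clustering over this pairwise similarity.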

55. Secrets of a smile? Your gender and perhaps your biometric identity / Ugail, Hassan / 11 June 2018
With its numerous applications, automatic facial emotion recognition has recently become a very active area of research. Yet there has been little detailed study of the dynamic components of facial expressions. This article reviews research showing that gender is encoded in the dynamics of a smile, and that it may be possible to use the dynamic components of facial expressions as a form of biometric.

56. Mechanisms of Empathic Behavior in Children with Callous-Unemotional Traits: Eye Gaze and Emotion Recognition / Delk, Lauren Annabel / 06 December 2016
The presence of callous-unemotional (CU) traits (e.g., shallow affect, lack of empathy) in children predicts reduced prosocial behavior. Similarly, CU traits relate to emotion recognition deficits, which may be related to deficits in visual attention to the eye region of others. Notably, recognition of others' distress necessarily precedes sympathy, and sympathy is a key predictor of prosocial outcomes. Thus, visual attention and emotion recognition may mediate the relationship between CU traits and deficient prosocial behavior. Elucidating these connections furthers the development of treatment protocols for children with behavioral problems and CU traits. This study seeks to: (1) extend this research to younger children, including girls; (2) measure eye gaze using infrared eye-tracking technology; and (3) test the hypothesis that CU traits are linked to prosocial behavior deficits via reduced eye gaze, which in turn leads to deficits in fear recognition. Children (n = 81, ages 6-9) completed a computerized, eye-tracked emotion recognition task and a standardized prosocial behavior task while parents reported on the children's CU traits. Results partially supported hypotheses, in that CU traits predicted less time focusing on the eye region for fear expressions, and certain dimensions of eye gaze predicted accuracy in recognizing some emotions. However, the full model was not supported for fear or distress expressions. Conversely, there was some evidence that the link between CU traits and deficient prosocial behavior is mediated by reduced recognition of low-intensity happy expressions, but only in girls. Theoretical and practical implications for these findings are considered. / Master of Science / Callous-unemotional (CU) traits are defined as experiencing limited emotion and empathy for others. Children with CU traits are less likely to show prosocial behavior, such as sharing or helping others. Similarly, children with CU traits also have more difficulty recognizing emotions compared to peers. This may be related to less attention to the eye region of others' faces, which conveys emotional information. Notably, accurate recognition of others' distress is necessary for children to feel concern for others and want to engage in prosocial behavior. Elucidating these connections furthers the development of interventions for children with CU traits, which often relate to behavior problems. This study seeks to: (1) extend this research to younger children, including girls; (2) measure eye gaze using eye-tracking technology; and (3) test the hypothesis that CU traits predict prosocial behavior deficits due to reduced eye gaze and subsequent deficits in fear and distress recognition. While this hypothesis was not fully supported, results did indicate that CU traits predicted less time focusing on the eye region for fear expressions, and certain forms of eye gaze predicted better emotion recognition accuracy for some emotions. Instead, results indicated that eye gaze and recognition of subtle happy expressions played a role in linking CU traits and prosocial behavior, but only in girls. Results suggest that CU traits relate to less attention to the eye region and poorer emotion recognition across emotions, and that these mechanisms may operate differently in boys and girls.

57. Can adults with autism spectrum disorders infer what happened to someone from their emotional response? / Cassidy, S., Ropar, D., Mitchell, Peter, Chapman, P. / 04 June 2020
Can adults with autism spectrum disorders (ASD) infer what happened to someone from their emotional response? Millikan has argued that in everyday life, others' emotions are most commonly used to work out the antecedents of behavior, an ability termed retrodictive mindreading. As those with ASD show difficulties interpreting others' emotions, we predicted that these individuals would have difficulty with retrodictive mindreading. Sixteen adults with high-functioning autism or Asperger's syndrome and 19 typically developing adults viewed 21 video clips of people reacting to one of three gifts (chocolate, monopoly money, or a homemade novelty) and then inferred what gift the recipient received and the emotion expressed by that person. Participants' eye movements were recorded while they viewed the videos. Results showed that participants with ASD were less accurate only when inferring who received a chocolate or homemade gift. This difficulty was not due to a lack of understanding of which emotions were appropriate in response to each gift, as both groups gave consistent gift and emotion inferences significantly above chance (genuine positive for chocolate and feigned positive for homemade). Those with ASD did not look significantly less at the eyes of faces in the videos, and looking at the eyes did not correlate with accuracy on the task. These results suggest that those with ASD are less accurate when retrodicting events involving recognition of genuine and feigned positive emotions, and challenge claims that lack of attention to the eyes causes emotion recognition difficulties in ASD. / University of Nottingham, School of Psychology

58. Adults' and Children's Identification of Faces and Emotions from Isolated Motion Cues / Gonsiorowski, Anna / 09 May 2016
Faces communicate a wealth of information, including cues to others’ internal emotional states. Face processing is often studied using static stimuli; however, in real life, faces are dynamic. The current project examines face detection and emotion recognition from isolated motion cues. Across two studies, facial motion is presented in point-light displays (PLDs), in which moving white dots against a black screen correspond to dynamic regions of the face.
In Study 1, adults were asked to identify the upright facial motion of five basic emotional expressions (e.g., surprise) and five neutral non-rigid movements (e.g., yawning) versus inverted and scrambled distractors. Prior work with static stimuli finds that certain cues, including the addition of motion information, the spatial arrangement of elements, and the emotional significance of stimuli, affect face detection. This study found significant effects involving each of these factors using facial PLDs. Notably, face detection was most accurate in response to face-like arrangements, and motion information was useful in response to unusual point configurations. These results suggest that similar processes underlie the processing of static face images and isolated facial motion cues.
In Study 2, children and adults were asked to match PLDs of emotional expressions to their corresponding labels (e.g., match a smiling PLD with the word “happy”). Prior work with face images finds that emotion recognition improves with age, but the developmental trajectory depends critically on the emotion to be recognized. Emotion recognition in response to PLDs improved with age, and there were different trajectories across the five emotions tested.
Overall, this dissertation contributes to the understanding of the influence of motion information in face processing and emotion recognition, by demonstrating that there are similarities in how people process full-featured static faces and isolated facial motion cues in PLDs (which lack features). The finding that even young children can detect emotions from isolated facial motion indicates that features are not needed for making these types of social judgments. PLD stimuli hold promise for future interventions with atypically developing populations.

59. Emotion Recognition and Social Functioning in Children With and Without Attention Deficit Hyperactivity Disorder / Aldea, Rebecca Flake / 01 January 2013
This study examined the emotion recognition of children (ages 7-9 years) with and without Attention Deficit Hyperactivity Disorder (ADHD). Children completed two emotion recognition measures, the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA2) and the Child and Adolescent Social Perception measure (CASP). Children and their parents also completed an assessment of children's social skills, the Social Skills Improvement System (SSIS). Children with ADHD reported a significantly greater level of depressive symptoms and had significantly lower full-scale IQ scores than children without ADHD. When these differences were accounted for, children with ADHD continued to show a handful of deficits in emotion recognition. They demonstrated difficulties on the DANVA2 with specific emotions, fear and sadness. On the CASP, children with ADHD made significantly more errors than children without ADHD due to a tendency to make up information to explain how they were able to identify feelings. Children's performance on the emotion recognition measures did not significantly mediate the relation between their diagnostic status and social skills (as rated by parents). In summary, additional evidence was found for the deficits in emotion recognition experienced by children with ADHD; however, further work is needed to determine whether these deficits relate to the peer difficulties experienced by these children.

60. inHarmony: a Digital Twin for Emotional Well-being / Albraikan, Amani / 24 May 2019
A digital twin is an enabling technology that facilitates monitoring, understanding, and providing continuous feedback to improve quality of life and well-being. Thus, a digital twin can be considered a solution for enhancing one's mood to improve quality of life and emotional well-being. However, there remains a long road ahead before such systems can be fully developed and deployed, because many elements and components can guide the design of a digital twin.
This thesis provides a general discussion of the central elements of an emotional digital twin, including emotion detection, emotional biofeedback, and emotion-aware recommender systems. In the first part of this thesis, we propose and study emotion detection models and algorithms. For emotions, which are known to be highly user-dependent, improvements to the emotion learning algorithm can significantly boost its predictive power. We aimed to improve the accuracy of the classifier using peripheral physiological signals. Here, we present a hybrid sensor fusion approach based on a stacking model that allows data from multiple sensors and emotion models to be jointly embedded within a user-independent model.
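A hedged sketch of such a stacking-based fusion model, using scikit-learn's StackingClassifier with assumed base learners, an assumed meta-learner, and synthetic multi-sensor features, might look like this; it illustrates the general technique, not the thesis's actual architecture.

```python
# Sketch: stacking-based fusion of multiple physiological feature sets.
# Base learners, meta-learner, and synthetic features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# synthetic concatenated features from several peripheral sensors,
# e.g., EDA, heart rate, and skin temperature channels
X = rng.normal(size=(200, 30))
y = rng.integers(0, 5, size=200)  # five emotion classes (assumed)

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,  # out-of-fold predictions feed the meta-learner
)
print(cross_val_score(stack, X, y, cv=3).mean())
```

The meta-learner sees each base model's out-of-fold predictions, which is what allows heterogeneous sensors and emotion models to be embedded jointly in one user-independent classifier.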
In the second part of this thesis, we propose a real-time mobile biofeedback system that uses wearable sensors to detect five basic emotions and provides the user with emotional feedback. These systems apply the concept of Live Biofeedback through the introduction of an emotion-aware digital twin. An essential element of these systems guides users through an emotion-regulation routine. The proposed systems aim to increase self-awareness through visual feedback and provide insight into the future design of digital twins. We focus on workplace environments, and the recommendations are based on human emotions and the regulation of emotion within the construct of emotional intelligence. The objective is to suggest coping techniques to a user during an emotional, stressful episode based on his or her preferences, history of what has worked well, and appropriateness for the context.
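As a toy illustration of the preference- and history-weighted recommendation described here, the following sketch invents technique names, weights, and a simple additive score; it is not the thesis's actual system.

```python
# Toy sketch of a coping-technique recommender; all names and scoring
# rules are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CopingRecommender:
    preferences: dict                             # technique -> preference in [0, 1]
    history: dict = field(default_factory=dict)   # technique -> past success rate

    def recommend(self, context_ok):
        """Rank techniques allowed in this context by preference plus history."""
        candidates = [t for t in self.preferences if t in context_ok]
        return max(candidates,
                   key=lambda t: self.preferences[t] + self.history.get(t, 0.5))

rec = CopingRecommender(
    preferences={"deep breathing": 0.9, "short walk": 0.6, "music": 0.8},
    history={"music": 0.7, "short walk": 0.9},
)
# at a desk in the workplace, a walk may not be contextually appropriate
print(rec.recommend(context_ok={"deep breathing", "music"}))
```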
The developed solution has been evaluated through usability studies and extensively compared to related work. The obtained results show its potential use as an emotional digital twin. In turn, the proposed solution provides significant insights that will guide future development of digital twins across several scenarios and settings.