About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

A COMPARISON STUDY BETWEEN RESULTS OF 3D VIRTUAL FACIAL ANIMATION METHODS: SKELETON, BLENDSHAPE, AUDIO-DRIVEN TECHNIQUE, AND VISION-BASED CAPTURE

Mingzhu Wei (13158648) 27 July 2022 (has links)
In this paper, the authors explore different approaches to animating 3D facial emotions, some of which use manual keyframe facial animation and some of which use machine learning. To compare approaches, the authors conducted an experiment consisting of side-by-side comparisons of animation clips generated by skeleton, blendshape, audio-driven, and vision-based capture techniques.

Ninety-five participants viewed twenty facial animation clips of characters expressing five distinct emotions (anger, sadness, happiness, fear, neutral), created using the four facial animation techniques. After viewing each clip, participants were asked to score its naturalness on a 5-point Likert scale and to identify the emotion the character appeared to be conveying.

Naturalness ratings for the happy emotion were largely consistent across the four methods, differing only slightly. The fear emotion created with skeletal animation was rated more natural than with the other methods. Recognition of sadness and neutral was very low for all methods compared to the other emotions. Findings also showed that few participants were able to identify which clips were machine generated rather than created by a human artist. The means, boxplots, and Tukey's HSD tests revealed that the skeleton approach had significantly higher naturalness ratings and a higher recognition rate than the other methods.
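As an illustration of the statistical comparison this abstract describes (means, boxplots, and Tukey's HSD across the four techniques), here is a minimal sketch assuming a long-format table of Likert ratings; the column names and toy values are hypothetical stand-ins, not the study's data.

```python
# Illustrative Tukey HSD on naturalness ratings across four animation
# methods; values are toy stand-ins for the 5-point Likert scores.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ratings = pd.DataFrame({
    "method": ["skeleton", "blendshape", "audio", "vision"] * 3,
    "naturalness": [4, 3, 3, 2, 5, 3, 2, 3, 4, 4, 3, 2],
})

# Pairwise comparisons with the family-wise error rate held at 0.05
result = pairwise_tukeyhsd(endog=ratings["naturalness"],
                           groups=ratings["method"], alpha=0.05)
print(result.summary())
```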
52

A Machine Learning Approach to EEG-Based Emotion Recognition

Jamil, Sara January 2018 (has links)
In recent years, emotion classification using electroencephalography (EEG) has attracted much attention with the rapid development of machine learning techniques and various applications of brain-computer interfacing. In this study, a general model for emotion recognition was created using a large dataset of 116 participants' EEG responses to happy and fearful videos. We compared discrete and dimensional emotion models, assessed various popular feature extraction methods, evaluated the efficacy of feature selection algorithms, and examined the performance of two classification algorithms. An average test accuracy of 76% was obtained using higher-order spectral features with a support vector machine for discrete emotion classification. An accuracy of up to 79% was achieved on the subset of classifiable participants. Finally, the stability of EEG patterns in emotion recognition was examined over time by evaluating consistency across sessions. / Thesis / Master of Science (MSc)
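A minimal sketch of the kind of pipeline this abstract describes: feature selection feeding a support vector machine, evaluated by cross-validation. The feature matrix, labels, and parameter choices below are placeholders, not the thesis's actual features or settings.

```python
# Sketch: EEG emotion classification with feature selection + SVM.
# X would hold per-trial EEG features (e.g., higher-order spectral
# features); y holds discrete emotion labels (happy vs. fearful).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(116, 200))   # 116 participants x 200 features (toy)
y = rng.integers(0, 2, size=116)  # 0 = fearful, 1 = happy (toy labels)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),  # keep the 50 most discriminative features
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```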
53

Mechanisms of Empathic Behavior in Children with Callous-Unemotional Traits: Eye Gaze and Emotion Recognition

Delk, Lauren Annabel 06 December 2016 (has links)
The presence of callous-unemotional (CU) traits (e.g., shallow affect, lack of empathy) in children predicts reduced prosocial behavior. Similarly, CU traits relate to emotion recognition deficits, which may be related to deficits in visual attention to the eye region of others. Notably, recognition of others' distress necessarily precedes sympathy, and sympathy is a key predictor in prosocial outcomes. Thus, visual attention and emotion recognition may mediate the relationship between CU traits and deficient prosocial behavior. Elucidating these connections furthers the development of treatment protocols for children with behavioral problems and CU traits. This study seeks to: (1) extend this research to younger children, including girls; (2) measure eye gaze using infrared eye-tracking technology; and (3) test the hypothesis that CU traits are linked to prosocial behavior deficits via reduced eye gaze, which in turn leads to deficits in fear recognition. Children (n = 81, ages 6-9) completed a computerized, eye-tracked emotion recognition task and a standardized prosocial behavior task while parents reported on the children's CU traits. Results partially supported hypotheses, in that CU traits predicted less time focusing on the eye region for fear expressions, and certain dimensions of eye gaze predicted accuracy in recognizing some emotions. However, the full model was not supported for fear or distress expressions. Conversely, there was some evidence that the link between CU traits and deficient prosocial behavior is mediated by reduced recognition for low intensity happy expressions, but only in girls. Theoretical and practical implications for these findings are considered. / Master of Science
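The mediation hypothesis here (CU traits → reduced eye gaze → recognition deficits → less prosocial behavior) can be illustrated with a simple regression-based indirect-effect check; the thesis's actual analysis was presumably more formal (e.g., bootstrapped mediation), and all variables below are simulated.

```python
# Sketch: a simple regression-based mediation check
# (CU traits -> eye gaze to fear expressions -> prosocial behavior).
# Variable names and simulated data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 81
cu = rng.normal(size=n)                      # parent-reported CU traits
gaze = -0.5 * cu + rng.normal(size=n)        # dwell time on the eye region
prosocial = 0.4 * gaze + rng.normal(size=n)  # prosocial task score

# Path a: CU traits -> eye gaze
a = sm.OLS(gaze, sm.add_constant(cu)).fit().params[1]
# Path b: eye gaze -> prosocial behavior, controlling for CU traits
b = sm.OLS(prosocial,
           sm.add_constant(np.column_stack([gaze, cu]))).fit().params[1]
print(f"indirect effect (a*b): {a * b:.3f}")
```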
54

Eye-Gaze Analysis of Facial Emotion Expression in Adolescents with ASD

Trubanova, Andrea 10 January 2016 (has links)
Prior research has shown that both emotion recognition and expression in children with Autism Spectrum Disorder (ASD) differ from those of typically developing children, and that these differences may contribute to observed social impairment. This study extends prior research in this area with an integrated examination of both expression and recognition of emotion, and an evaluation of spontaneous generation of emotional expression in response to another person's emotion, a behavior that is characteristically deficient in ASD. The aim of this study was to assess eye gaze patterns during scripted and spontaneous emotion expression tasks, and to assess quality of emotional expression in relation to gaze patterns. Youth with ASD fixated less on the eye region of stimuli showing surprise (F(1, 19.88) = 4.76, p = .04 for the spontaneous task; F(1, 19.88) = 3.93, p = .06 for the recognition task), and they expressed emotion less clearly than the typically developing sample (F(1, 35) = 6.38, p = .02) in the spontaneous task, but there was no significant group difference in the scripted task across the emotions. Results do not, however, suggest altered eye gaze as a candidate mechanism for decreased ability to express an emotion. Findings from this research inform our understanding of the social difficulties associated with emotion recognition and expression deficits. / Master of Science
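A sketch of the underlying gaze measure, the share of fixation time falling inside an eye-region area of interest (AOI); the fixation records and AOI bounds below are hypothetical, and a real analysis would use the eye tracker's fixation detection output per stimulus.

```python
# Sketch: proportion of fixation time inside an eye-region AOI.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float         # screen coordinates (pixels)
    y: float
    duration: float  # milliseconds

EYE_AOI = (300, 200, 700, 320)  # (x_min, y_min, x_max, y_max), toy bounds

def eye_region_proportion(fixations):
    """Share of total fixation duration landing in the eye-region AOI."""
    x0, y0, x1, y1 = EYE_AOI
    in_aoi = sum(f.duration for f in fixations
                 if x0 <= f.x <= x1 and y0 <= f.y <= y1)
    total = sum(f.duration for f in fixations)
    return in_aoi / total if total else 0.0

print(eye_region_proportion([Fixation(400, 250, 180),
                             Fixation(500, 600, 220)]))  # -> 0.45
```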
55

Attention Modification to Attenuate Facial Emotion Recognition Deficits in Children with ASD

Wieckowski, Andrea Trubanova 04 February 2019 (has links)
Prior studies have identified diminished attending to faces, and in particular to the eye region, in individuals with Autism Spectrum Disorder (ASD), which may contribute to the impairments they experience with emotion recognition and expression. The current study evaluated the acceptability, feasibility, and preliminary effectiveness of an attention modification intervention designed to attenuate deficits in facial emotion recognition and expression in children with ASD. During the 10-session experimental treatment, children watched videos of people expressing different emotions, with the facial features highlighted to guide children's attention. Eight of the nine children with ASD who began the treatment completed it. On average, the children and their parents rated the treatment as acceptable and helpful. Although treatment efficacy, in terms of improved facial emotion recognition (FER), was not apparent on task-based measures, children and their parents reported slight improvements, and most parents indicated decreased socioemotional problems following treatment. Results of this preliminary trial suggest that further clinical research on visual attention retraining for ASD, within an experimental therapeutic program, may be promising. / PHD / Previous studies have shown that individuals with Autism Spectrum Disorder (ASD) look less at faces, especially the eyes, which may contribute to the difficulties they show in recognizing others' emotions and expressing their own. This study looked at a new treatment designed to reduce these difficulties in emotion recognition and expression in children with ASD. The study examined whether the treatment is feasible, acceptable to children and their parents, and successful in reducing difficulty with emotion recognition. During the 10-session treatment, children watched videos of people making different expressions. The faces of the actors in the videos were highlighted to show the children the important areas to look at. Eight of the nine children who started the treatment completed it. On average, the children and their parents said the treatment was acceptable and helpful. While the treatment did not improve children's ability to recognize emotions on others' faces on several tasks, children and their parents reported slight improvements. In addition, most parents reported fewer problems with social skills and emotion recognition and expression after the treatment. These results suggest that more clinical research may be warranted to evaluate the usefulness of such attention retraining for children with ASD.
56

Determining Correlation Between Video Stimulus and Electrodermal Activity

Tasooji, Reza 06 August 2018 (has links)
With the growth of wearable devices capable of measuring physiological signals, affective computing is becoming increasingly popular. One such physiological signal is the electrodermal activity (EDA) signal. We explore how a video stimulus intended to arouse fear affects the EDA signal. To better understand the EDA signal, two different media, a scene from a movie and a scene from a video game, were selected to arouse fear. We conducted a user study with 20 participants, analyzed the differences between the media, and propose a method capable of detecting the highlights of the stimulus using only EDA signals. The study results show no significant differences between the two media except that users are more engaged with the content of the video game. From the gathered data, we propose a similarity measurement method for clustering users based on how similarly they reacted to different highlights. The results show that for a 300-second stimulus, using a window size of 10 seconds, our approach for detecting highlights of the stimulus has a precision of 1.0 for both media, and F1 scores of 0.85 and 0.84 for the movie and the video game, respectively. / Master of Science / In this work, we explore different approaches to analyzing and clustering EDA signals. Two different media, a scene from a movie and a scene from a video game, were selected to arouse fear. Through a user study with 20 participants, we analyzed the differences between the two media and propose a method capable of detecting highlights of the video clip using only EDA signals. The results show no significant differences between the two media except that users are more engaged with the content of the video game. From the gathered data, we propose a similarity measurement method for clustering users based on how similarly they reacted to different highlights.
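A minimal sketch of window-based highlight detection and its precision/F1 scoring as described above; the detection rule (window mean exceeding a baseline plus threshold), sampling rate, threshold, and annotations are assumptions, not the thesis's method.

```python
# Sketch: window-based highlight detection on an EDA signal, scored
# with precision and F1 against annotated highlight windows.
import numpy as np

def detect_highlights(eda, fs=4, window_s=10, threshold=0.1):
    """Flag 10-second windows whose mean EDA exceeds baseline + threshold."""
    win = fs * window_s
    baseline = eda.mean()
    n = len(eda) // win
    return np.array([eda[i * win:(i + 1) * win].mean() > baseline + threshold
                     for i in range(n)])

def precision_f1(pred, truth):
    tp = np.sum(pred & truth)
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, f1

eda = np.abs(np.random.default_rng(0).normal(size=300 * 4))  # 300 s at 4 Hz (toy)
pred = detect_highlights(eda)
truth = np.zeros_like(pred)
truth[[5, 12, 20]] = True  # toy highlight annotations
print(precision_f1(pred, truth))
```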
57

Can adults with autism spectrum disorders infer what happened to someone from their emotional response?

Cassidy, S., Ropar, D., Mitchell, Peter, Chapman, P. 04 June 2020 (has links)
Can adults with autism spectrum disorders (ASD) infer what happened to someone from their emotional response? Millikan has argued that in everyday life, others' emotions are most commonly used to work out the antecedents of behavior, an ability termed retrodictive mindreading. As those with ASD show difficulties interpreting others' emotions, we predicted that these individuals would have difficulty with retrodictive mindreading. Sixteen adults with high-functioning autism or Asperger's syndrome and 19 typically developing adults viewed 21 video clips of people reacting to one of three gifts (chocolate, Monopoly money, or a homemade novelty) and then inferred what gift the recipient received and the emotion expressed by that person. Participants' eye movements were recorded while they viewed the videos. Results showed that participants with ASD were only less accurate when inferring who received a chocolate or homemade gift. This difficulty was not due to a lack of understanding of which emotions were appropriate in response to each gift, as both groups gave consistent gift and emotion inferences significantly above chance (genuine positive for chocolate and feigned positive for homemade). Those with ASD did not look significantly less at the eyes of faces in the videos, and looking at the eyes did not correlate with accuracy on the task. These results suggest that those with ASD are less accurate when retrodicting events involving recognition of genuine and feigned positive emotions, and challenge claims that lack of attention to the eyes causes emotion recognition difficulties in ASD. / University of Nottingham, School of Psychology
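The above-chance claim can be illustrated with a one-sided binomial test against the 1/3 chance level for three gift types; the counts below are illustrative, not the study's data.

```python
# Sketch: testing whether gift inferences exceed chance (1/3 for
# three gift types). Counts are toy values, not the study's results.
from scipy.stats import binomtest

correct, total = 14, 21  # e.g., 14 of 21 clips inferred correctly (toy)
result = binomtest(correct, total, p=1 / 3, alternative="greater")
print(f"p-value vs. chance: {result.pvalue:.4f}")
```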
58

Adults' and Children's Identification of Faces and Emotions from Isolated Motion Cues

Gonsiorowski, Anna 09 May 2016 (has links)
Faces communicate a wealth of information, including cues to others’ internal emotional states. Face processing is often studied using static stimuli; however, in real life, faces are dynamic. The current project examines face detection and emotion recognition from isolated motion cues. Across two studies, facial motion is presented in point-light displays (PLDs), in which moving white dots against a black screen correspond to dynamic regions of the face. In Study 1, adults were asked to identify the upright facial motion of five basic emotional expressions (e.g., surprise) and five neutral non-rigid movements (e.g., yawning) versus inverted and scrambled distractors. Prior work with static stimuli finds that certain cues, including the addition of motion information, the spatial arrangement of elements, and the emotional significance of stimuli affect face detection. This study found significant effects involving each of these factors using facial PLDs. Notably, face detection was most accurate in response to face-like arrangements, and motion information was useful in response to unusual point configurations. These results suggest that similar processes underlie the processing of static face images and isolated facial motion cues. In Study 2, children and adults were asked to match PLDs of emotional expressions to their corresponding labels (e.g., match a smiling PLD with the word “happy”). Prior work with face images finds that emotion recognition improves with age, but the developmental trajectory depends critically on the emotion to be recognized. Emotion recognition in response to PLDs improved with age, and there were different trajectories across the five emotions tested. Overall, this dissertation contributes to the understanding of the influence of motion information in face processing and emotion recognition, by demonstrating that there are similarities in how people process full-featured static faces and isolated facial motion cues in PLDs (which lack features). The finding that even young children can detect emotions from isolated facial motion indicates that features are not needed for making these types of social judgments. PLD stimuli hold promise for future interventions with atypically developing populations.
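A sketch of how the inverted and scrambled distractors used in Study 1 could be constructed from point-light trajectories; the array shapes, coordinate convention, and offsets below are hypothetical.

```python
# Sketch: building inverted and scrambled distractors from point-light
# trajectories. `points` is a hypothetical (frames x dots x 2) array of
# normalized dot coordinates tracking facial motion.
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(0.3, 0.7, size=(60, 20, 2))  # 60 frames, 20 dots (toy)

# Inverted distractor: flip the display vertically; local motion is
# intact but the upright face-like configuration is disrupted.
inverted = points.copy()
inverted[..., 1] = 1.0 - inverted[..., 1]

# Scrambled distractor: shift each dot's whole trajectory to a random
# location, destroying the face-like spatial layout while preserving
# each dot's motion pattern.
offsets = rng.uniform(-0.3, 0.3, size=(1, points.shape[1], 2))
scrambled = points + offsets
```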
59

EMOTION RECOGNITION AND SOCIAL FUNCTIONING IN CHILDREN WITH AND WITHOUT ATTENTION DEFICIT HYPERACTIVITY DISORDER

Aldea, Rebecca Flake 01 January 2013 (has links)
This study examined the emotion recognition of children (ages 7-9 years) with and without Attention Deficit Hyperactivity Disorder (ADHD). Children completed two emotion recognition measures, the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA2) and the Child and Adolescent Social Perception measure (CASP). Children and their parents also completed an assessment of children’s social skills, the Social Skills Improvement System (SSIS). Children with ADHD reported a significantly greater level of depressive symptoms and had significantly lower full scale IQ scores than children without ADHD. When these differences were accounted for, children with ADHD continued to show a handful of deficits in emotion recognition. They demonstrated difficulties on the DANVA2 with specific emotions, fear and sadness. On the CASP, children with ADHD made significantly more errors than children without ADHD due to a tendency to make up information to explain how they were able to identify feelings. Children’s performance on the emotion recognition measures did not significantly mediate the relation between their diagnostic status and social skills (as rated by parents). In summary, additional evidence was found regarding the deficits in emotion recognition experienced by children with ADHD; however, further work is needed to determine whether these deficits relate to the peer difficulties experienced by these children.
60

inHarmony: a Digital Twin for Emotional Well-being

Albraikan, Amani 24 May 2019 (has links)
A digital twin is an enabling technology that facilitates monitoring, understanding, and providing continuous feedback to improve quality of life and well-being. A digital twin can therefore be considered a means of enhancing one's mood to improve quality of life and emotional well-being. However, there remains a long road ahead before digital twin systems of this kind can be developed and deployed, because many elements and components can guide the design of a digital twin. This thesis provides a general discussion of the central elements of an emotional digital twin, including emotion detection, emotional biofeedback, and emotion-aware recommender systems. In the first part of this thesis, we propose and study emotion detection models and algorithms. For emotions, which are known to be highly user dependent, improvements to the emotion learning algorithm can significantly boost its predictive power. We aimed to improve the accuracy of the classifier using peripheral physiological signals. Here, we present a hybrid sensor fusion approach based on a stacking model that allows data from multiple sensors and emotion models to be jointly embedded within a user-independent model. In the second part of this thesis, we propose a real-time mobile biofeedback system that uses wearable sensors to depict five basic emotions and provides the user with emotional feedback. These systems apply the concept of live biofeedback through the introduction of an emotion-aware digital twin. An essential element in these systems guides users through an emotion-regulation routine. The proposed systems aim to increase self-awareness through visual feedback and to provide insight into the future design of digital twins. We focus on workplace environments, and the recommendations are based on human emotions and the regulation of emotion within the construct of emotional intelligence. The objective is to suggest coping techniques to a user during an emotional, stressful episode based on his or her preferences, history of what worked well, and appropriateness for the context. The developed solution has been evaluated in usability studies and extensively compared to related work. The obtained results show its potential use as an emotional digital twin. In turn, the proposed solution provides significant insights that will guide future development of digital twins across several scenarios and settings.
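A minimal sketch of the stacking-based sensor fusion the abstract mentions, with per-sensor base learners fused by a meta-learner; the base models, feature arrays, and labels are placeholders rather than the thesis's configuration.

```python
# Sketch: a stacking model fusing features from multiple physiological
# sensors into one user-independent emotion classifier. Features and
# labels are hypothetical stand-ins for the thesis's sensor data.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 30))    # fused sensor features (toy)
y = rng.integers(0, 5, size=500)  # five basic emotion labels (toy)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),    # base learner 1
        ("rf", RandomForestClassifier()),  # base learner 2
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
)
print(stack.fit(X, y).score(X, y))
```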
