1

The influence of alexithymia and sex in the recognition of emotions from visual, auditory, and bimodal cues

Sanchez Cortes, Diana January 2013 (has links)
Alexithymia is a personality trait associated with impairments in emotional processing. This study investigated the influence of alexithymia and sex on the ability to recognize emotional expressions presented in faces, voices, and their combination. Alexithymia was assessed with the Toronto Alexithymia Scale (TAS-20), and participants (n = 122) judged 12 emotions displayed uni- or bimodally in two sensory modalities, as measured by the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). According to their scores, participants were grouped into low, average, and high alexithymia. The results showed that sex did not moderate the relationship between alexithymia and emotion recognition. The low alexithymia group recognized emotions more accurately than the other two subgroups, at least in the visual modality. No group differences were found in the voice and bimodal tasks. These findings illustrate the importance of accounting for how different modalities influence the presentation of emotional cues, and they suggest favouring dynamic instruments such as the GEMEP-CS, which increase ecological validity and are more sensitive at detecting individual differences, over posed techniques such as still pictures. / Genetic and neural factors underlying individual differences in emotion recognition ability
2

Facial Expression Recognition System

Ren, Yuan January 2008 (has links)
A key requirement for developing any innovative system in a computing environment is to integrate an interface that is sufficiently friendly to the average end user. Accurate design of such a user-centered interface, however, means more than just the ergonomics of the panels and displays. It also requires that designers precisely define what information to use and how, where, and when to use it. Facial expression, as a natural, non-intrusive, and efficient way of communicating, has been considered one of the potential inputs of such interfaces. The work of this thesis aims at designing a robust Facial Expression Recognition (FER) system by combining various techniques from computer vision and pattern recognition. Expression recognition is closely related to face recognition, where a great deal of research has been done and a vast array of algorithms has been introduced. FER can also be considered a special case of a pattern recognition problem, for which many techniques are available. In designing an FER system, we can take advantage of these resources and use existing algorithms as building blocks of our system, so a major part of this work is to determine the optimal combination of algorithms. To do this, we first divide the system into three modules (preprocessing, feature extraction, and classification), then implement several candidate methods for each, and finally find the optimal configuration by comparing the performance of the different combinations. Another issue of great interest to designers of facial expression recognition systems is the classifier, which is the core of the system. Conventional classification algorithms assume the image is a single-variable function of an underlying class label. However, this does not hold in face recognition, where the appearance of the face is influenced by multiple factors: identity, expression, illumination, and so on.
To solve this problem, this thesis proposes two new algorithms, namely Higher Order Canonical Correlation Analysis and Simple Multifactor Analysis, which model the image as a multivariable function. The issues addressed are challenging problems and are essential to developing a facial expression recognition system.
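The modular design this abstract describes — three modules, several candidate methods per module, and an exhaustive comparison of every combination — can be sketched as follows. This is a minimal illustration of the selection procedure only, not the thesis's implementation; all method names and the toy data are hypothetical stand-ins.

```python
from itertools import product

# Candidate methods for each of the three modules the abstract names:
# preprocessing, feature extraction, and classification.

def identity(x):
    return list(x)

def normalize(x):
    # Scale values by the sample's peak intensity.
    peak = max(x)
    return [v / peak for v in x] if peak else list(x)

def raw_features(x):
    return x

def diff_features(x):
    # First-order differences between neighbouring values.
    return [b - a for a, b in zip(x, x[1:])]

def mean_threshold(feats):
    # Label 1 ("expression present") if the mean feature exceeds 0.5.
    return 1 if sum(feats) / len(feats) > 0.5 else 0

def max_threshold(feats):
    return 1 if max(feats) > 0.9 else 0

def evaluate(pre, extract, classify, dataset):
    """Accuracy of one pipeline configuration on (sample, label) pairs."""
    hits = sum(classify(extract(pre(x))) == y for x, y in dataset)
    return hits / len(dataset)

def best_configuration(dataset):
    # Try every combination of candidate methods and keep the most accurate.
    candidates = {
        "preprocess": [identity, normalize],
        "extract": [raw_features, diff_features],
        "classify": [mean_threshold, max_threshold],
    }
    scored = [
        ((p.__name__, e.__name__, c.__name__),
         evaluate(p, e, c, dataset))
        for p, e, c in product(*candidates.values())
    ]
    return max(scored, key=lambda item: item[1])
```

Given a list of `(sample, label)` pairs, `best_configuration` returns the winning triple of method names with its accuracy; in a real FER system the candidate functions would be actual vision and pattern recognition algorithms, but the exhaustive comparison loop is the same.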
4

Facial affect recognition in psychosis

Bordon, Natalie Sarah January 2016 (has links)
A correlation between psychosis and an increased risk of engaging in aggressive behaviour has been established, and many factors that may contribute to this risk have been explored. Patients with a diagnosis of psychosis have been shown to have significant difficulties in facial affect recognition (FAR), and some authors have proposed that this may contribute to the risk of displaying aggressive or violent behaviour. A systematic review of the current evidence on the links between facial affect recognition and aggression was conducted. Results were mixed: some studies provided evidence of a link between emotion recognition difficulties and aggression, while others were unable to establish such an association. Results should be interpreted with some caution, as the quality of the included studies was poor owing to small sample sizes, insufficient power, and limited reporting of results. Adequately powered, randomised controlled studies using appropriate blinding procedures and validated measures are therefore required. There is a substantial evidence base demonstrating difficulties in emotional perception in patients with psychosis, with evidence suggesting a relationship with reduced social functioning, increased aggression, and more severe symptoms of psychosis. This review therefore assesses whether there is a causal link between facial affect recognition difficulties and psychosis. The Bradford Hill criteria for establishing a causal relationship from observational data were used to generate key hypotheses, which were then tested against existing evidence. Where a published meta-analysis was not already available, new meta-analyses were conducted. A large effect of FAR difficulties was found in those with a diagnosis of psychosis, together with a small-to-moderate correlation between FAR problems and symptoms of psychosis.
Evidence was provided for the existence of FAR problems in those at clinical high risk of psychosis, while remediation of psychosis symptoms did not appear to affect FAR difficulties. There appears to be good evidence that facial affect recognition difficulties play a role in the causation of psychosis, though larger, longitudinal studies are required to provide further support for this.
5

The association between maternal responsiveness and child social and emotional development

Best, Lara January 2013 (has links)
Introduction. A mother’s verbal and non-verbal behaviour towards her infant is known as maternal responsiveness (MR). Positive MR is associated with better child social and emotional development (SED). A mother’s ability to accurately recognise emotions is thought to enhance MR. Method. Data from 1,122 mother-infant interactions from a longitudinal birth cohort study were used, first to examine whether positive MR at 12 months was associated with better child and adolescent SED, and second to explore whether better maternal facial and vocal expression recognition at 151 months was associated with positive MR and child SED. MR was measured using the Thorpe Interaction Measure (TIM) from observed mother-infant interactions, and SED from questionnaire data, adjusting for potential confounding variables. A test of facial expression recognition was used, with vocal expression recognition additionally used in mothers. Results. Logistic regression revealed that positive MR was associated with positive SED outcomes in childhood but had little effect in adolescence. Positive MR was associated with mothers having better facial and vocal expression recognition at 151 months, and these recognition skills were associated with children showing fewer emotional problems at 158 months, independent of MR. Adjusting for confounding variables had no effect on these results. Conclusion. These findings support the benefit of positive MR to a child’s SED in middle childhood. Further, they suggest that a mother’s facial and vocal expression recognition skills are important to both MR and a child’s SED. Limitations include subjective reporting of SED.
6

Enhancing oral comprehension and emotional recognition skills in children with autism: A comparison of video self modelling with video peer modelling

Koretz, Jasmine May January 2007 (has links)
Video modelling has been shown to be an effective intervention for autistic individuals because it accommodates their autistic characteristics. Research on video self modelling and video peer modelling with this population has shown that both are effective. The purpose of this study was to replicate past findings that video modelling is an effective strategy for autistic individuals, and to compare video self modelling with video peer modelling to determine which is more effective. The studies here used multiple baseline with alternating treatments designs with 6 participants across two target behaviours: emotional recognition and oral comprehension. The first study compared the two video modelling methods and found that neither increased the target behaviours to criterion for 5 of the 6 participants; for 1 participant, criterion was reached only in the video self modelling condition for the target behaviour 'oral comprehension'. The second study first examined the effectiveness of video self modelling and video peer modelling with supplementary assistance for 4 participants, second examined a new peer video for a 5th participant, and third compared the two video modelling methods (with supplementary assistance). Results indicated that 1 participant reached criterion in both video modelling conditions, 1 participant showed improvements, and 2 participants never increased responding. This study indicated that the clarity of speech produced by the peer in the peer video may have contributed to a participant's level of correct responding, because a new peer video used during the second study dramatically increased this participant's responding. Intervention fidelity, generalisation, and follow-up data were examined. Measures of intervention fidelity indicated procedural reliability. Generalisation was unsuccessful across three measures, and follow-up data indicated trends similar to intervention.
Only video self modelling effects remained at criterion during follow-up. Results are discussed with reference to limitations, future research, and implications for practice.
7

Discerning Emotion Through Movement : A study of body language in portraying emotion in animation

Larsson, Pernilla January 2014 (has links)
Animators are so often taught more about how to perfect their animations than to consider what it is that makes an animation come alive. They work away with principles and physics, sometimes completely overlooking a character's communication tools. The following thesis is a study of emotive, expressive body language and its purpose in animation. The project studies body language from various angles, in an attempt to summarize key features that could serve as guidelines for animators in the future. It deals with the role of body language in animation and why it is necessary for a more realistic feel, and it briefly discusses the 12 animation principles, their necessity, and their shortcomings in this regard. The thesis is divided into a theoretical investigation and a practical experiment. The intention was to create a set of key features to serve as tools and guides for new animators in understanding and translating emotion into their animations. The results indicate the power of body language and its versatility as a tool, emphasizing why it ought not to be neglected.
