21. Emotion recognition and set shifting in women with anorexia nervosa. Hall, Royston, January 2018.
Objective: Neuropsychological models of anorexia nervosa (AN) propose that cognitive difficulties, including poor emotion recognition (ER) and set-shifting ability, may be central to the development and maintenance of eating pathology. This study aimed to test the central claims of such models by assessing specific ER difficulties in AN as well as the relationship between ER deficits and set-shifting performance. Methods: Fifty-one women were assessed (25 with AN, M = 28.20, SD = 8.69; 26 controls, M = 21.27, SD = 5.10) on a novel measure of ER, a set-shifting test, and self-report questionnaires concerning co-morbid factors. Results: The data did not reveal a global difference in ER or set-shifting performance between groups. Specific hypotheses of ER deficits in AN were also not supported, as performance on individual emotions was comparable between groups. There was an unexpected negative correlation between disgust recognition and set-shifting performance; however, this was significant only across the whole sample. ER performance was not related to any confounding factors. Conclusions: Despite an abundance of research supporting the presence of social cognitive difficulties in AN, the current study found neither global nor specific deficits in ER in the present sample. Similarly, ER performance was not related to set-shifting, as proposed by neuropsychological models of AN aetiology. Possible explanations for the lack of differences observed with this novel ER task are explored, and future directions for evaluating ER in AN are discussed.
22. Speaker and Emotion Recognition System of Gaussian Mixture Model. Wang, Jhong-yi, 1 August 2006.
In this thesis, a speaker and emotion recognition system is implemented on a PC and a digital signal processor (DSP). Speaker recognition and emotion recognition are usually implemented as separate systems; this thesis shows how the two can be combined in a single system. The voice is captured by a microphone, features are extracted on the DSP, and the extracted features are matched against trained models to produce the recognition result.
The recognition system is divided into four sub-systems: speech pre-processing, speaker model training, speaker and emotion recognition, and speaker verification. The pre-processing stage captures the voice through the microphone, transfers it via the DSP board to SRAM, and then applies the pre-processing operations. The training stage uses a Gaussian mixture model (GMM) to estimate the mean, variance, and mixture weight for each enrolled speaker; these parameters form the reference data for the whole recognition system. Speaker recognition identifies the speaker by evaluating the probability density of the observed features under each model, emotion recognition exploits the variance parameters to classify the emotion, and speaker verification checks whether the user matches the claimed speaker in the system database.
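As an illustration of the recognition stage just described, here is a minimal sketch of GMM-based speaker identification, using scikit-learn in place of the thesis's fixed-point DSP implementation; the random arrays and speaker names are hypothetical stand-ins for real acoustic features.

```python
# Minimal sketch (not the thesis's code): one GaussianMixture per enrolled
# speaker; identity is the model with the highest average log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(enrollment, n_components=16):
    """enrollment: dict of speaker name -> (n_frames, n_features) array."""
    models = {}
    for name, feats in enrollment.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        gmm.fit(feats)  # estimates means, diagonal variances, mixture weights
        models[name] = gmm
    return models

def identify_speaker(models, feats):
    """Return the speaker whose GMM assigns the utterance the highest likelihood."""
    scores = {name: gmm.score(feats) for name, gmm in models.items()}
    return max(scores, key=scores.get)

# Toy usage with random "features" standing in for real MFCC frames:
rng = np.random.default_rng(0)
enrollment = {"alice": rng.normal(0, 1, (500, 13)),
              "bob": rng.normal(2, 1, (500, 13))}
models = train_speaker_models(enrollment)
print(identify_speaker(models, rng.normal(2, 1, (200, 13))))  # -> "bob"
```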
The DSP-based recognition system consists of two parts: the hardware setup and the implementation of the recognition algorithm. A fixed-point DSP chip is used, with the Gaussian mixture model as the recognition algorithm. Compared with a floating-point DSP, the fixed-point DSP costs much less, which makes the system more affordable for users.
23. The association between traumatic brain injury, behavioural factors and facial emotion recognition skills in delinquent youth. Cook, Sarah, January 2014.
Objectives: To examine the association between traumatic brain injury (TBI) in delinquent youth and facial emotion recognition (FER) abilities, offending, behavioural difficulties, aggression, empathic sadness, and parenting. Participants & Setting: Forty-eight delinquent youth, aged 14 to 19 years, recruited from Youth Offending Teams and Targeted Youth Support. Main Measures: A cross-sectional case-control design compared individuals in a TBI group versus a non-TBI group on a forced-choice FER paradigm assessing recognition accuracy for six basic emotions. Self-report measures of TBI, behavioural difficulties, experience of parenting, reactive and proactive aggression, and empathic sadness were also administered. Results: History of TBI was reported by 68.7% of the sample, with 94% including a loss of consciousness. No significant differences were found between TBI and non-TBI groups on FER accuracy. Participants in the TBI group self-reported significantly higher proactive and reactive aggression and lower levels of parental supervision compared with the non-TBI group. The tendency to incorrectly give 'anger' as a response on the FER task was strongly positively associated with proactive and reactive aggression. Conclusions: Future research requires larger samples recruited across settings to further investigate the association between FER abilities and TBI in this population. Findings highlight the need for TBI to be appropriately assessed and managed in delinquent youth, and point to important differences in aggression.
24. CORRELATION BETWEEN COMPUTER RECOGNIZED FACIAL EMOTIONS AND INFORMED EMOTIONS DURING A CASINO COMPUTER GAME. Reichert, Nils, 9 January 2012.
Emotions play an important role in everyday communication. Various methods allow computers to recognize emotions, but most are trained on acted emotions, and it is unknown whether such models work for recognizing naturally occurring emotions. An experiment was set up to estimate the recognition accuracy of the emotion recognition software SHORE, which can detect the emotions angry, happy, sad, and surprised. Subjects played a casino game while being recorded, and the software's recognition was correlated with the recognition of ten human observers. The results showed strong agreement for happy faces, medium agreement for surprised faces, and weak agreement for sad and angry faces. In addition, questionnaires containing self-reported emotions were compared with the computer recognition, but only weak correlations were found. SHORE recognized emotions almost as well as humans did, but where humans had difficulty recognizing an emotion, the accuracy of the software was much lower.
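A hedged sketch of the kind of agreement analysis described, assuming NumPy and SciPy; the scores below are synthetic stand-ins for illustration, not the study's data.

```python
# Per-emotion correlation between a software's emotion scores and the
# proportion of ten human observers reporting that emotion (synthetic data).
import numpy as np
from scipy.stats import pearsonr

emotions = ["angry", "happy", "sad", "surprised"]
rng = np.random.default_rng(1)
n_clips = 40

# Hypothetical per-clip values: software score and human-observer proportion.
software = {e: rng.random(n_clips) for e in emotions}
humans = {e: np.clip(software[e] + rng.normal(0, 0.3, n_clips), 0, 1)
          for e in emotions}

for e in emotions:
    r, p = pearsonr(software[e], humans[e])
    print(f"{e:>9}: r = {r:+.2f}, p = {p:.3f}")
```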
25. Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding. Maas, Casey, 1 January 2014.
Facial expressions provide valuable information in making judgments about internal emotional states. Evaluation of facial expressions can occur through mimicry processes via the mirror neuron system (MNS) pathway, where a decoder mimics a target’s facial expression and proprioceptive perception prompts emotion recognition. Female participants rated emotional facial expressions when mimicry was inhibited by immobilization of facial muscles and when mimicry was uncontrolled, and were evaluated for self-expressiveness level. A mixed ANOVA was conducted to determine how self-expressiveness level and manipulation of facial muscles impacted recognition accuracy for facial expressions. Main effects of self-expressiveness level and facial muscle manipulation were not found to be significant (p > .05), nor did these variables appear to interact (p > .05). The results of this study suggest that an individual’s self-expressiveness level and use of mimicry processes may not play a central role in emotion recognition.
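As a sketch of the mixed-design analysis described above, here is a minimal example assuming the pingouin library; the data, column names, and group sizes are entirely hypothetical.

```python
# Mixed ANOVA sketch: mimicry condition as the within-subject factor,
# self-expressiveness level as the between-subjects factor (synthetic data).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
rows = []
for pid in range(40):
    expressiveness = "high" if pid < 20 else "low"
    for condition in ("free", "inhibited"):
        rows.append({"participant": pid,
                     "expressiveness": expressiveness,
                     "mimicry": condition,
                     "accuracy": rng.normal(0.75, 0.1)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="accuracy", within="mimicry",
                     subject="participant", between="expressiveness")
print(aov[["Source", "F", "p-unc"]])  # main effects and interaction
```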
26. Context Recognition Methods using Audio Signals for Human-Machine Interaction. January 2015.
Audio signals, such as speech and ambient sounds, convey rich information pertaining to a user's activity, mood, or intent. Enabling machines to understand this contextual information is necessary to bridge the gap in human-machine interaction, but it is challenging because such information is subjective, and it therefore requires sophisticated techniques. This dissertation presents a set of computational methods that generalize well across different conditions, for speech-based applications involving emotion recognition and keyword detection and for ambient-sound-based applications such as lifelogging.
The expression and perception of emotions vary across speakers and cultures; features and classification methods that generalize well to different conditions are therefore strongly desired. A method based on latent topic models is proposed to learn supra-segmental features from low-level acoustic descriptors, and the derived features outperform state-of-the-art approaches on multiple databases. Cross-corpus studies are conducted to determine how well these features generalize across databases. The proposed method is also applied to derive features from facial expressions; multi-modal fusion overcomes the deficiencies of a speech-only approach and further improves recognition performance.
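A rough reconstruction of the latent-topic idea, under the assumption that it resembles a bag-of-acoustic-words pipeline (quantize low-level descriptors into discrete "words", then take topic proportions as utterance-level features); this is a generic scikit-learn sketch on synthetic data, not the dissertation's actual method.

```python
# Latent-topic supra-segmental features from low-level descriptors (LLDs).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(2)
frames = [rng.normal(size=(300, 13)) for _ in range(50)]  # 50 utterances of LLDs

# 1. Codebook over all frames: each frame becomes a discrete "acoustic word".
codebook = KMeans(n_clusters=64, n_init=10).fit(np.vstack(frames))

# 2. Bag-of-acoustic-words count vector per utterance.
counts = np.stack([np.bincount(codebook.predict(f), minlength=64)
                   for f in frames])

# 3. LDA topic proportions serve as fixed-length supra-segmental features.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
features = lda.fit_transform(counts)
print(features.shape)  # (50, 8): one feature vector per utterance
```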
Besides affecting the acoustic properties of speech, emotions strongly influence speech articulation kinematics. A learning approach is proposed that constrains a classifier trained on acoustic descriptors to also model articulatory data. The method requires articulatory information only during the training stage, overcoming the challenges inherent in large-scale articulatory data collection while exploiting the correlations between articulation kinematics and acoustic descriptors to improve the accuracy of emotion recognition systems.
Identifying context from ambient sounds in a lifelogging scenario requires feature extraction, segmentation, and annotation techniques capable of efficiently handling long-duration audio recordings; a complete framework for such applications is presented. Performance is evaluated on real-world data, accompanied by a prototypical Android-based user interface.
The proposed methods are also assessed in terms of computational and implementation complexity. Software and field-programmable gate array (FPGA) implementations are considered for emotion recognition, while virtual platforms are used to model the complexities of lifelogging. The derived metrics are used to determine the feasibility of these methods for applications requiring real-time operation and low power consumption. Doctoral dissertation, Electrical Engineering, 2015.
27. Ability of adults with a learning disability to recognise facial expressions of emotion: is there evidence for the emotion specificity hypothesis? Scotland, Jennifer, January 2015.
Aims: Research suggests that people with a learning disability have difficulty processing and interpreting facial expressions of emotion. Emotion recognition is a fundamental skill, and impairment in this area may be related to a number of negative social and functional outcomes, including increased frequency of aggressive behaviour, failure of community-based placements, and mental illness. This thesis therefore had three aims: to review systematically the evidence for the presence of emotion recognition impairments in adults with a learning disability compared with the non-learning-disabled population; to evaluate the emotion specificity hypothesis (which states that people with a learning disability perform less well on emotion recognition tasks as a result of a specific impairment in emotion recognition competence); and to evaluate the relationship between cognitive processing style and emotion recognition in people with a learning disability. Methods: The first paper is a systematic review of studies that compared the performance of adults with a learning disability with that of a non-learning-disabled control group on tasks of facial emotion recognition. The second paper reports on an empirical study that compared the performance of adults with a learning disability (n = 23) with adults (n = 23) and children (n = 23) without a learning disability on tasks of facial emotion recognition and control tasks. The third paper reports further results from the empirical study, examining the cognitive processing style of adults with a learning disability and of non-learning-disabled children and adults. Results: The systematic review found that all of the included studies reported evidence to support the proposal that adults with a learning disability are relatively impaired in recognising facial expressions of emotion. There are significant limitations associated with the research in this area, and further studies are required to provide insight into the possible causes of emotion recognition deficits in this group of people. In the empirical study, adults with a learning disability were relatively impaired on both emotion recognition and control tasks compared with both adult and child control groups. The availability of contextual information improved emotion recognition accuracy for adults with a learning disability. The demands of the task also had an effect: identifying a target emotion from a choice of two images, rather than from a choice of nine or by naming the emotion, improved accuracy. Adults with a learning disability were more likely to adopt a local processing style, and a global processing style was associated with greater accuracy on the emotion recognition tasks. Conclusions: Adults with a learning disability are relatively impaired in facial emotion recognition compared with non-learning-disabled adults and children. This relative impairment was also evident on control tasks, and therefore no evidence for the emotion specificity hypothesis was found. A number of issues for future research are raised, specifically regarding the development of control tasks with levels of difficulty comparable to emotion recognition tasks.
28. Social cognition in antisocial populations. Bratton, Helen, January 2015.
Introduction: Impairments in facial affect recognition have been linked to the development of various disorders. The aim of the current work was to conduct a systematic review and meta-analysis of studies examining whether this ability is impaired in males with psychopathic or antisocial traits compared with healthy individuals. Method: Studies were eligible for inclusion if they compared facial affect recognition in a) psychopathic vs. antisocial males, b) psychopathic males vs. healthy controls, or c) antisocial males vs. healthy controls. Primary outcomes were group differences in overall emotion recognition, fear recognition, and sadness recognition. Secondary outcomes were differences in recognition of disgust, happiness, surprise, and anger. Results: Fifteen papers comprising 214 psychopathic males, 491 antisocial males, and 386 healthy community controls were identified. In psychopathy, limited evidence suggested impairments in fear (k=2), sadness (k=1), and surprise (k=1) recognition relative to healthy individuals, but overall affect recognition ability was not affected (k=2). Findings were inconclusive for antisocial samples (k=4-6), although impairments in surprise (k=4) and disgust (k=5) recognition were observed. Psychopathic and antisocial samples did not differ in their ability to detect sadness (k=4), but psychopathic males were less able to recognise happiness (k=4) and surprise (k=3). Conclusion: Limited evidence suggests that psychopathic and antisocial personality traits are associated with small to moderate deficits in specific aspects of emotion recognition. However, considerable heterogeneity was identified, and study quality was often poor. Adequately powered studies using validated assessment measures, rater masking, and a priori public registration of hypotheses and methods are required.
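For readers unfamiliar with how such k-study results are pooled, here is a minimal DerSimonian-Laird random-effects sketch; the effect sizes and variances below are invented for illustration, not taken from the review.

```python
# Random-effects pooling of per-study effect sizes (e.g., standardized mean
# differences in fear recognition) via the DerSimonian-Laird estimator.
import numpy as np

def dersimonian_laird(effects, variances):
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                        # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q (heterogeneity)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (variances + tau2)          # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical effects from three studies (negative = deficit vs. controls):
pooled, se, tau2 = dersimonian_laird([-0.45, -0.30, -0.60], [0.04, 0.06, 0.09])
print(f"pooled g = {pooled:.2f} +/- {1.96 * se:.2f} (tau^2 = {tau2:.3f})")
```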
29. Improving Understanding of Emotional Speech Acoustic Content. Tinnemore, Anna, January 2017.
Children with cochlear implants show deficits in identifying the emotional intent of utterances without facial or body language cues. A known limitation of cochlear implants is their inability to accurately convey the fundamental frequency contour of speech, which carries the majority of the information needed to identify emotional intent. Without reliable access to the fundamental frequency, other acoustic cues to vocal emotion, if they can be identified, could be used to guide therapies for training children with cochlear implants to better identify vocal emotion. The current study analyzed recordings of adults speaking neutral sentences with a set array of emotions in a child-directed and an adult-directed manner. The goal was to identify acoustic cues that contribute to emotion identification and that may be enhanced in child-directed speech but are also present in adult-directed speech. Results showed significant differences in the variation of the fundamental frequency, the variation of intensity, and the rate of speech among emotions and between intended audiences.
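A sketch of how the three cues the study found informative could be extracted, assuming librosa; onset rate is used here as a rough proxy for speaking rate, and the file name in the usage comment is hypothetical.

```python
# Extract F0 variation, intensity variation, and an approximate speech rate
# from a recording (not the study's actual analysis pipeline).
import numpy as np
import librosa

def emotion_cues(path):
    y, sr = librosa.load(path, sr=None)
    # F0 contour via probabilistic YIN; unvoiced frames come back as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    f0_variation = np.nanstd(f0)                       # Hz
    # Intensity variation from the RMS envelope, expressed in dB.
    rms = librosa.feature.rms(y=y)[0]
    intensity_variation = np.std(20 * np.log10(rms + 1e-10))
    # Rough speaking-rate proxy: detected onsets per second.
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    rate = len(onsets) / (len(y) / sr)
    return f0_variation, intensity_variation, rate

# Usage (hypothetical file): f0_var, int_var, rate = emotion_cues("utt.wav")
```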
30. FACIAL EMOTION RECOGNITION ABILITY IN CHILDREN WITH AUTISM SPECTRUM DISORDERS. Unknown date.
The present study aimed to gain a better understanding of the emotion processing abilities of children with ASD between the ages of 4 and 8 by examining their ability to correctly recognize dynamic displays of emotion. Additionally, we examined whether children with ASD showed emotion-specific differences in their ability to accurately identify anger, happiness, sadness, and fear. Participants viewed a continuous display of neutral faces morphing into expressions of emotion. We aimed to measure observed power and asymmetry using EEG data in order to understand the neural activity that underlies the social aspects of ASD. Participants with ASD showed slower processing speed and decreased emotion sensitivity. On tasks that involved the recognition of expressions on the participants' own mothers' faces, differences were less apparent. These results suggest that children with ASD are capable of recognizing facial displays of emotion after repeated exposure; this should be explored further in future research. Thesis (M.A.), Florida Atlantic University, 2020.
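As a hedged illustration of the power and asymmetry measures the study aimed at, here is a minimal SciPy sketch of a frontal alpha-asymmetry score computed over two EEG channels; the signals are synthetic noise, and the channel pairing is hypothetical.

```python
# Alpha-band (8-13 Hz) power asymmetry: ln(right power) - ln(left power).
import numpy as np
from scipy.signal import welch

def alpha_asymmetry(left, right, fs):
    """Compute ln(right) - ln(left) alpha power for a homologous channel pair."""
    def alpha_power(x):
        f, pxx = welch(x, fs=fs, nperseg=2 * fs)   # power spectral density
        band = (f >= 8) & (f <= 13)
        return np.sum(pxx[band]) * (f[1] - f[0])   # integrate over the band
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

rng = np.random.default_rng(3)
fs = 250  # Hz; 10 s of synthetic data per channel
print(alpha_asymmetry(rng.normal(size=10 * fs), rng.normal(size=10 * fs), fs))
```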