51.
The Influence of Aging, Gaze Direction, and Context on Emotion Discrimination Performance. Minton, Alyssa Renee. 01 April 2019.
This study examined how younger and older adults differ in their ability to discriminate between pairs of emotions of varying degrees of similarity when presented with an averted or direct gaze in a neutral, congruent, or incongruent emotional context. For Task 1, participants were presented with three blocks of emotion pairs (i.e., anger/disgust, sadness/disgust, and fear/disgust) and were asked to indicate which emotion was being expressed. The actors' gaze direction was manipulated such that emotional facial expressions were depicted with either a direct or an averted gaze. For Task 2, the same stimuli were placed into emotional contexts (e.g., evocative backgrounds and expressive body postures) that were either congruent or incongruent with the emotional facial expression. Participants made emotion discrimination judgments for two emotion pairings: anger/disgust (High Similarity condition) and fear/disgust (Low Similarity condition). Discrimination performance varied as a function of age, gaze direction, degree of similarity of the emotion pairs, and the congruence of the context. Across tasks, performance was best when evaluating less similar emotion pairs and worst when evaluating more similar pairs. In addition, evaluating emotion in stimuli with an averted gaze generally led to poorer performance than evaluating stimuli communicating emotion with a direct gaze. These outcomes held for both age groups. When participants observed emotional facial expressions in the presence of congruent or incongruent emotional contexts, age differences in discrimination performance were most pronounced when the context did not support one's estimation of the emotion expressed by the actors.
52.
Facial Expression Discrimination in Adults Experiencing Posttraumatic Stress Symptoms. Lee, Brian N. 01 December 2011.
The present study examined the impact of posttraumatic stress symptoms (PTSS) on adults' ability to discriminate between various facial expressions of emotion. Additionally, the study examined whether individuals reporting PTSS exhibited an attentional bias toward threat-related facial expressions. The research design was a 2 (expression intensity) x 3 (emotion pairing) x 2 (PTSS group) mixed-model factorial design. Participants were 89 undergraduates recruited from psychology courses at Western Kentucky University. Participants completed the Traumatic Stress Schedule to assess prior exposure to traumatic events, and a median split was used to divide the sample into two groups (i.e., low and high PTSS). Participants also completed a demographics questionnaire, the Impact of Event Scale-Revised, the Center for Epidemiologic Studies Depression Scale, and the Depression Anxiety Stress Scales to assess possible covariates. Participants then completed the discrimination of facial expressions task and the dot probe position task. Results indicate that individuals experiencing high levels of PTSS have difficulty discriminating between threatening and non-threatening facial expressions, and that this difficulty is exacerbated by comorbid anxiety symptoms. Furthermore, results suggest that these individuals focus attention on threatening facial expressions while avoiding expressions that may activate memories associated with the prior trauma. These findings have significant clinical implications: clinicians could focus treatment on correcting these difficulties, which should help promote more beneficial social interactions for individuals experiencing high levels of PTSS. Additionally, these behavioral measures could be used to assess the effectiveness of treatment.
Effective treatment should help alleviate these difficulties, which could be measured by improved performance on the discrimination of facial expressions task and the dot probe position task from baseline to post-treatment.
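The median-split grouping described above is straightforward to sketch in code; the scores and the tie-handling rule below are illustrative assumptions, not the study's actual procedure:

```python
# Hedged sketch of a median split: divide participants into low/high PTSS
# groups at the sample median. Scores are hypothetical; the tie rule
# (scores equal to the median go to the low group) is an assumption.
import statistics

def median_split(scores):
    """Return (low_group, high_group, median), each group a list of (index, score)."""
    med = statistics.median(scores)
    low = [(i, s) for i, s in enumerate(scores) if s <= med]
    high = [(i, s) for i, s in enumerate(scores) if s > med]
    return low, high, med

# Ten hypothetical PTSS severity scores
scores = [3, 12, 7, 25, 9, 18, 2, 30, 11, 16]
low, high, med = median_split(scores)
```

A known limitation of median splits is that participants just above and just below the median land in different groups despite nearly identical scores, which is one reason some researchers prefer analyzing symptom severity continuously.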
53.
Slower Recognition of Emotions in Facial Expressions in Individuals with Exhaustion Disorder: A Pilot Study (Långsammare igenkänning av emotioner i ansiktsuttryck hos individer med utmattningssyndrom: En pilotstudie). Löfdahl, Tomas; Wretman, Mattias. January 2012.
The aim of this pilot study was to generate hypotheses about whether and how exhaustion disorder affects the ability to recognize emotions in facial expressions. A group of patients with exhaustion disorder was compared with a matched healthy control group (N=14). The groups were tested with a computer-based task consisting of color photographs of authentic facial expressions that changed gradually, in 10% increments, from a neutral expression to one of the five basic emotions anger, disgust, fear, happiness, and sadness. Performance was measured in terms of recognition accuracy and response speed. The results showed that the patient group responded significantly more slowly than the control group across all emotions in the test. No emotion-specific differences, and no differences in recognition accuracy, could be demonstrated between the groups. The causes of the discrepancy in response speed were discussed in terms of four possible explanatory domains: face-perception function, visual attention, self-focused attention, and carefulness/worry. Recommendations were made for future research to explore these areas further.
54.
Perceived Parenting Styles, Emotion Recognition, and Emotion Regulation in Relation to Psychological Well-Being: Symptoms of Depression, Obsessive-Compulsive Disorder, and Social Anxiety. Aka, Turkuler B. 01 June 2011.
The purpose of the current study was to examine the paths from perceived parenting styles, through emotion recognition and emotion regulation, to psychological well-being in terms of depression, obsessive-compulsive disorder, and social anxiety symptoms. For this purpose, 530 adults (402 female, 128 male) between the ages of 18 and 36 (M = 22.09, SD = 2.78) participated in the study. Data were collected with a questionnaire battery including a Demographic Category Sheet, the Short-EMBU (Egna Minnen Beträffande Uppfostran; My Memories of Upbringing), the "Reading the Mind in the Eyes" Test (Revised), the Emotion Regulation Questionnaire, the Emotion Regulation Processes measure, the Beck Depression Inventory, the Liebowitz Social Anxiety Scale, the Maudsley Obsessive Compulsive Inventory, the White Bear Suppression Inventory, the Thought-Action Fusion Scale, and the Emotional Approach Coping Scale. The psychometric properties of the Emotion Regulation Questionnaire and the Emotion Regulation Processes measure were investigated and found to show good validity and reliability. Three sets of hierarchical multiple regression analyses were conducted to reveal the significant associates of psychological well-being. As expected, the results revealed that perceived parenting styles and different emotion regulation strategies and processes were associated with psychological well-being in terms of depression, obsessive-compulsive disorder, and social anxiety symptoms. The findings and their implications, with suggestions for future research and practice, are discussed in the light of the relevant literature.
55.
Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition. Beer, Jenay Michelle. 08 April 2010.
An agent's facial expression may communicate emotive state to users both young and old. The ability to recognize emotions has been shown to differ with age, with older adults more commonly misidentifying the facial emotions of anger, fear, and sadness. This study examined whether emotion recognition of facial expressions differed between types of on-screen agents and between age groups. Three on-screen characters were compared: a human, a synthetic human, and a virtual agent. In this study, 42 younger (ages 18-28) and 42 older (ages 65-85) adults completed an emotion recognition task with static pictures of the characters demonstrating four basic emotions (anger, fear, happiness, and sadness) and neutral. The human face resulted in the highest proportion match, followed by the synthetic human, then the virtual agent with the lowest proportion match. Both the human and synthetic human faces showed age-related differences for the emotions anger, fear, sadness, and neutral, with younger adults achieving a higher proportion match. The virtual agent showed age-related differences for the emotions anger, fear, happiness, and neutral, again with younger adults achieving a higher proportion match. The data analysis and interpretation differed from previous work in two ways. First, the misattributions participants made when identifying emotions were investigated. Second, a similarity index of the feature placement between any two virtual agent emotions was calculated, suggesting that emotions were commonly misattributed as other emotions similar in appearance. Overall, these results suggest that age-related differences transcend human faces to other types of on-screen characters, and that differences between older and younger adults in emotion recognition may be further explained by perceptual discrimination between two emotions of similar feature appearance.
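A feature-placement similarity index of the kind described can be illustrated as follows; the landmark representation and the distance-to-similarity mapping are hypothetical, since the abstract does not give the actual formula:

```python
# Hypothetical sketch of a feature-placement similarity index between two
# facial expressions. The landmark set and the distance-based mapping are
# assumptions; the thesis's actual index may be defined differently.
import math

def similarity_index(landmarks_a, landmarks_b):
    """Map mean landmark displacement to a 0..1 similarity (1 = identical placement)."""
    assert len(landmarks_a) == len(landmarks_b)
    dists = [math.dist(p, q) for p, q in zip(landmarks_a, landmarks_b)]
    mean_d = sum(dists) / len(dists)
    return 1.0 / (1.0 + mean_d)

# Toy (x, y) positions for brow, eye-corner, and mouth-corner features
anger   = [(0.0, 1.0), (1.0, 0.0), (2.0, -0.5)]
disgust = [(0.0, 0.8), (1.0, 0.1), (2.0, -0.4)]
happy   = [(0.0, 2.0), (1.0, 0.5), (2.0, 1.5)]
```

Under such an index, anger and disgust score as more similar than anger and happiness, mirroring the finding that misattributions cluster among emotions with similar feature appearance.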
56.
Supervised feature learning via sparse coding for music information retrieval. O'Brien, Cian John. 08 June 2015.
This thesis explores the ideas of feature learning and sparse coding for Music Information Retrieval (MIR). Sparse coding is an algorithm that aims to learn new feature representations from data automatically. In contrast to previous work using sparse coding in an MIR context, the concept of supervised sparse coding is also investigated, which makes explicit use of ground-truth labels during the learning process. Sparse coding and supervised coding are applied to two MIR problems: classification of musical genre and recognition of the emotional content of music. A variant of Label Consistent K-SVD is used to add supervision during the dictionary learning process.
In the case of Music Genre Recognition (MGR), an additional discriminative term is added to encourage tracks from the same genre to have similar sparse codes. For Music Emotion Recognition (MER), a linear regression term is added to learn an optimal classifier and dictionary pair. The results indicate that while sparse coding performs well for MGR, the additional supervision fails to improve performance. In the case of MER, supervised coding significantly outperforms both standard sparse coding and commonly used designed features, namely MFCCs and pitch chroma.
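The core sparse coding step, inferring a sparse code for a signal given a dictionary, can be sketched with iterative soft-thresholding (ISTA). The random dictionary, sparsity weight, and iteration count below are illustrative; the thesis itself builds on (Label Consistent) K-SVD, which additionally learns the dictionary and a supervision term:

```python
# Minimal ISTA sketch for sparse coding with a fixed dictionary D:
# approximately solve  min_a  0.5*||x - D a||^2 + lam*||a||_1.
# All sizes and the sparsity weight lam are illustrative assumptions.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, x, lam=0.05, n_iter=200):
    L = np.linalg.norm(D, ord=2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]     # a 3-sparse ground-truth code
x = D @ a_true
a_hat = sparse_code(D, x)
```

Supervised variants such as Label Consistent K-SVD keep this coding step but augment the training objective with label-driven terms, which is the modification the thesis evaluates for MGR and MER.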
57.
Inferring Speaker Affect in Spoken Natural Language Communication. Pon-Barry, Heather Roberta. 15 March 2013.
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening—interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, is the task of using information in the speech signal to infer a person’s emotional or mental state. In this dissertation, our approach is to assess the utility of prosody, or manner of speaking, in classifying speaker affect. Prosody refers to the acoustic features of natural speech: rhythm, stress, intonation, and energy. Affect refers to a person’s emotions and attitudes such as happiness, frustration, or uncertainty. We focus on one specific dimension of affect: level of certainty. Our goal is to automatically infer whether a person is confident or uncertain based on the prosody of his or her speech. Potential applications include conversational dialogue systems (e.g., in educational technology) and voice search (e.g., smartphone personal assistants). There are three main contributions of this thesis. The first contribution is a method for eliciting uncertain speech that binds a speaker’s uncertainty to a single phrase within the larger utterance, allowing us to compare the utility of contextually-based prosodic features. Second, we devise a technique for computing prosodic features from utterance segments that both improves uncertainty classification and can be used to determine which phrase a speaker is uncertain about. The level of certainty classifier achieves an accuracy of 75%. 
Third, we examine the differences between perceived, self-reported, and internal level of certainty, concluding that perceived certainty is aligned with internal certainty for some but not all speakers and that self-reports are a good proxy for internal certainty. (Engineering and Applied Sciences)
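Two of the prosodic cues mentioned, energy and intonation, can be approximated at the frame level as in the sketch below; the sample rate, frame length, and the simple autocorrelation pitch estimator are illustrative choices, not the thesis's actual feature pipeline:

```python
# Hedged sketch of frame-level prosodic features: RMS energy and a crude
# autocorrelation F0 (pitch) estimate. Real systems use more robust pitch
# trackers; the 16 kHz rate and 50 ms frame here are illustrative.
import numpy as np

def rms_energy(frame):
    return float(np.sqrt(np.mean(frame ** 2)))

def estimate_f0(frame, sr, fmin=80.0, fmax=400.0):
    """Pick the autocorrelation peak within a plausible pitch-period range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(int(0.05 * sr)) / sr          # one 50 ms frame
frame = np.sin(2 * np.pi * 150.0 * t)       # synthetic 150 Hz "voiced" frame
f0 = estimate_f0(frame, sr)
energy = rms_energy(frame)
```

Contours of such frame-level values over an utterance (or over the uncertain phrase alone, as in the segment-based features described above) are what a certainty classifier would consume.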
58.
POKERFACE: EMOTION BASED GAME-PLAY TECHNIQUES FOR COMPUTER POKER PLAYERS. Cockerham, Lucas. 01 January 2004.
Numerous algorithms and methods exist for creating computer poker players; this thesis compares and contrasts them. A set of poker agents for the PokerFace system is then introduced. A survey of the problem of facial expression recognition is included, in the hope that it may be used to build a better computer poker player.
59.
Investigation of Speech Emotion Features (Kalbos emocijų požymių tyrimas). Žukas, Gediminas. 17 June 2014.
This Master's thesis examines the task of automatic speech emotion recognition. Although the popularity of this area has grown considerably in recent years, there is still a lack of literature describing how well specific acoustic features (or feature sets) perform in recognizing emotions in speech. This gap defined the aim of the thesis: to investigate the application of acoustic features to speech emotion recognition. The work includes an analysis of feature systems and the development of a testing system for emotion feature sets, which was then used to evaluate those sets. The results obtained are very similar to, or slightly better than, recently published emotion recognition results on the Berlin emotional speech database. Based on the experimental results, recommendations for constructing effective feature sets were formulated. Master's thesis for the degree in informatics engineering, Vilnius Gediminas Technical University, Vilnius, 2014.
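A common way to build the kind of feature sets evaluated here is to compute frame-level acoustic features and pool them into a fixed-length utterance vector with statistical functionals (mean, standard deviation, min, max); the two features below, RMS energy and zero-crossing rate, are illustrative stand-ins for a fuller set:

```python
# Hedged sketch of the frame-features-plus-functionals recipe used in many
# speech emotion recognition pipelines. Frame length, hop size, and the
# choice of features are illustrative assumptions.
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Slice a 1-D signal into overlapping frames (rows)."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def frame_features(frames):
    """Per-frame RMS energy and zero-crossing rate: shape (n_frames, 2)."""
    energy = np.sqrt(np.mean(frames ** 2, axis=1))
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.stack([energy, zcr], axis=1)

def functionals(feats):
    """Pool variable-length frame features into one fixed-length vector."""
    return np.concatenate([feats.mean(0), feats.std(0), feats.min(0), feats.max(0)])

rng = np.random.default_rng(1)
utterance = rng.normal(size=16000)          # 1 s of noise as a signal stand-in
vec = functionals(frame_features(frame_signal(utterance)))
```

Because the pooled vector has a fixed length regardless of utterance duration, feature sets built this way can be fed directly to standard classifiers, which is what makes comparing different sets on a corpus such as the Berlin database straightforward.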
60.
Emotion recognition in context. Stanley, Jennifer Tehan. 12 June 2008.
In spite of evidence for increased maintenance and/or improvement of emotional experience in older adulthood, past work suggests that young adults are better able than older adults to identify emotions in others. Typical emotion recognition tasks employ a single, closed-response methodology. Because older adults are more complex in their emotional experience than young adults, they may approach such response-limited emotion recognition tasks in a qualitatively different manner than young adults. The first study of the present research investigated whether older adults were more likely than young adults to interpret emotional expressions (facial task) and emotional situations (lexical task) as representing a mix of different discrete emotions. In the lexical task, older adults benefited more than young adults from the opportunity to provide more than one response. In the facial task, however, there was a cross-over interaction: older adults benefited more than young adults for anger recognition, whereas young adults benefited more than older adults for disgust recognition. A second study investigated whether older adults benefit more than young adults from contextual cues. The addition of contextual information improved the performance of older adults more than that of young adults. Age differences in anger recognition, however, persisted across all conditions. Overall, these findings are consistent with an age-related increase in the perception of mixed emotions in lexical information. Moreover, they suggest that contextual information can help disambiguate emotional information.