71. The social and emotional experiences of black lesbian couples in Seshego Township, Limpopo Province
Maotoana, M. R. (January 2019)
Thesis (Ph.D. (Psychology)) -- University of Limpopo, 2019 / South Africa has constitutional protection for the human rights of all its citizens. However, black lesbians in South Africa suffer physical, emotional and psychological abuse. This qualitative study aimed to elicit the social and emotional experiences of black lesbians living as same-sex partners in a township setting. The design of the study was exploratory and used a purposive sample of ten couples (twenty women). The investigation was underpinned by Social Domain Theory (SDT), which allowed for an understanding of the judgements people make in different social settings. Semi-structured interviews were conducted with each couple to collect data. The data were analysed using thematic content analysis (TCA), which yielded ten themes, namely: age and sexual orientation, suicide, education, lack of support, hate crimes, substance abuse, stigma, mental health, parenting, and discrimination. The discussion found that these themes echoed those in other local and international studies. However, corrective rape is peculiar to South Africa and was experienced by some participants in the study; in one case a brother, with the mother's support, raped his sister repeatedly. This took place in a country with a progressive constitution and laws. Social norms in the township allow black lesbian couples to suffer this type of abuse and to experience discrimination and stigmatisation daily. Recommendations included a more far-reaching quantitative study (as well as longitudinal studies) and more workshops and campaigns spreading knowledge about sexuality.
72. A study of facial expression recognition technologies on deaf adults and their children
Shaffer, Irene Rogan (30 June 2018)
Facial and head movements have important linguistic roles in American Sign Language (ASL) and other sign languages and can often significantly alter the meaning or interpretation of what is being communicated. Technologies that enable accurate recognition of ASL linguistic markers could be a step toward greater independence and empowerment for the Deaf community. This study involved gathering over 2,000 photographs of five hearing subjects, five Deaf subjects, and five Children of Deaf Adults (CODA) subjects. Each subject produced the six universal emotional facial expressions: sadness, happiness, surprise, anger, fear, and disgust. In addition, each Deaf and CODA subject produced six different ASL linguistic facial expressions. A representative set of 750 photos was submitted to six different emotional facial expression recognition services, and the results were processed and compared across different facial expressions and subject groups (hearing, Deaf, CODA).
Key observations from these results are presented. First, poor face detection rates are observed for Deaf subjects as compared to hearing and CODA subjects. Second, emotional facial expression recognition appears to be more accurate for Deaf and CODA subjects than for hearing subjects. Third, ASL linguistic markers, which are distinct from emotional expressions, are often misinterpreted as negative emotions by existing technologies. Possible implications of this misinterpretation are discussed, such as the problems that could arise for the Deaf community with increasing surveillance and use of automated facial analysis tools.
Finally, an inclusive approach is suggested for incorporating ASL linguistic markers into existing facial expression recognition tools. Several considerations are given for constructing an unbiased database of the various ASL linguistic markers, including the types of subjects that should be photographed and the importance of including native ASL signers in the photo selection and classification process.
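As an illustration of the kind of comparison described in this study, the sketch below aggregates per-photo results to compute face detection rates and recognition accuracy per subject group and service. The column names and values are assumptions, not data from the study, and the real service responses would first need to be flattened into this shape.

```python
import pandas as pd

# Hypothetical flattened results: one row per photo submitted to one service
df = pd.DataFrame({
    "group":         ["Deaf", "Deaf", "hearing", "CODA", "hearing", "CODA"],
    "service":       ["svc_a", "svc_b", "svc_a", "svc_a", "svc_b", "svc_b"],
    "face_detected": [False, True, True, True, True, True],
    "predicted":     [None, "anger", "happiness", "sadness", "happiness", "fear"],
    "expected":      ["happiness", "happiness", "happiness", "sadness", "happiness", "fear"],
})

# Face detection rate per subject group and service
detection_rate = df.groupby(["group", "service"])["face_detected"].mean()

# Emotion recognition accuracy, restricted to photos where a face was detected
detected = df[df["face_detected"]]
accuracy = (detected["predicted"] == detected["expected"]).groupby(detected["group"]).mean()

print(detection_rate)
print(accuracy)
```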
73. Automated Facial Emotion Recognition: Development and Application to Human-Robot Interaction
Liu, Xiao (28 August 2019)
No description available.
74. Neural correlates of emotion recognition in psychopaths: A systematic review
Norlin, Jenna; Saadula, Rendek (January 2023)
Science has recently begun showing interest in the mechanisms of the psychopathic brain, and current research points to structural and functional deficits in the brain regions of psychopaths. Psychopathy is a disorder distinguished by persistent antisocial behavior, emotional callousness, grandiose self-estimation, and lack of empathy. It is also a disorder that is hard to classify; the Hare Psychopathy Checklist-Revised (PCL-R) is the most common clinical rating scale used to diagnose psychopaths. This systematic review aims to scrutinize the literature on the neural correlates of emotion recognition in psychopaths. Following the PRISMA guidelines, the search was conducted through MEDLINE (EBSCO), Web of Science, PubMed, and Scopus, and articles were selected and reviewed against predefined eligibility criteria. All selected articles reported a significant result in which psychopaths performed poorly on emotion recognition, with important areas such as the prefrontal cortex and amygdala showing impaired function. Notably, because the studies used different test methods, their results cannot be directly compared; future studies should use the same tests to provide stronger and more comparable evidence. This systematic review was done to shed better light on the disorder.
75. The automatic recognition of emotions in speech
Manamela, Phuti John (January 2020)
Thesis (M.Sc. (Computer Science)) -- University of Limpopo, 2020 / Speech emotion recognition (SER) refers to technology that enables machines to detect and recognise human emotions from spoken phrases. In the literature, numerous attempts have been made to develop systems that can recognise human emotions from the voice; however, not much work has been done in the context of South African indigenous languages. The aim of this study was to develop an SER system that can classify and recognise six basic human emotions (i.e., sadness, fear, anger, disgust, happiness, and neutral) from speech spoken in Sepedi (one of South Africa's official languages). One of the major challenges encountered in this study was the lack of a proper corpus of emotional speech. Therefore, three different Sepedi emotional speech corpora consisting of acted speech data were developed: a Recorded-Sepedi corpus collected from recruited native speakers (9 participants), a TV broadcast corpus collected from professional Sepedi actors, and an Extended-Sepedi corpus that combines the Recorded-Sepedi and TV broadcast corpora. Features were extracted from the speech corpora and a data file was constructed. This file was used to train four machine learning (ML) algorithms (i.e., SVM, KNN, MLP and Auto-WEKA) using 10-fold cross-validation. Three experiments were then performed on the developed speech corpora and the performance of the algorithms was compared. The best results in all experiments were achieved with Auto-WEKA. Good results might have been expected for the TV broadcast corpus, since it was collected from professional actors, but the results showed otherwise. From the findings of this study, one can conclude that there are no precise or exact techniques for the development of SER systems; it is a matter of experimenting and finding the best technique for the study at hand. The study also highlighted the scarcity of SER resources for South African indigenous languages, and the quality of the dataset plays a vital role in the performance of SER systems. / National Research Foundation (NRF) and Telkom Center of Excellence (CoE)
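As a rough sketch of this kind of pipeline (not the study's actual code), the example below extracts mean MFCC features from audio files and compares SVM, KNN and MLP classifiers with 10-fold cross-validation in scikit-learn. The file names and labels are placeholders standing in for a full emotional speech corpus, and Auto-WEKA is omitted because it is a separate Java tool.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def extract_features(wav_path):
    # Load an utterance and summarise it as one fixed-length vector of mean MFCCs
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Placeholder lists: replace with the full corpus (all files, all six emotions)
# so that 10-fold cross-validation has enough samples per class
wav_files = ["anger_001.wav", "sadness_001.wav", "happiness_001.wav"]
labels = ["anger", "sadness", "happiness"]

X = np.array([extract_features(f) for f in wav_files])
y = np.array(labels)

# Compare three of the classifiers mentioned above with 10-fold cross-validation
classifiers = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(max_iter=500),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```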
76. Emotional Prosody in Adverse Acoustic Conditions: Investigating effects of emotional prosody and noise-vocoding on speech perception and emotion recognition
Ivarsson, Cecilia (January 2022)
Speech perception is a fundamental function of successful vocal communication, and through prosody we can communicate different emotions. The ability to recognize emotions is important in social interaction, and emotional prosody facilitates emotion recognition in vocal communication. Acoustic conditions are not always optimal, due to either environmental disturbances or hearing loss, and when perceiving speech and recognizing emotions we make use of multimodal sources of information. Studying the effect of noise-vocoding on speech perception and emotion recognition can increase our knowledge of these abilities, yet the effect of emotional prosody on speech perception and emotion recognition in adverse acoustic conditions is not widely explored. To explore the role of emotional prosody under adverse acoustic conditions, an online test was created. Eighteen participants (8 women) listened to semantically neutral sentences with different emotions expressed in the prosody, presented at five levels of noise-vocoding (NV1, NV3, NV6, NV12, and Clear). The participants' task was to reproduce the spoken words and identify the expressed emotion (happy, surprised, angry, sad, or neutral). A reading span test was included to investigate any potential correlation between working memory capacity and the ability to recognize emotions in prosody. Statistical analysis suggests that speech perception could be facilitated by emotional prosody when sentences are noise-vocoded. Emotion recognition accuracy differed between emotions across the noise levels: recognition of anger was least affected by noise-vocoding, and recognition of sadness was most affected. Correlation analysis showed no significant relationship between working memory capacity and emotion recognition accuracy.
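To make the manipulation concrete, here is a minimal sketch of channel noise-vocoding (NV1, NV3, NV6 and NV12 correspond to the number of channels). The filter boundaries, envelope cutoff and the synthetic input are illustrative assumptions, not the stimuli used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

def lowpass(x, cutoff, fs, order=4):
    sos = butter(order, cutoff / (fs / 2), btype="low", output="sos")
    return sosfiltfilt(sos, x)

def noise_vocode(speech, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    # Logarithmically spaced channel edges between f_lo and f_hi
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.randn(len(speech))
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, lo, hi, fs)                  # analysis band of the speech
        envelope = lowpass(np.abs(hilbert(band)), 30.0, fs)  # slow amplitude envelope
        carrier = bandpass(noise, lo, hi, fs)                # band-limited noise carrier
        out += envelope * carrier                            # envelope modulates the noise
    return out / np.max(np.abs(out))                         # normalise to avoid clipping

# Example: a 12-channel (NV12) vocoded version of one second of stand-in audio at 16 kHz
fs = 16000
speech = np.random.randn(fs)      # placeholder for a recorded sentence
vocoded = noise_vocode(speech, fs, n_channels=12)
```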
77. Adaptive Intelligent User Interfaces With Emotion Recognition
Nasoz, Fatma (1 January 2004)
The focus of this dissertation is on creating adaptive intelligent user interfaces that facilitate enhanced natural communication during human-computer interaction by recognizing users' affective states (i.e., the emotions experienced by the users) and responding to those emotions by adapting to the current situation via an affective user model created for each user. Controlled experiments were designed and conducted in a laboratory environment and in a virtual reality environment to collect physiological signals from participants experiencing specific emotions. Algorithms (k-Nearest Neighbor [KNN], Discriminant Function Analysis [DFA], Marquardt-Backpropagation [MBP], and Resilient Backpropagation [RBP]) were implemented to analyze the collected signals and to find unique physiological patterns of emotions. An emotion elicitation experiment using movie clips was conducted to elicit sadness, anger, surprise, fear, frustration, and amusement from participants; overall, the three algorithms KNN, DFA, and MBP recognized these emotions with 72.3%, 75.0%, and 84.1% accuracy, respectively. A driving simulator experiment was conducted to elicit driving-related emotions and states (panic/fear, frustration/anger, and boredom/sleepiness); the KNN, MBP and RBP algorithms were used to classify the physiological signals by the corresponding emotions, with overall accuracies of 66.3%, 76.7% and 91.9%, respectively. Adaptation of the interface was designed to provide multi-modal feedback to the users about their current affective state and to respond to users' negative emotional states in order to decrease the possible negative impacts of those emotions. A Bayesian Belief Network formalization was employed to develop the user model, enabling the intelligent system to adapt appropriately to the current context and situation by considering user-dependent factors such as personality traits and preferences.
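For illustration, the sketch below shows the general shape of such a classification step on placeholder physiological feature vectors (random numbers, not the dissertation's data), using scikit-learn's k-nearest-neighbour classifier and linear discriminant analysis as a stand-in for DFA; the backpropagation variants are omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 60 trials, each summarised by three physiological features
# (e.g. galvanic skin response, heart rate, skin temperature) -- random, not real
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 3))
y = rng.choice(["panic", "frustration", "boredom"], size=60)

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "LDA (stand-in for DFA)": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    # Physiological channels live on very different scales, so standardise first
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```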
78. Applying Facial Emotion Recognition to Usability Evaluations to Reduce Analysis Time
Chao, Gavin Kam (1 June 2021)
Usability testing is an important part of product design that offers developers insight into a product’s ability to help users achieve their goals. Despite the usefulness of usability testing, human usability evaluations are costly and time-intensive processes. Developing methods to reduce the time and costs of usability evaluations is important for organizations to improve the usability of their products without expensive investments. One prospective solution to this is the application of facial emotion recognition to automate the collection of qualitative metrics normally identified by human usability evaluators.
In this paper, facial emotion recognition (FER) was applied to mock usability recordings to evaluate how well FER could parse moments of emotional significance. To determine the accuracy of FER in this context, the output of a FER Python library created by Justin Shenk was compared with data tags produced by human reporters. This study found that the facial emotion recognizer matched its output to fewer than 40% of the human-reported emotion timestamps, and fewer than 78% of the emotion data tags were recognized at all. The lack of consistency with the human-reported emotions found in this thesis makes it difficult to recommend FER over conventional human usability evaluators for parsing moments of semantic significance.
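As an illustration of how such timestamped machine output might be produced, the sketch below runs the open-source fer package over a recording sampled at roughly one frame per second. The file name, sampling rate and confidence threshold are assumptions, and matching the output against human tags would still require a tolerance window.

```python
import cv2
from fer import FER

detector = FER(mtcnn=True)                        # MTCNN face detector backend
cap = cv2.VideoCapture("usability_session.mp4")   # hypothetical recording
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

machine_tags = []                                 # (time in seconds, emotion, score)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:                 # sample roughly one frame per second
        results = detector.detect_emotions(frame)
        if results:
            emotions = results[0]["emotions"]     # dict mapping emotion name to score
            label, score = max(emotions.items(), key=lambda kv: kv[1])
            if score > 0.5:                       # assumed confidence threshold
                machine_tags.append((frame_idx / fps, label, score))
    frame_idx += 1
cap.release()

# machine_tags can then be matched against human-reported emotion timestamps,
# counting a hit when the two fall within some tolerance window of each other.
print(machine_tags[:5])
```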
79. Teaching Social-Emotional Learning to Children With Autism Using Animated Avatar Video Modeling
Davis, Emelie (12 December 2022)
People with a diagnosis of autism spectrum disorder (ASD) often have difficulties understanding or applying skills related to Social-Emotional Learning (SEL). A better understanding of SEL concepts is generally associated with more fulfilling connections with others and increased satisfaction in life. Since people with ASD tend to have greater success learning in structured environments, we created modules to teach these skills using Nearpod. The modules were built around videos of a person embodying a cartoon dog face using Animoji, for two reasons: the animation was meant to appeal to children, and the tool was user-friendly enough that teachers could potentially create or replicate this model. Along with these videos, the modules included multiple-choice questions about content from the lessons and about scenarios portraying different emotions. Participants came to a research lab, where they completed the modules at a computer while supervised by researchers. The results showed little to no trend between baseline and intervention sessions across the four participants. While Nearpod could be a useful tool for parents or teachers to create and present video modeling lessons, participants had difficulty navigating the modules without support from the researchers, owing to the length of the modules, distractibility, and difficulty using the technology. Some directions for future research may include delivering similar content using animated avatars through shorter, more child-friendly delivery methods.
80. How do voiceprints age?
Nachesa, Maya Konstantinovna (January 2023)
Voiceprints, like fingerprints, are a biometric. Where fingerprints record a person's unique pattern on their finger, voiceprints record what a person's voice "sounds like", abstracting away from what the person said. They have been used in speaker recognition, covering both verification and identification; in other words, they have been used to ask "is this speaker who they say they are?" or "who is this speaker?", respectively. However, people age, and so do their voices. Do voiceprints age, too? That is, can a person's voice change enough that, after a while, the original voiceprint can no longer be used to identify them? In this thesis, I use Swedish audio recordings of debate speeches from Riksdagen (the Swedish parliament) to test this idea; the answer influences how well we can search the database for previously unmarked speeches. I find that speaker verification performance decreases as the age gap between voiceprints increases, and that it decreases more strongly after roughly five years. Grouping the speakers into age groups spanning five years, I found that speaker verification performs best for those whose initial voiceprint was recorded between 29 and 33 years of age. Additionally, longer input speech provides higher-quality voiceprints, with performance improvements stagnating once voiceprints exceed 30 seconds. Finally, voiceprints for men age more strongly than those for women after roughly five years. I also investigated how emotions are encoded in voiceprints, since this could potentially impede speaker recognition. I found that it is possible to train a classifier to recognise emotions from voiceprints, and that this classifier does better when recognising emotions from known speakers. That is, emotions are encoded more characteristically per person than per emotion itself; as such, they are unlikely to interfere with speaker recognition.
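To give a concrete picture of the verification step, the sketch below compares two voiceprints with cosine similarity. Here extract_voiceprint() is a hypothetical stand-in for whatever speaker-embedding model produces the voiceprints (it just returns deterministic pseudo-random vectors), and the threshold is an assumption that would normally be tuned on held-out data.

```python
import numpy as np

def extract_voiceprint(wav_path):
    # Placeholder: a real system would run a pretrained speaker-embedding model here.
    # Deterministic pseudo-random vectors stand in for 192-dim ECAPA-style embeddings.
    rng = np.random.default_rng(sum(map(ord, wav_path)))
    return rng.standard_normal(192)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolment = extract_voiceprint("speaker_A_2010.wav")  # early recording (hypothetical file)
test = extract_voiceprint("speaker_A_2020.wav")       # same speaker, ten years later

score = cosine_similarity(enrolment, test)
THRESHOLD = 0.5                                       # would be tuned on a development set
decision = "same speaker" if score >= THRESHOLD else "different speaker"
print(decision, round(score, 3))

# Scoring many such pairs, binned by the age gap between recordings (e.g. 0-1, 1-5,
# 5-10 years), would show how verification performance degrades as voiceprints age.
```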