  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Federated Emotion Recognition with Physiological Signals - GSR

Hassani, Tara January 2021
Background: Human-computer interaction (HCI) is a daily source of emotion-triggering events in today's world, and researchers in this area have been exploring different techniques to enhance the emotional capability of computers. Due to privacy concerns and laboratories' limited capacity for gathering data from large numbers of users, the machine learning techniques commonly used in emotion recognition tasks lack adequate data. To address these issues, we propose a decentralized framework based on the Federated Learning architecture, in which raw data is collected and analyzed locally. The resulting local model updates are transferred to a server and aggregated into a global model for the emotion recognition task, using only Galvanic Skin Response (GSR) signals and their extracted features.
Objectives: This thesis explores how a CNN-based federated learning approach can be used for emotion recognition while protecting data privacy, and investigates whether it reaches the same performance as a basic centralized CNN.
Methods: To investigate the effect of the proposed method on emotion recognition, two architectures, centralized and federated, are designed with the CNN model, and their results are compared. The dataset used in our work is the CASE dataset. In the federated architecture, model weights rather than raw data are communicated to train the models; the centralized architecture trains directly on the raw data.
Results: The performance results indicate that the proposed model not only works well but also outperforms some related methods in valence accuracy. In addition, it can collect more data from various sources and better protect sensitive user data by supporting tighter privacy regulations. Physiological data is inherently anonymous, but when it is combined with other modalities such as video or voice, maintaining the same anonymity is challenging.
Conclusions: This thesis concludes that the federated CNN-based model can be used in emotion recognition systems and obtains the same accuracy as the centralized architecture. In classifying valence, it outperforms some state-of-the-art methods. Meanwhile, its federated nature provides better privacy protection and data diversity for the emotion recognition system.
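The weight-exchange scheme this abstract describes is essentially federated averaging. A minimal sketch, assuming a PyTorch CNN and one data loader per client; the function name and hyperparameters are illustrative assumptions, not the thesis's actual implementation:

```python
import copy
import torch

def federated_round(global_model, client_loaders, local_epochs=1, lr=1e-3):
    """One communication round of federated averaging: each client trains
    on its own raw GSR windows locally; only weights travel to the server."""
    loss_fn = torch.nn.CrossEntropyLoss()
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)               # start from global weights
        opt = torch.optim.Adam(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in loader:                           # raw signals never leave the client
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        client_states.append(local.state_dict())
    # Server-side aggregation: unweighted mean of the client parameters
    avg_state = {
        key: torch.stack([s[key].float() for s in client_states]).mean(dim=0)
        for key in client_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model
```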
92

The social and emotional experiences of black lesbian couples in Seshego Township, Limpopo Province

Maotoana, M. R. January 2019
Thesis (Ph. D. (Psychology)) -- University of Limpopo, 2019 / South Africa has constitutional protection for the human rights of all its citizens; however, black lesbians in South Africa suffer physical, emotional, and psychological abuse. This qualitative study aimed to elicit the social and emotional experiences of black lesbians living, as same-sex partners, in a township setting. The design of the study was exploratory in nature and used a purposive sample of ten couples (twenty women). The investigation was underpinned by Social Domain Theory (SDT), which allowed for an understanding of the judgements people make in different social settings. Semi-structured interviews were conducted with each couple in order to collect data. The data were analysed using thematic content analysis (TCA), which yielded ten themes: age and sexual orientation, suicide, education, lack of support, hate crimes, substance abuse, stigma, mental health, parenting, and discrimination. The discussion found that these themes echoed those in other local and international studies. However, corrective rape is peculiar to South Africa and was experienced by some participants in the study; in one case, a brother, with the mother's support, raped his sister repeatedly. This took place in a country with a progressive constitution and laws, yet social norms in the township allow black lesbian couples to suffer this type of abuse and to experience daily discrimination and stigmatisation. Recommendations included a more far-reaching quantitative study (as well as longitudinal studies) and more workshops and campaigns spreading knowledge about sexuality.
93

A study of facial expression recognition technologies on deaf adults and their children

Shaffer, Irene Rogan 30 June 2018
Facial and head movements have important linguistic roles in American Sign Language (ASL) and other sign languages and can often significantly alter the meaning or interpretation of what is being communicated. Technologies that enable accurate recognition of ASL linguistic markers could be a step toward greater independence and empowerment for the Deaf community. This study involved gathering over 2,000 photographs of five hearing subjects, five Deaf subjects, and five Child of Deaf Adults (CODA) subjects. Each subject produced the six universal emotional facial expressions: sad, happy, surprise, anger, fear, and disgust. In addition, each Deaf and CODA subject produced six different ASL linguistic facial expressions. A representative set of 750 photos was submitted to six different emotional facial expression recognition services, and the results were processed and compared across different facial expressions and subject groups (hearing, Deaf, CODA). Key observations from these results are presented. First, poor face detection rates are observed for Deaf subjects as compared to hearing and CODA subjects. Second, emotional facial expression recognition appears to be more accurate for Deaf and CODA subjects than for hearing subjects. Third, ASL linguistic markers, which are distinct from emotional expressions, are often misinterpreted as negative emotions by existing technologies. Possible implications of this misinterpretation are discussed, such as the problems that could arise for the Deaf community with increasing surveillance and use of automated facial analysis tools. Finally, an inclusive approach is suggested for incorporating ASL linguistic markers into existing facial expression recognition tools. Several considerations are given for constructing an unbiased database of the various ASL linguistic markers, including the types of subjects that should be photographed and the importance of including native ASL signers in the photo selection and classification process.
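The per-group comparisons this study reports (detection rates and recognition accuracy by subject group and service) can be tabulated with a short aggregation. A hedged pandas sketch; the file name and column names are illustrative assumptions, not the study's actual schema:

```python
import pandas as pd

# Hypothetical results table: one row per (photo, service) submission.
df = pd.read_csv("fer_service_results.csv")
# assumed columns: service, subject_group (hearing/Deaf/CODA),
#                  face_detected (bool), predicted, expected

# Face-detection rate per service and subject group
detection_rate = df.groupby(["service", "subject_group"])["face_detected"].mean()

# Recognition accuracy, computed only on photos where a face was detected
found = df[df["face_detected"]]
accuracy = (found["predicted"] == found["expected"]).groupby(
    [found["service"], found["subject_group"]]).mean()

print(detection_rate.unstack())
print(accuracy.unstack())
```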
94

AUTOMATED FACIAL EMOTION RECOGNITION: DEVELOPMENT AND APPLICATION TO HUMAN-ROBOT INTERACTION

Liu, Xiao 28 August 2019
No description available.
95

Neural correlates of emotion recognition in psychopaths: A systematic review

Norlin, Jenna, Saadula, Rendek January 2023
Science has recently begun to show interest in the mechanisms of the psychopathic brain, and current research points to structural and functional deficits in the brain regions of psychopaths. Psychopathy is a disorder distinguished by persistent antisocial behavior, emotional callousness, grandiose self-estimation, and lack of empathy; it is also a disorder that is hard to classify. Notably, the Hare Psychopathy Checklist-Revised (PCL-R) is the most common clinical rating scale used to diagnose psychopaths. This systematic review scrutinizes the literature on psychopathy, focusing on articles on the neural correlates of emotion recognition in psychopaths. Following the PRISMA guidelines, searches were conducted in MEDLINE (EBSCO), Web of Science, PubMed, and Scopus, and articles were selected and reviewed against predefined eligibility criteria. All selected articles reported significant results: psychopaths performed poorly on emotion recognition, and key regions such as the prefrontal cortex and amygdala showed impaired function. Notably, because the studies used different test methods, their results cannot be properly compared across studies; future studies should employ the same tests to provide stronger, comparable evidence. This systematic review was conducted to shed better light on the disorder.
96

The automatic recognition of emotions in speech

Manamela, Phuti John January 2020
Thesis (M.Sc. (Computer Science)) -- University of Limpopo, 2020 / Speech emotion recognition (SER) refers to technology that enables machines to detect and recognise human emotions from spoken phrases. In the literature, numerous attempts have been made to develop systems that can recognise human emotions from voice; however, not much work has been done in the context of South African indigenous languages. The aim of this study was to develop an SER system that can classify and recognise six basic human emotions (i.e., sadness, fear, anger, disgust, happiness, and neutral) from speech spoken in Sepedi (one of South Africa's official languages). One of the major challenges encountered in this study was the lack of a proper corpus of emotional speech. Therefore, three different Sepedi emotional speech corpora consisting of acted speech were developed: a Recorded-Sepedi corpus collected from recruited native speakers (9 participants), a TV broadcast corpus collected from professional Sepedi actors, and an Extended-Sepedi corpus combining the two. Features were extracted from the speech corpora and assembled into a data file, which was used to train four machine learning (ML) algorithms (i.e., SVM, KNN, MLP, and Auto-WEKA) using 10-fold cross-validation. Three experiments were then performed on the developed speech corpora and the performance of the algorithms was compared. The best results were achieved when Auto-WEKA was applied in all the experiments. Good results might have been expected for the TV broadcast corpus, since it was collected from professional actors; however, the results showed otherwise. From the findings of this study, one can conclude that there is no single exact technique for developing SER systems; it is a matter of experimenting and finding the best technique for the study at hand. The study also highlighted the scarcity of SER resources for South African indigenous languages and the vital role that dataset quality plays in the performance of SER systems. / National Research Foundation (NRF) and Telkom Centre of Excellence (CoE)
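The extract-features-then-classify pipeline the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: the MFCC features, placeholder file paths, and classifier settings are all assumptions.

```python
import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path):
    """Mean MFCCs as a fixed-length feature vector for one utterance."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Placeholder paths and labels standing in for an emotional speech corpus
wav_paths = ["corpus/ang_001.wav", "corpus/hap_001.wav"]
labels = ["anger", "happiness"]

X = np.array([utterance_features(p) for p in wav_paths])
y = np.array(labels)  # sadness, fear, anger, disgust, happiness, neutral

for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier()),
                  ("MLP", MLPClassifier(max_iter=1000))]:
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold validation as in the study
    print(f"{name}: {scores.mean():.3f}")
```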
97

Emotional Prosody in Adverse Acoustic Conditions: Investigating effects of emotional prosody and noise-vocoding on speech perception and emotion recognition

Ivarsson, Cecilia January 2022
Speech perception is a fundamental part of successful vocal communication, and through prosody we can communicate different emotions. The ability to recognize emotions is important in social interaction, and emotional prosody facilitates emotion recognition in vocal communication. Acoustic conditions are not always optimal, whether because of environmental disturbances or hearing loss, and when perceiving speech and recognizing emotions we make use of multimodal sources of information. Studying the effect of noise-vocoding on speech perception and emotion recognition can therefore increase our knowledge of these abilities, and the role of emotional prosody under adverse acoustic conditions is not widely explored. To explore this role, an online test was created in which 18 participants (8 women) listened to semantically neutral sentences with different emotions expressed in prosody, presented at five levels of noise-vocoding (NV1, NV3, NV6, NV12, and clear speech). The participants' task was to reproduce the spoken words and identify the expressed emotion (happy, surprised, angry, sad, or neutral). A reading span test was included to investigate any potential correlation between working memory capacity and the ability to recognize emotions in prosody. Statistical analysis suggests that speech perception can be facilitated by emotional prosody when sentences are noise-vocoded. Emotion recognition accuracy differed between emotions across the noise levels: recognition of anger was least affected by noise-vocoding, and recognition of sadness was most affected. Correlation analysis showed no significant relationship between working memory capacity and emotion recognition accuracy.
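Noise-vocoding, the degradation used in this study, replaces spectral fine structure with noise while preserving each frequency band's amplitude envelope; the NV1–NV12 conditions vary the number of bands. A minimal sketch of the manipulation, with band edges and filter order chosen as plausible assumptions (the abstract does not specify them; a sample rate of at least 16 kHz is assumed):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, sr, n_channels, fmin=100.0, fmax=7000.0):
    """N-channel noise vocoder: the speech envelope in each band
    modulates band-limited white noise, discarding fine structure."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges
    noise = np.random.randn(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))       # amplitude envelope of the band
        carrier = sosfiltfilt(sos, noise)      # noise limited to the same band
        out += envelope * carrier
    return out / np.max(np.abs(out))           # normalize to avoid clipping
```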
98

Syna: Emotion Recognition based on Spatio-Temporal Machine Learning

Shahrokhian, Daniyal January 2017
The analysis of emotions in humans is a field that has been studied for centuries. Over the last decade, multiple approaches to automatic emotion recognition have been developed to make this analysis autonomous. More specifically, facial expressions in the form of Action Units have until now been considered the most efficient way to recognize emotions. In recent years, applying machine learning to this task has shown outstanding improvements in the accuracy of the solutions: features can now be learned automatically from the training data instead of relying on expert domain knowledge and hand-crafted rules. In this thesis, I present Syna and DeepSyna, two models capable of classifying emotional expressions by using both spatial and temporal features. The experimental results demonstrate the effectiveness of Syna in constrained environments, while there is still room for improvement in both constrained and in-the-wild settings. DeepSyna addresses this problem but suffers from data scarcity and irrelevant transfer learning, which future work could resolve.
99

Adaptive Intelligent User Interfaces With Emotion Recognition

Nasoz, Fatma 01 January 2004
The focus of this dissertation is on creating adaptive intelligent user interfaces that facilitate more natural communication during human-computer interaction by recognizing users' affective states (i.e., emotions experienced by the users) and responding to those emotions by adapting to the current situation via an affective user model created for each user. Controlled experiments were designed and conducted in a laboratory environment and in a virtual reality environment to collect physiological signals from participants experiencing specific emotions. Algorithms (k-Nearest Neighbor [KNN], Discriminant Function Analysis [DFA], Marquardt-Backpropagation [MBP], and Resilient Backpropagation [RBP]) were implemented to analyze the collected signals and to find unique physiological patterns of emotions. An emotion elicitation experiment with movie clips was conducted to elicit sadness, anger, surprise, fear, frustration, and amusement; overall, KNN, DFA, and MBP recognized these emotions with 72.3%, 75.0%, and 84.1% accuracy, respectively. A driving simulator experiment was conducted to elicit driving-related emotions and states (panic/fear, frustration/anger, and boredom/sleepiness); the KNN, MBP, and RBP algorithms classified the physiological signals by the corresponding emotions with 66.3%, 76.7%, and 91.9% accuracy, respectively. Adaptation of the interface was designed to provide multi-modal feedback to users about their current affective state and to respond to negative emotional states in order to decrease their possible negative impacts. A Bayesian Belief Network formalization was employed to develop the user model, enabling the intelligent system to adapt appropriately to the current context and situation by considering user-dependent factors such as personality traits and preferences.
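Of the classifiers listed, KNN is the simplest to illustrate. A hedged scikit-learn sketch of classifying per-trial physiological feature vectors into the three driving-related states; the feature matrix here is random placeholder data, not the dissertation's signals:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per elicitation trial, with summary
# statistics of physiological channels (e.g., GSR, heart rate, temperature).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))                       # placeholder features
y = rng.choice(["panic/fear", "frustration/anger", "boredom/sleepiness"], 120)

# Standardize features so distance-based KNN weighs channels equally
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(knn, X, y, cv=10).mean())     # averaged held-out accuracy
```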
100

Applying Facial Emotion Recognition to Usability Evaluations to Reduce Analysis Time

Chao, Gavin Kam 01 June 2021
Usability testing is an important part of product design that offers developers insight into a product's ability to help users achieve their goals. Despite its usefulness, human usability evaluation is a costly and time-intensive process, so developing methods to reduce its time and cost is important for organizations seeking to improve the usability of their products without expensive investments. One prospective solution is applying facial emotion recognition to automate the collection of qualitative metrics normally identified by human usability evaluators. In this paper, facial emotion recognition (FER) was applied to mock usability recordings to evaluate how well FER could parse moments of emotional significance. To determine the accuracy of FER in this context, the FER Python library created by Justin Shenk was compared with data tags produced by human reporters. The study found that the facial emotion recognizer matched its output to less than 40% of the human-reported emotion timestamps, and less than 78% of the emotion data tags were recognized at all. This lack of consistency with human-reported emotions makes it difficult to recommend FER over conventional human usability evaluators for parsing moments of semantic significance.
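The FER library named in the abstract is available on PyPI (`pip install fer`). A minimal sketch of producing timestamped emotion labels from a recording, which could then be matched against human-reported tags; the file name and the once-per-second sampling are assumptions, not the thesis's procedure:

```python
import cv2
from fer import FER  # Justin Shenk's FER library

detector = FER(mtcnn=True)  # MTCNN face detection for better localization

# Sample one frame per second and log the dominant emotion with a timestamp.
video = cv2.VideoCapture("usability_session.mp4")   # hypothetical recording
fps = video.get(cv2.CAP_PROP_FPS) or 30.0
detections, frame_idx = [], 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:
        emotion, score = detector.top_emotion(frame)  # e.g. ("happy", 0.87)
        if emotion is not None:                       # None when no face is found
            detections.append((frame_idx / fps, emotion, score))
    frame_idx += 1
video.release()
print(detections[:5])  # (seconds, label, confidence) tuples for comparison
```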
