  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Feature Fusion Deep Learning Method for Video and Audio Based Emotion Recognition

Yanan Song (11825003) 20 December 2021 (has links)
In this thesis, we propose a deep-learning-based emotion recognition system that improves the classification success rate. We first use transfer learning to extract visual features and Mel-frequency cepstral coefficients (MFCC) to extract audio features, and then apply recurrent neural networks (RNN) with an attention mechanism to process the sequential inputs. The outputs of both channels are then fused in a concatenation layer, which is processed with batch normalization to reduce internal covariate shift. Finally, the classification result is obtained from the softmax layer. In our experiments, the video and audio subsystems achieve 78% and 77% accuracy respectively, while the feature fusion system combining video and audio achieves 92% accuracy on the RAVDESS dataset for eight emotion classes. Our proposed feature fusion system outperforms conventional methods in classification performance.
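The fusion head described in this abstract (concatenating the two channel outputs, batch-normalizing, then classifying with softmax) can be sketched as below. This is a minimal NumPy illustration under assumed feature dimensions and random weights, not the author's actual implementation:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each feature across the batch (zero mean, unit variance),
    # mirroring the batch normalization applied after the concatenate layer.
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fuse_and_classify(video_feat, audio_feat, W, b):
    # Late feature fusion: concatenate the video and audio channel outputs,
    # batch-normalize, then apply a softmax classification layer.
    fused = np.concatenate([video_feat, audio_feat], axis=1)
    fused = batch_norm(fused)
    return softmax(fused @ W + b)

rng = np.random.default_rng(0)
video = rng.normal(size=(4, 128))    # 4 clips, hypothetical 128-d video features
audio = rng.normal(size=(4, 64))     # matching hypothetical 64-d audio (MFCC-derived)
W = rng.normal(size=(192, 8)) * 0.1  # 8 emotion classes, as in RAVDESS
b = np.zeros(8)
probs = fuse_and_classify(video, audio, W, b)
print(probs.shape)
```

Each row of `probs` is a probability distribution over the eight emotion classes; in a trained system, `W` and `b` would of course be learned rather than random.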
132

The ability of four-year-old children to recognize basic emotions represented by graphic symbols

Visser, Naomi Aletta 16 November 2007 (has links)
Emotions are an essential part of development. There is evidence that young children understand and express emotions through facial expressions. Correct identification and recognition of facial expressions is important to facilitate communication and social interaction. Emotions are represented in a wide variety of symbol sets and systems in Augmentative and Alternative Communication (AAC) to enable a person with little or no functional speech to express emotion. These symbols consist of a facial expression with facial features that distinguish between emotions. Despite the importance of expressing and understanding emotions in communication, there is limited research on young children's ability to recognize emotions represented by graphic symbols. The purpose of this study was to investigate the ability of typically developing four-year-old children to recognize basic emotions represented by graphic symbols. Before determining their ability to recognize emotions in graphic symbols, their ability to understand emotions had to be established. Participants were then required to recognize four basic emotions (happy, sad, afraid, angry) represented by various graphic symbols taken from PCS (Johnson, 1981), PICSYMS (Carlson, 1985) and Makaton (Grove & Walker, 1990). The purpose was to determine which graphic symbol the children recognized as a representation of each emotion. Results showed that happy was the easiest emotion to recognize, which might be because it was the only emotion in the pleasure dimension; sad, afraid and angry were more difficult to recognize, which might be because they fall in the displeasure dimension. The findings also make it evident that the facial features in a graphic symbol play an important part in conveying a specific emotion. The results obtained are discussed in relation to previous findings, and recommendations for future use are made.
/ Dissertation (MA (Augmentative and Alternative Communication))--University of Pretoria, 2008. / Centre for Augmentative and Alternative Communication (CAAC) / MA / unrestricted
133

Personalized physiological-based emotion recognition and implementation on hardware / Reconnaissance des émotions personnalisée à partir des signaux physiologiques et implémentation sur matériel

Yang, Wenlu 27 February 2018 (has links)
Cette thèse étudie la reconnaissance des émotions à partir de signaux physiologiques dans le contexte des jeux vidéo et la faisabilité de sa mise en œuvre sur un système embarqué. Les défis suivants sont abordés : la relation entre les états émotionnels et les réponses physiologiques dans le contexte du jeu, les variabilités individuelles des réponses psycho-physiologiques et les problèmes de mise en œuvre sur un système embarqué. Les contributions majeures de cette thèse sont les suivantes. Premièrement, nous construisons une base de données multimodale dans le cadre de l'Affective Gaming (DAG). Cette base de données contient plusieurs mesures concernant les modalités objectives telles que les signaux physiologiques de joueurs et des évaluations subjectives sur des phases de jeu. A l'aide de cette base, nous présentons une série d'analyses effectuées pour la détection des moments marquant émotionnellement et la classification des émotions à l'aide de diverses méthodes d'apprentissage automatique. Deuxièmement, nous étudions la variabilité individuelle de la réponse émotionnelle et proposons un modèle basé sur un groupe de joueurs déterminé par un clustering selon un ensemble de traits physiologiques pertinents. Nos travaux mettent en avant le fait que le modèle proposé, basé sur un tel groupe personnalisé, est plus performant qu'un modèle général ou qu'un modèle spécifique à un utilisateur. Troisièmement, nous appliquons la méthode proposée sur un système ARM A9 et montrons que la méthode proposée peut répondre à l'exigence de temps de calcul. / This thesis investigates physiological-based emotion recognition in a digital game context and the feasibility of implementing the model on an embedded system. The following challenges are addressed: the relationship between emotional states and physiological responses in the game context, individual variability of the psychophysiological responses, and issues of implementation on an embedded system.
The major contributions of this thesis are the following. First, we construct a multi-modal Database for Affective Gaming (DAG). This database contains multiple measurements concerning objective modalities: physiological signals (ECG, EDA, EMG, respiration), screen recordings and recordings of the player's face, as well as subjective assessments at both the game-event and match level. We present statistics of the database and run a series of analyses, using various machine learning methods, on issues such as emotional-moment detection, emotion classification and factors influencing the overall game experience. Second, we investigate the individual variability in the collected data by creating a user-specific model and analyzing the optimal feature set for each individual. We propose a personalized group-based model that forms groups of similar users by clustering on physiological traits deduced from the optimal feature set, and show that this personalized group-based model performs better than both the general model and the user-specific model. Third, we implement the proposed method on an ARM A9 system and show that it can meet the computation-time requirement.
134

Facial Emotion Recognition using Convolutional Neural Network with Multiclass Classification and Bayesian Optimization for Hyper Parameter Tuning.

Bejjagam, Lokesh, Chakradhara, Reshmi January 2022 (has links)
This thesis develops a deep learning model for facial emotion recognition using a convolutional neural network with multiclass classification, together with hyperparameter tuning by Bayesian optimization to improve the model's performance. The developed model recognizes seven basic emotions (angry, disgust, fear, happy, neutral, sad and surprise) in images of human faces, using the FER-2013 dataset.
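As a rough illustration of the Bayesian optimization step this abstract mentions — not the thesis's actual setup — the sketch below fits a small Gaussian-process surrogate to a toy "validation accuracy" curve over the log learning rate, and picks each next evaluation by expected improvement. The objective function, search range and kernel lengthscale are all invented for the example:

```python
import numpy as np
from math import erf, sqrt, pi, exp

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # GP posterior mean and std at candidate points Xs given data (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks).clip(min=1e-12)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI acquisition for maximization: trade off mean gain vs uncertainty.
    z = (mu - best) / sigma
    cdf = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + sigma * pdf

def objective(lr):
    # Stand-in for validation accuracy as a function of log10 learning rate;
    # a real run would train and evaluate the CNN here. Peaks at lr = 1e-3.
    return exp(-(lr + 3.0) ** 2)

grid = np.linspace(-5, -1, 200)   # log10 learning-rate search space
X = np.array([-5.0, -1.0])        # two initial evaluations at the range ends
y = np.array([objective(x) for x in X])
for _ in range(8):
    mu, sigma = gp_posterior(X, y, grid)
    nxt = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.append(X, nxt)
    y = np.append(y, objective(nxt))
print(X[np.argmax(y)])  # best log10 learning rate found
```

In practice one would use a library such as scikit-optimize or Optuna rather than hand-rolling the surrogate, but the loop above is the core idea: fit a probabilistic model of the score, then evaluate where expected improvement is highest.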
135

Emotion Recognition using Spatiotemporal Analysis of Electroencephalographic Signals

Aspiras, Theus H. 21 August 2012 (has links)
No description available.
136

Processing of Spontaneous Emotional Responses in Adolescents and Adults with Autism Spectrum Disorders: Effect of Stimulus Type

Cassidy, S., Mitchell, Peter, Chapman, P., Ropar, D. 04 June 2020 (has links)
Yes / Recent research has shown that adults with autism spectrum disorders (ASD) have difficulty interpreting others' emotional responses in order to work out what actually happened to them. It is unclear what underlies this difficulty: important cues may be missed in fast-paced dynamic stimuli, or spontaneous emotional responses may be too complex for those with ASD to recognise successfully. To explore these possibilities, 17 adolescents and adults with ASD and 17 neurotypical controls viewed 21 videos and pictures of people's emotional responses to gifts (chocolate, a handmade novelty or Monopoly money), then inferred what gift the person received and the emotion the person expressed, while eye movements were measured. Participants with ASD were significantly more accurate at distinguishing who received a chocolate or homemade gift from static (compared to dynamic) stimuli, but significantly less accurate when inferring who received Monopoly money from static (compared to dynamic) stimuli. Both groups made similar emotion attributions to each gift in both conditions (positive for chocolate, feigned positive for homemade and confused for Monopoly money). Participants with ASD made only marginally significantly fewer fixations to the eyes and face of the person than typical controls in both conditions. Results suggest adolescents and adults with ASD can distinguish subtle emotion cues for certain emotions (genuine from feigned positive) when given sufficient processing time; however, dynamic cues are informative for recognising emotion blends (e.g. smiling in confusion). This indicates difficulties in processing complex emotional responses in ASD.
137

生理訊號監控應用於智慧生活環境之研究 / Application of physiological signal monitoring in smart living space

徐世平, Shu, Shih Ping Unknown Date (has links)
在心理與認知科學領域中常使用生理訊號來測量受試者的反應，並反映出人們的心理狀態起伏。本研究探討應用生理訊號識別情緒之可能性，以及將生理訊號與其他情緒辨識結果整合之方法。 在過去的研究中，生理與心理的對應關係，並無太多著墨，可稱為一黑盒子(black-box)的方式。並因上述類別式實驗長時間收集的生理訊號，對於誘發特定情緒反應之因果(cause-effect)並未進行深入的討論。本研究由於實驗的設計與選用材料之故，可一探純粹由刺激引發的情緒下情緒在生理與心理之因果關係，在輸入輸出對應間能有較明確的解釋。 本研究中嘗試監測較短時間(<10sec)的生理資訊，期望以一近乎即時的方式判讀並回饋使用者適當的資訊，對於生理訊號與情緒狀態的關聯性研究，將以IAPS(International Affective Picture System) 素材為來源，進行較過去嚴謹的實驗設計與程序，以探究生理訊號特徵如何應用於情緒分類。 雖然本研究以維度式情緒學說為理論基礎，然考慮到實際應用情境，若有其他以類別式的理論為基礎之系統，如何整合維度式與類別式兩類的資訊，提出可行的轉換方式，亦是本研究的主要課題。 / Physiological signals can be used to measure a subject's response to a particular stimulus and to infer the emotional status accordingly. This research investigates the feasibility of emotion recognition using physiological measurements in a smart living space. It also addresses important issues regarding the integration of classification results from multiple modalities. Most past research regarded the recognition of emotion from physiological data as a mapping mechanism that can be learned from training data. Those data were collected over long periods of time and cannot model the immediate cause-effect relationship effectively. Our research employs a more rigorous experiment design to study the relationship between a specific physiological signal and the emotional state. The newly designed procedure enables us to identify and validate the discriminating power of each type of physiological signal in recognizing emotion. Our research monitors short-term (< 10 s) physiological signals. We use the IAPS (International Affective Picture System) as our experiment material. Physiological data were collected during the presentation of various genres of pictures. With such controlled experiments, we expect the cause-effect relation to be better explained than in previous black-box approaches. Our research employs a dimensional approach to emotion modeling. However, emotion recognition based on audio and/or visual cues mostly adopts categorical methods (or basic emotion types).
It becomes necessary to integrate results from these different modalities. Toward this end, we have also developed a mapping process to convert the result encoded in dimensional format into categorical data.
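A conversion from a dimensional representation to categorical labels, as this integration step requires, can be as simple as a quadrant rule over the valence-arousal plane. The rule below is a hypothetical minimal mapping for illustration, not the mapping process developed in the thesis:

```python
def valence_arousal_to_category(valence, arousal):
    """Map a point in the (valence, arousal) plane, each in [-1, 1],
    to one of four basic emotion categories by quadrant.
    Hypothetical rule; real dimensional-to-categorical mappings
    are far more fine-grained."""
    if valence >= 0:
        return "joy" if arousal >= 0 else "contentment"
    return "anger" if arousal >= 0 else "sadness"

print(valence_arousal_to_category(0.7, 0.5))   # high valence, high arousal
print(valence_arousal_to_category(-0.6, 0.8))  # low valence, high arousal
```

With such a rule, a dimensional physiological classifier and a categorical audio-visual classifier can vote over the same label set, which is the essence of the integration problem described above.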
138

Modélisation, détection et annotation des états émotionnels à l'aide d'un espace vectoriel multidimensionnel / Modeling, detection and annotation of emotional states using an algebraic multidimensional vector space

Tayari Meftah, Imen 12 April 2013 (has links)
Notre travail s'inscrit dans le domaine de l'affective computing et plus précisément la modélisation, détection et annotation des émotions. L'objectif est d'étudier, d'identifier et de modéliser les émotions afin d'assurer l’échange entre applications multimodales. Notre contribution s'axe donc sur trois points. En premier lieu, nous présentons une nouvelle vision de la modélisation des états émotionnels basée sur un modèle générique pour la représentation et l'échange des émotions entre applications multimodales. Il s'agit d'un modèle de représentation hiérarchique composé de trois couches distinctes : la couche psychologique, la couche de calcul formel et la couche langage. Ce modèle permet la représentation d'une infinité d'émotions et la modélisation aussi bien des émotions de base comme la colère, la tristesse et la peur que les émotions complexes comme les émotions simulées et masquées. Le second point de notre contribution est axé sur une approche monomodale de reconnaissance des émotions fondée sur l'analyse des signaux physiologiques. L'algorithme de reconnaissance des émotions s'appuie à la fois sur l'application des techniques de traitement du signal, sur une classification par plus proche voisins et également sur notre modèle multidimensionnel de représentation des émotions. Notre troisième contribution porte sur une approche multimodale de reconnaissance des émotions. Cette approche de traitement des données conduit à une génération d'information de meilleure qualité et plus fiable que celle obtenue à partir d'une seule modalité. Les résultats expérimentaux montrent une amélioration significative des taux de reconnaissance des huit émotions par rapport aux résultats obtenus avec l'approche monomodale. Enfin nous avons intégré notre travail dans une application de détection de la dépression des personnes âgées dans un habitat intelligent. 
Nous avons utilisé les signaux physiologiques recueillis à partir de différents capteurs installés dans l'habitat pour estimer l'état affectif de la personne concernée. / This study focuses on affective computing in both the modeling and the detection of emotions. Our contributions concern three points. First, we present a generic solution for emotional data exchange between heterogeneous multimodal applications. This proposal is based on a new algebraic representation of emotions and is composed of three distinct layers: the psychological layer, the formal computational layer and the language layer. The first layer represents the psychological theory adopted in our approach, which is Plutchik's theory. The second layer is based on a formal multidimensional model matching the psychological approach of the previous layer. The final layer uses XML to generate the final emotional data to be transferred through the network. In this study we demonstrate the effectiveness of our model in representing an infinity of emotions and in modeling not only the basic emotions (e.g., anger, sadness, fear) but also complex emotions like simulated and masked emotions. Moreover, our proposal provides powerful mathematical tools for the analysis and processing of these emotions, and it enables the exchange of emotional states regardless of the modalities and sensors used in the detection step. The second contribution consists of a new monomodal method of recognizing emotional states from physiological signals. The proposed method uses signal processing techniques to analyze physiological signals and consists of two main steps: the training step and the detection step. In the first step, our algorithm extracts the features of emotion from the data to generate an emotion training database. Then, in the second step, we apply the k-nearest-neighbor classifier to assign the predefined classes to instances in the test set.
The final result is an eight-component vector representing the felt emotion in a multidimensional space. The third contribution focuses on a multimodal approach to emotion recognition that integrates information coming from different cues and modalities. It is based on our proposed formal multidimensional model. Experimental results show how the proposed approach increases recognition rates in comparison with the unimodal approach. Finally, we integrated our study into an automatic tool for prevention and early detection of depression using physiological sensors. It consists of two main steps: the capture of physiological features and the analysis of emotional information. The first step detects emotions felt throughout the day; the second step analyzes this emotional information to prevent depression.
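The detection step described in this abstract — k-nearest-neighbor assignment of predefined classes to test instances — can be sketched as follows. The two-dimensional "physiological" features and class labels are invented for illustration; the thesis works with richer feature vectors extracted by signal processing:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # Euclidean k-nearest-neighbour vote: the test instance is assigned
    # the majority class among its k closest training feature vectors.
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Toy training set: hypothetical normalized physiological statistics
# (e.g. heart-rate and skin-conductance features) with emotion labels.
train_X = np.array([[0.90, 0.80], [0.80, 0.90], [0.85, 0.75],
                    [0.10, 0.20], [0.20, 0.10], [0.15, 0.25]])
train_y = np.array(["fear", "fear", "fear", "joy", "joy", "joy"])

print(knn_predict(train_X, train_y, np.array([0.82, 0.80])))
```

k-NN makes no distributional assumption about the feature space, which suits a setting where each subject's physiological baseline differs; the cost is that every prediction scans the whole training set.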
139

Projevy emocí ve tváři / Facial expressions of emotions

Zajícová, Markéta January 2016 (has links)
Title: Facial Expressions of Emotions Author: Bc. Markéta Zajícová Department: Department of Psychology, Charles University in Prague, Faculty of Arts Supervisor: doc. PhDr. MUDr. Mgr. Radvan Bahbouh, Ph.D. Abstract: The thesis is dedicated to facial expressions of emotions. It begins with a brief introduction to the topic of emotions as one of the cognitive functions, providing a definition of the term, a classification of emotions and their psychopathology, and briefly summarizing the various theories of emotions. The greater part of the theoretical section is devoted to the basic emotions and their manifestation in the face, as well as the ability to recognize and imitate them. The theoretical part closes with the topic of emotional intelligence as a unifying element that highlights the importance of this issue. The empirical part is primarily focused on two abilities related to facial expressions of emotions, specifically their recognition and production, and then links these abilities with additional characteristics such as gender, education and their self-estimation. The main finding of this thesis is that there is a statistically significant relationship (ρ=0.35, α=0.05) between emotion recognition and production. Key words: Basic Emotion, Facial Expressions of Emotions, Emotion Recognition,...
140

Ansiedade de performance musical, reconhecimento de expressões faciais e ocitocina / Musical performance anxiety, facial emotion recognition and oxytocin

Sabino, Alini Daniéli Viana 03 May 2019 (has links)
A Ansiedade de Performance Musical (APM) é considerada uma condição caracterizada por apreensão persistente e intensa diante da apresentação musical pública, desproporcional ao nível de aptidão, treino e preparo do músico. Os sintomas ocorrem em uma escala de gravidade contínua que em seu extremo afeta a aptidão musical devido a sintomas ao nível físico, comportamental e cognitivo, além de déficits no processamento cognitivo e cognição social, em especial na capacidade de reconhecimento de expressões faciais de emoção (REFE). Assim, intervenções que possam corrigir esses vieses com eficácia são necessárias. Nesse sentido, os objetivos dos estudos que compõem esta tese são: a) avaliar o REFE em músicos com diferentes níveis de APM; b) realizar uma revisão sistemática da literatura de forma a trazer evidências sobre os efeitos das substâncias ansiolíticas no REFE em indivíduos saudáveis; e c) conduzir um ensaio clínico, cross-over, randomizado, duplo-cego e controlado por placebo para testar o efeito da OCT em músicos com alto/baixo nível de APM no REFE, nos indicadores de humor/ansiedade e na cognição negativa. Método: Para se atender ao primeiro objetivo realizou-se um estudo observacional, transversal, com 150 músicos de ambos os sexos, de diferentes estilos musicais, os quais realizaram uma tarefa de REFE, após serem classificados quanto aos níveis de APM. Para atender-se o segundo objetivo conduziu-se uma revisão sistemática da literatura seguindo-se as diretrizes do PRISMA e do Cochrane Handbook for Systematic Reviews of Interventions. Por fim, para alcançar o terceiro objetivo, 43 músicos do sexo masculino, de diferentes estilos musicais participaram de um ensaio clínico, randomizado, cross-over, controlado por placebo, no qual testou-se a eficácia de 24 UI de OCT intranasal.
Resultados: Os resultados evidenciaram que os músicos com altos níveis de APM apresentam um prejuízo global no REFE, expresso sobretudo pela dificuldade no reconhecimento adequado da emoção alegria, a qual está associada aos sinais de aprovação social. A revisão da literatura evidenciou que poucas substâncias foram testadas até o momento, e que as alterações no REFE foram específicas e dependentes do mecanismo de ação da substância no sistema nervoso central, da dose e da forma de administração. O ensaio clínico apontou uma melhora no reconhecimento da emoção alegria, somente em músicos com altos níveis de APM, após o uso agudo da OCT. Conclusão: O REFE mostrou-se alterado de forma específica em músicos com altos níveis de APM, prejuízos que podem ser corrigidos através do uso da OCT intranasal, a qual desponta como uma substância promissora para o uso clínico. / Musical Performance Anxiety (MPA) is considered a condition characterized by persistent and intense apprehension in circumstances involving public musical presentation, disproportionate to the musician's aptitude level, training and preparation. The symptoms occur on a continuous severity scale that, at its extreme, affects musical aptitude due to symptoms at the physical, behavioral and cognitive levels, as well as interfering with cognitive processing and social cognition, especially with the ability of facial emotion recognition (FER). Thus, interventions that can effectively correct these deviations are necessary. Therefore, the aims of the studies that compose this thesis are: a) to analyze FER in musicians with different levels of MPA; b) to carry out a systematic review of the literature in order to present evidence about the effects of anxiolytic substances on FER in healthy individuals; and c) to conduct a randomized, double-blind, placebo-controlled, cross-over clinical trial to test the effect of OT in musicians with high/low MPA levels on FER, mood/anxiety indicators and negative cognition.
Methods: To achieve the first aim, a cross-sectional, observational study was conducted with 150 musicians of both sexes and different musical styles, who performed a FER task after being classified according to their MPA levels. For the second aim, a systematic literature review was carried out in accordance with the PRISMA guidelines and the Cochrane Handbook for Systematic Reviews of Interventions. Finally, for the third aim, 43 male musicians of different musical styles participated in a randomized, placebo-controlled, cross-over clinical trial in which the efficacy of 24 IU of intranasal OT was tested. Results: The results showed that musicians with high levels of MPA present a global impairment in FER, expressed mainly by difficulty in appropriately recognizing the emotion of joy, which is associated with signs of social approval. The literature review showed that few substances have been tested so far, and that the changes in FER were specific and dependent on the substance's mechanism of action in the central nervous system, its dose and its form of administration. The clinical trial showed an improvement in the recognition of the emotion of joy, only in musicians with high levels of MPA, after acute OT use. Conclusion: FER was specifically altered in musicians with high levels of MPA; these alterations can be corrected with the use of intranasal OT, which emerges as a promising substance for clinical use.
