111 |
Processing of Spontaneous Emotional Responses in Adolescents and Adults with Autism Spectrum Disorders: Effect of Stimulus Type. Cassidy, S., Mitchell, Peter, Chapman, P., Ropar, D. 04 June 2020
Recent research has shown that adults with autism spectrum disorders (ASD) have difficulty interpreting others' emotional responses in order to work out what actually happened to them. It is unclear what underlies this difficulty: important cues may be missed in fast-paced dynamic stimuli, or spontaneous emotional responses may be too complex for those with ASD to recognise successfully. To explore these possibilities, 17 adolescents and adults with ASD and 17 neurotypical controls viewed 21 videos and pictures of people's emotional responses to gifts (chocolate, a handmade novelty or Monopoly money), then inferred what gift the person received and the emotion the person expressed, while eye movements were measured. Participants with ASD were significantly more accurate at distinguishing who received a chocolate or homemade gift from static (compared to dynamic) stimuli, but significantly less accurate when inferring who received Monopoly money from static (compared to dynamic) stimuli. Both groups made similar emotion attributions to each gift in both conditions (positive for chocolate, feigned positive for homemade and confused for Monopoly money). Participants with ASD made only marginally fewer fixations to the eyes and to the face of the person than typical controls in both conditions. The results suggest that adolescents and adults with ASD can distinguish subtle cues for certain emotions (genuine from feigned positive) when given sufficient processing time, whereas dynamic cues are informative for recognising emotion blends (e.g. smiling in confusion). This indicates difficulties processing complex emotional responses in ASD.
|
112 |
生理訊號監控應用於智慧生活環境之研究 / Application of physiological signal monitoring in smart living space. Shu, Shih Ping. Unknown Date
Physiological signals can be used to measure a subject's response to a particular stimulus and to infer the emotional state accordingly. This research investigates the feasibility of emotion recognition from physiological measurements in a smart living space. It also addresses important issues regarding the integration of classification results from multiple modalities.
Most past research regarded the recognition of emotion from physiological data as a mapping mechanism that can be learned from training data. These data were collected over long periods of time and cannot model the immediate cause-effect relationship effectively. Our research employs a more rigorous experimental design to study the relationship between a specific physiological signal and the emotional state. The newly designed procedure enables us to identify and validate the discriminating power of each type of physiological signal in recognizing emotion.
Our research monitors short-term (< 10 s) physiological signals, aiming to interpret them in near real time and give the user appropriate feedback. We use the IAPS (International Affective Picture System) as our experimental material; physiological data were collected during the presentation of pictures of various genres. With such controlled experiments, we expect the cause-effect relation to be better explained than in previous black-box approaches.
Our research employs a dimensional approach to emotion modeling. However, emotion recognition based on audio and/or visual cues mostly adopts a categorical method (basic emotion types), so it becomes necessary to integrate results from these different modalities. Toward this end, we have also developed a mapping process that converts results encoded in the dimensional format into categorical data.
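The abstract does not give the conversion rule itself. A minimal sketch of one plausible dimensional-to-categorical mapping is a nearest-prototype lookup in a valence-arousal plane; the prototype coordinates and category names below are invented for illustration, not taken from the thesis:

```python
import math

# Hypothetical prototype positions of emotion categories in a
# valence-arousal plane (coordinates are assumptions for illustration).
PROTOTYPES = {
    "joy":     (0.8,  0.5),
    "anger":   (-0.6, 0.7),
    "sadness": (-0.7, -0.4),
    "calm":    (0.5, -0.5),
}

def dimensional_to_categorical(valence, arousal):
    """Map a (valence, arousal) point to the nearest category prototype."""
    return min(PROTOTYPES,
               key=lambda c: math.dist((valence, arousal), PROTOTYPES[c]))

print(dimensional_to_categorical(0.7, 0.4))  # -> joy
```

A real conversion would need prototypes calibrated against the categorical system being integrated, but the nearest-prototype idea captures the basic shape of such a mapping.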
|
113 |
Modélisation, détection et annotation des états émotionnels à l'aide d'un espace vectoriel multidimensionnel / Modeling, detection and annotation of emotional states using an algebraic multidimensional vector space. Tayari Meftah, Imen. 12 April 2013
This study focuses on affective computing, in both the modeling and the detection of emotions. Our contributions concern three points. First, we present a generic solution for emotional data exchange between heterogeneous multimodal applications. This proposal is based on a new algebraic representation of emotions and is composed of three distinct layers: the psychological layer, the formal computational layer and the language layer. The first layer represents the psychological theory adopted in our approach, namely Plutchik's theory. The second layer is based on a formal multidimensional model that matches the psychological approach of the previous layer. The final layer uses XML to generate the emotional data to be transferred through the network. We demonstrate the effectiveness of our model in representing an infinity of emotions and in modeling not only the basic emotions (e.g., anger, sadness, fear) but also complex emotions such as simulated and masked emotions. Moreover, our proposal provides powerful mathematical tools for the analysis and processing of these emotions, and it enables the exchange of emotional states regardless of the modalities and sensors used in the detection step. The second contribution is a new monomodal method for recognizing emotional states from physiological signals. The proposed method uses signal-processing techniques to analyze the physiological signals and consists of two main steps: a training step and a detection step. In the first step, our algorithm extracts emotion features from the data to build an emotion training database; in the second step, we apply a k-nearest-neighbor classifier to assign the predefined classes to instances in the test set.
The final result is an eight-component vector representing the felt emotion in the multidimensional space. The third contribution is a multimodal approach to emotion recognition that integrates information coming from different cues and modalities, based on our formal multidimensional model. Experimental results show that the proposed approach increases recognition rates in comparison with the unimodal approach. Finally, we integrated our work into an automatic tool for the prevention and early detection of depression using physiological sensors. It consists of two main steps: the capture of physiological features and the analysis of emotional information. The first step detects the emotions felt throughout the day; the second analyzes this emotional information to prevent depression.
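The k-nearest-neighbor step described above can be sketched in a few lines. The feature names and values below are invented toy data, not the thesis's actual physiological features or training set:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    neighbours = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy training set: (mean heart rate, skin conductance) -> felt emotion.
# The values are assumptions for illustration only.
train = [
    ((62, 0.10), "calm"), ((64, 0.20), "calm"), ((66, 0.15), "calm"),
    ((95, 0.90), "fear"), ((98, 0.80), "fear"), ((92, 0.85), "fear"),
]
print(knn_predict(train, (90, 0.7)))  # -> fear
```

In practice the features would be extracted per signal type (as validated in the training step), and the class labels would correspond to the predefined emotion classes.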
|
114 |
Projevy emocí ve tváři / Facial expressions of emotions. Zajícová, Markéta. January 2016
Title: Facial Expressions of Emotions. Author: Bc. Markéta Zajícová. Department: Department of Psychology, Charles University in Prague, Faculty of Arts. Supervisor: doc. PhDr. MUDr. Mgr. Radvan Bahbouh, Ph.D. Abstract: The thesis is dedicated to facial expressions of emotions. It begins with a brief introduction to emotions as one of the cognitive functions, giving a definition of the term, a classification of emotions and their psychopathology, and a short summary of the main theories of emotion. The greater part of the theoretical section is devoted to basic emotions and their manifestation in the face, as well as the ability to recognize and imitate them. The theoretical part closes with the topic of emotional intelligence as a unifying element that highlights the importance of this issue. The empirical part focuses primarily on two abilities related to facial expressions of emotions, namely their recognition and their production, and then links these abilities with additional characteristics such as gender, education and their self-estimation. The main finding of this thesis is a statistically significant relationship (ρ=0.35, α=0.05) between emotion recognition and production. Key words: Basic Emotion, Facial Expressions of Emotions, Emotion Recognition,...
|
115 |
Ansiedade de performance musical, reconhecimento de expressões faciais e ocitocina / Musical performance anxiety, facial emotion recognition and oxytocin. Sabino, Alini Daniéli Viana. 03 May 2019
Musical Performance Anxiety (MPA) is considered a condition characterized by persistent and intense apprehension in circumstances involving public musical presentation, disproportionate to the musician's level of aptitude, training and preparation. The symptoms occur on a continuous severity scale that, at its extreme, affects musical aptitude through symptoms at the physical, behavioral and cognitive levels, as well as interfering with cognitive processing and social cognition, especially the ability of facial emotion recognition (FER). Thus, interventions that can effectively correct these biases are necessary. The aims of the studies that compose this thesis are: a) to analyze FER in musicians with different levels of MPA; b) to carry out a systematic review of the literature presenting evidence on the effects of anxiolytic substances on FER in healthy individuals; and c) to conduct a randomized, double-blind, placebo-controlled, cross-over clinical trial testing the effect of oxytocin (OT) in musicians with high/low levels of MPA on FER, mood/anxiety indicators and negative cognition.
Methods: To achieve the first aim, a cross-sectional, observational study was conducted with 150 musicians of both sexes and different musical styles, who performed a FER task after being classified according to their MPA levels. For the second aim, a systematic literature review was carried out in accordance with the PRISMA guidelines and the Cochrane Handbook for Systematic Reviews of Interventions. Finally, for the third aim, 43 male musicians of different musical styles participated in a randomized, placebo-controlled, cross-over clinical trial in which the efficacy of 24 IU of intranasal OT was tested. Results: The results showed that musicians with high levels of MPA present a global impairment in FER, expressed mainly by difficulty in correctly recognizing the emotion of joy, which is associated with signs of social approval. The literature review showed that few substances have been tested so far, and that the changes in FER were specific and dependent on the substance's mechanism of action in the central nervous system, dose and form of administration. The clinical trial showed an improvement in the recognition of the emotion of joy, only in musicians with high levels of MPA, after acute OT use. Conclusion: FER was specifically altered in musicians with high levels of MPA, an alteration that can be corrected with intranasal OT, which emerges as a promising substance for clinical use.
|
116 |
Analyse acoustique de la voix émotionnelle de locuteurs lors d'une interaction humain-robot / Acoustic analysis of speakers' emotional voices during a human-robot interaction. Tahon, Marie. 15 November 2012
This thesis deals with the emotional voice in a human-robot interaction context.
In a realistic interaction, we define at least four broad kinds of variability: the environment (room, microphone); the speaker, including physical characteristics (gender, age, voice type) and personality; the speaker's emotional states; and finally the kind of interaction (game scenario, emergency, everyday life). From audio signals collected under different conditions, we sought, by means of acoustic features, to characterize jointly a speaker and his or her emotional state while taking these variabilities into account. Determining which features are essential and which should be avoided is a hard challenge, because it requires working across a large number of variabilities and therefore having rich and diverse corpora at one's disposal. The main results concern both the collection and annotation of realistic emotional corpora recorded with varied speakers (children, adults, elderly people) in several environments, and the robustness of acoustic features across the four variabilities. Two interesting outcomes follow from this acoustic analysis: the audio characterization of a corpus and the drawing up of a "black list" of highly variable features. Emotions are just one part of the paralinguistic cues carried by the audio channel; personality and stress in the voice have also been studied. We have also built an automatic emotion recognition and speaker characterization module that was tested during realistic human-robot interactions. An ethical reflection has been conducted on this work.
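The "black list" idea above can be illustrated with a small sketch: flag acoustic descriptors whose per-corpus values vary strongly relative to their mean. The threshold, descriptor names and values below are assumptions for illustration, not the thesis's actual measurements:

```python
import statistics

def feature_blacklist(measurements, threshold=0.5):
    """measurements: {feature_name: [mean value in each corpus/condition]}.
    Return the features whose coefficient of variation exceeds `threshold`."""
    blacklist = []
    for name, values in measurements.items():
        mean = statistics.fmean(values)
        cv = statistics.stdev(values) / abs(mean)  # relative spread
        if cv > threshold:
            blacklist.append(name)
    return blacklist

# Invented per-corpus means of two acoustic descriptors:
data = {
    "f0_mean":      [190, 200, 195, 205],  # stable across conditions
    "jitter_local": [0.5, 2.0, 0.2, 3.0],  # highly environment-dependent
}
print(feature_blacklist(data))  # -> ['jitter_local']
```

A robustness analysis on real corpora would of course control for which variability (environment, speaker, emotion, interaction type) drives the spread, but the screening principle is the same.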
|
117 |
Emotion recognition from expressions in voice and face – Behavioral and Endocrinological Evidence. Lausen, Adi. 24 April 2019
No description available.
|
118 |
Reconnaissance et mimétisme des émotions exprimées sur le visage : vers une compréhension des mécanismes à travers le modèle parkinsonien / Facial emotion recognition and facial mimicry: new insights in Parkinson's disease. Argaud, Soizic. 07 November 2016
Parkinson's disease is a neurodegenerative condition primarily resulting from a dysfunction of the basal ganglia following a progressive loss of midbrain dopamine neurons. Alongside the well-known motor symptoms, PD patients also suffer from emotional disorders, including difficulties in recognizing and producing facial emotions. This raises the question of whether the emotion recognition impairments in Parkinson's disease could be related, at least in part, to the motor symptoms. Indeed, according to embodied simulation theory, understanding other people's emotions is fostered by facial mimicry. Automatic and non-conscious, facial mimicry is characterized by congruent, valence-related facial responses to the emotion expressed by others.
In this context, disturbed motor processing could lead to impairments in emotion recognition. Yet one of the most distinctive clinical features of Parkinson's disease is facial amimia, a reduction in facial expressiveness. We therefore studied the ability to mimic facial expressions in Parkinson's disease, its effective influence on emotion recognition, and the effect of dopamine replacement therapy on both emotion recognition and facial mimicry. For these purposes, we investigated electromyographic responses (corrugator supercilii, zygomaticus major and orbicularis oculi) to facial emotion among patients suffering from Parkinson's disease and healthy participants in a facial emotion recognition paradigm (joy, anger, neutral). Our results suggest that facial emotion processing in Parkinson's disease may shift from normal to pathological, "noisy" functioning because of a weaker signal-to-noise ratio. Moreover, facial mimicry can have a beneficial effect on the recognition of emotion. Nevertheless, the negative impact of Parkinson's disease on facial mimicry, and its influence on emotion recognition, appear to depend on the muscles involved in producing the emotional expression to be decoded. Indeed, corrugator relaxation was a stronger predictor of the recognition of joy expressions than zygomatic or orbicularis contractions. On the other hand, we cannot conclude here that corrugator reactions foster the recognition of anger. Furthermore, we ran this experiment with a group of patients under dopamine replacement therapy and also during a temporary withdrawal from treatment. The preliminary results favor a beneficial effect of dopaminergic medication on both emotion recognition and facial mimicry. The potential positive "peripheral" impact of dopamine replacement therapy on emotion recognition through the restoration of facial mimicry has still to be tested.
We discuss these findings in the light of recent considerations about the role of basal ganglia-based circuits and of embodied simulation theory, closing with the clinical significance of the results.
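The mimicry measure used in studies of this kind (congruent EMG change from baseline, e.g. corrugator relaxation to a joy expression) can be sketched as follows; the amplitudes below are invented example data, not the study's recordings:

```python
import statistics

def mimicry_scores(baseline, response):
    """Change from baseline in mean EMG amplitude per muscle.
    A negative corrugator change means relaxation (congruent with joy);
    a positive zygomaticus change means activation (also congruent)."""
    return {m: statistics.fmean(response[m]) - statistics.fmean(baseline[m])
            for m in baseline}

# Invented EMG amplitudes (arbitrary units) while a participant
# watches a dynamic joy expression:
baseline = {"corrugator": [5.0, 5.2, 4.8], "zygomaticus": [2.0, 2.1, 1.9]}
response = {"corrugator": [3.9, 4.1, 4.0], "zygomaticus": [3.5, 3.6, 3.4]}
scores = mimicry_scores(baseline, response)
print(scores["corrugator"] < 0 and scores["zygomaticus"] > 0)  # -> True
```

In the thesis's framing, such per-muscle congruence scores would then be tested as predictors of recognition accuracy for each emotion.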
|
119 |
The Role of Meta-Mood Experience on the Mood Congruency Effect in Recognizing Emotions from Facial Expressions. Kavcioglu, Fatih Cemil. 01 September 2011
The aim of the current study was to investigate the role of meta-mood experience in the mood congruency effect when recognizing emotions from neutral facial expressions. For this aim, three scales were translated and adapted into Turkish: the Brief Mood Introspection Scale (BMIS), the State Meta-Mood Scale (SMMS), and the Trait Meta-Mood Scale (TMMS). The reliability and validity analyses were satisfactory. For the main analyses, an experimental study was conducted. The experimental design consisted of the administration of the Brief Symptom Inventory, the pre-induction Brief Mood Introspection Scale, the Trait Meta-Mood Scale, and the Basic Personality Traits Inventory in the first step, followed by a sad mood induction procedure and the administration of the post-induction Brief Symptom Inventory and the State Meta-Mood Scale in the second step. The last step consisted of the administration of the NimStim Set of Facial Expressions. For the main analyses regarding mood congruency, only mislabelings of neutral faces as sad or happy were considered. The results revealed that among personality traits, agreeableness was negatively associated with perceiving fast-displayed neutral faces as sad. After controlling for personality traits, however, unpleasant mood measured before the mood induction procedure was positively associated with perceiving neutral faces as sad. When perceiving slow-displayed neutral faces as happy was examined, anxiety was found to be positively associated with such a bias. After controlling for symptomatology, among personality traits, extraversion and conscientiousness were negatively associated with mislabeling slow-displayed neutral faces as happy. Within the evaluative domain of the SMMS, typicality was negatively associated with such a bias; and lastly, within the regulatory domain of the SMMS, emotional repair was negatively associated with mislabeling slow-displayed neutral faces as happy.
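The bias indices analyzed above (mislabeling neutral faces as sad or as happy) reduce to simple proportions over trials. A sketch with invented toy responses, purely for illustration:

```python
def bias_indices(trials):
    """trials: list of (true_label, response) pairs for face stimuli.
    Returns the proportion of neutral faces mislabeled sad / happy."""
    neutral = [resp for true, resp in trials if true == "neutral"]
    n = len(neutral)
    return {"sad_bias": neutral.count("sad") / n,
            "happy_bias": neutral.count("happy") / n}

# Invented responses of one participant to ten neutral faces:
trials = [("neutral", r) for r in
          ["neutral", "sad", "neutral", "sad", "neutral",
           "happy", "neutral", "neutral", "sad", "neutral"]]
print(bias_indices(trials))  # -> {'sad_bias': 0.3, 'happy_bias': 0.1}
```

In the study these proportions would be computed separately for fast- and slow-displayed faces before relating them to mood and personality measures.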
|
120 |
Decisional-Emotional Support System for a Synthetic Agent: Influence of Emotions in Decision-Making Toward the Participation of Automata in Society. Guerrero Razuri, Javier Francisco. January 2015
Emotion influences our actions, and this means that emotion has subjective decision value. The emotions of those affected by decisions, properly interpreted and understood, provide feedback on actions and, as such, serve as a basis for decisions. Accordingly, "affective computing" represents a wide range of technological opportunities for implementing emotions to improve human-computer interaction, including insights, across a range of contexts in the computational sciences, into how we can design computer systems that communicate with humans and recognize their emotional states. Today, emotional systems such as software-only agents and embodied robots seem to improve every day at managing large volumes of information, yet they remain incapable of reading our feelings and reacting to them. From a computational viewpoint, technology has made significant steps toward determining how an emotional behavior model could be built; such a model is intended to be used for intelligent assistance and support to humans. Human emotions are engines that allow people to generate useful responses to the current situation, taking into account the emotional states of others. Recovering the emotional cues emanating from the natural behavior of humans, such as facial expressions and bodily kinetics, could help to develop systems that recognize, interpret, process and simulate human emotions, and base decisions on them. Currently, there is a need for emotional systems able to develop an emotional bond with users, reacting emotionally to encountered situations and assisting users to make their daily lives easier. Handling emotions and their influence on decisions can improve human-machine communication in a broader sense.
The present thesis strives to provide an emotional architecture applicable to an agent, based on a group of decision-making models influenced by external emotional information provided by humans and acquired through a set of classification techniques from machine-learning algorithms. The system can form positive bonds with the people it encounters when it proceeds according to their emotional behavior. The agent embodied in the emotional architecture will interact with a user, facilitating its adoption in application areas such as caregiving, to provide emotional support to the elderly. The agent's architecture uses an adversarial structure based on an Adversarial Risk Analysis framework with a decision-analytic flavor, including models that forecast a human's behavior and its impact on the surrounding environment. The agent perceives its environment and the actions performed by an individual, which constitute the resources needed to execute the agent's decision during the interaction. The agent's decision, carried out through the adversarial structure, is also affected by the emotional-state information provided by a classifier-ensemble system, giving rise to a "decision with emotional connotation" belonging to the group of affective decisions. The performance of several well-known classifiers was compared in order to select the best results and build the ensemble system, based on feature selection methods introduced to predict the emotion from facial expression, bodily gestures, and speech, with satisfactory accuracy. At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 8: Accepted.
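The combination step of a classifier-ensemble system like the one described above can be sketched as simple majority voting over modality-specific labels; the thesis's actual combination scheme may differ, and the classifier outputs below are hypothetical:

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over per-classifier emotion labels. Ties are broken
    by the order the classifiers are listed (a simplification)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of three modality-specific classifiers
# (facial expression, bodily gesture, speech) for one observation:
print(ensemble_vote(["joy", "joy", "surprise"]))  # -> joy
```

Weighted voting, with weights derived from each classifier's validation accuracy, would be a natural refinement of the same idea.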
|