About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Analyse acoustique de la voix émotionnelle de locuteurs lors d’une interaction humain-robot / Acoustic analysis of speakers emotional voices during a human-robot interaction

Tahon, Marie 15 November 2012 (has links)
This thesis deals with emotional speech in the context of human-robot interaction. In a realistic interaction, we define at least four main types of variability: the environment (room, microphone); the speaker, including physical characteristics (gender, age, voice type) and personality; the speaker's emotional states; and finally the type of interaction (game scenario, emergency, everyday life). From audio signals collected under different conditions, we used acoustic features to characterise jointly a speaker and his or her emotional state while taking these variabilities into account. Determining which features are essential and which should be avoided is a difficult challenge, since it requires working across a large number of variabilities and therefore having rich and varied corpora at one's disposal. The main results concern both the collection and annotation of realistic emotional corpora recorded with varied speakers (children, adults, elderly people) in several environments, and the robustness of acoustic features across these four variabilities. Two useful outcomes follow from this acoustic analysis: the audio characterisation of a corpus and the construction of a "black list" of highly variable features. Emotions are only one part of the paralinguistic cues carried by the audio signal; personality and stress in the voice were also studied. We also implemented an automatic emotion recognition and speaker characterisation module, which was tested during realistic human-robot interactions. Finally, an ethical reflection on this work was conducted.
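To make the idea of a "black list" of condition-sensitive acoustic features concrete, here is a minimal sketch; it is an assumption-laden illustration, not the author's pipeline. It scores synthetic features by how much of their variance is explained by the recording condition and black-lists the most sensitive ones; the feature names, the variance-ratio score and the threshold are all assumptions for the example.

```python
# Illustrative sketch only: rank acoustic features by how strongly they depend on the
# recording condition and black-list the most condition-sensitive ones.
import numpy as np

rng = np.random.default_rng(0)
feature_names = ["f0_mean", "f0_range", "energy_mean", "jitter", "shimmer", "hnr"]

# Synthetic data: 300 utterances recorded under 3 conditions (e.g. lab mic, robot mic, home).
conditions = rng.integers(0, 3, size=300)
features = rng.normal(size=(300, len(feature_names)))
features[:, 2] += 2.0 * conditions   # "energy_mean" shifts strongly with the recording condition
features[:, 5] += 0.5 * conditions   # "hnr" shifts mildly

def condition_sensitivity(x, groups):
    """Share of a feature's variance explained by the condition label (eta-squared-like)."""
    overall_mean = x.mean()
    between = sum(x[groups == g].size * (x[groups == g].mean() - overall_mean) ** 2
                  for g in np.unique(groups))
    total = ((x - overall_mean) ** 2).sum()
    return between / total

scores = {name: condition_sensitivity(features[:, j], conditions)
          for j, name in enumerate(feature_names)}
black_list = [name for name, score in scores.items() if score > 0.3]  # arbitrary cut-off
print({name: round(score, 2) for name, score in scores.items()})
print("black-listed features:", black_list)
```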
142

Emotion recognition from expressions in voice and face – Behavioral and Endocrinological evidence –

Lausen, Adi 24 April 2019 (has links)
No description available.
143

Reconnaissance et mimétisme des émotions exprimées sur le visage : vers une compréhension des mécanismes à travers le modèle parkinsonien / Facial emotion recognition and facial mimicry : new insights in Parkinson's disease

Argaud, Soizic 07 November 2016 (has links)
Parkinson's disease is a neurodegenerative condition resulting primarily from a dysfunction of the basal ganglia that follows the progressive loss of midbrain dopamine neurons. Alongside the well-known motor symptoms, patients also suffer from emotional disorders, including difficulties in recognizing and producing facial emotions. This raises the question of whether the emotion recognition impairment in Parkinson's disease could be at least partly related to the motor symptoms. According to embodied simulation theories, understanding other people's emotions is fostered by facial mimicry, an automatic and non-conscious phenomenon characterized by congruent, valence-related facial muscle responses to the emotion expressed by others. In this context, disturbed motor processing could lead to impairments in emotion recognition. Yet one of the most common motor symptoms of Parkinson's disease is facial amimia, a loss of facial expressiveness. We therefore studied facial mimicry in Parkinson's disease, its influence on the quality of emotion recognition, and the effect of dopamine replacement therapy on both processes. For these purposes, we developed a paradigm allowing the simultaneous assessment of recognition performance and of electromyographic mimicry responses (corrugator supercilii, zygomaticus major and orbicularis oculi) to dynamic facial expressions (joy, anger, neutral), administered to a group of patients with Parkinson's disease and to a group of healthy control participants. Our results support the hypothesis that the emotion recognition deficit in Parkinson's disease could result from a "noisy" system with a weaker signal-to-noise ratio, a system in which facial mimicry plays a part. However, the impairment of facial mimicry in Parkinson's disease and its influence on emotion recognition appear to depend on the muscles involved in the expression to be recognized: relaxation of the corrugator, rather than contraction of the zygomatic or orbicularis oculi, is the stronger predictor of correct recognition of joy expressions. On the other hand, nothing in our data allows us to confirm an influence of facial mimicry on the recognition of anger. Finally, we administered this experiment to patients both under their usual dopaminergic treatment and after a temporary withdrawal from treatment. The preliminary results favour a beneficial effect of dopaminergic medication on both emotion recognition and mimicry; the hypothesis of a beneficial "peripheral" effect on emotion recognition through the restoration of facial mimicry remains to be tested. We discuss these findings in the light of recent accounts of the role of the basal ganglia and of the facial feedback hypothesis.
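As a hedged illustration of the kind of analysis suggested above (not the authors' actual statistics), the sketch below fits a logistic regression asking whether trial-level corrugator relaxation predicts correct recognition of joy better than zygomatic contraction; the EMG values are synthetic and the variable names are assumptions.

```python
# Minimal illustrative sketch (not the authors' analysis): does corrugator relaxation
# predict correct recognition of joy expressions better than zygomatic contraction?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials = 400

# Negative values = relaxation, positive = contraction (z-scored EMG change from baseline).
corrugator = rng.normal(size=n_trials)
zygomatic = rng.normal(size=n_trials)

# Simulate recognition accuracy driven mainly by corrugator relaxation.
logit = -1.2 * corrugator + 0.2 * zygomatic
p_correct = 1.0 / (1.0 + np.exp(-logit))
correct = rng.random(n_trials) < p_correct

X = np.column_stack([corrugator, zygomatic])
model = LogisticRegression().fit(X, correct)
print("coefficients (corrugator, zygomatic):", model.coef_.round(2))
```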
144

The Role Of Meta-mood Experience On The Mood Congruency Effect In Recognizing Emotions From Facial Expressions

Kavcioglu, Fatih Cemil 01 September 2011 (has links) (PDF)
The aim of the current study was to investigate the role of meta-mood experience in the mood congruency effect when recognizing emotions from neutral facial expressions. For this aim, three scales were translated and adapted to Turkish, namely the Brief Mood Introspection Scale (BMIS), the State Meta-Mood Scale (SMMS), and the Trait Meta-Mood Scale (TMMS); the reliability and validity analyses were satisfactory. For the main analyses, an experimental study was conducted. The experimental design consisted of the administration of the Brief Symptom Inventory, the pre-induction Brief Mood Introspection Scale, the Trait Meta-Mood Scale, and the Basic Personality Traits Inventory in the first step, followed by a sad mood induction procedure and the administration of the post-induction Brief Symptom Inventory and the State Meta-Mood Scale in the second step. The last step consisted of the administration of the NimStim Set of Facial Expressions. For the main analyses regarding mood congruency, only the mislabelings of neutral faces as sad or happy were considered. The results revealed that, among personality traits, Agreeableness was negatively associated with perceiving rapidly displayed neutral faces as sad. After controlling for personality traits, however, unpleasant mood measured before the mood induction procedure was positively associated with perceiving neutral faces as sad. When perceiving slowly displayed neutral faces as happy was examined, anxiety was found to be positively associated with such a bias. After controlling for symptomatology, among personality traits, Extraversion and Conscientiousness were found to be negatively associated with mislabeling slowly displayed neutral faces as happy. Within the evaluative domain of the SMMS, typicality was negatively associated with such a bias; and lastly, within the regulatory domain of the SMMS, emotional repair was negatively associated with mislabeling slowly displayed neutral faces as happy.
145

Decisional-Emotional Support System for a Synthetic Agent : Influence of Emotions in Decision-Making Toward the Participation of Automata in Society

Guerrero Razuri, Javier Francisco January 2015 (has links)
Emotion influences our actions, which means that emotion has subjective decision value. The emotions of those affected by decisions, when properly interpreted and understood, provide feedback on actions and, as such, serve as a basis for decisions. Accordingly, "affective computing" represents a wide range of technological opportunities for incorporating emotions to improve human-computer interaction, including insights from across the computational sciences into how computer systems can be designed to communicate with humans and recognize the emotional states they express. Today, emotional systems such as software-only agents and embodied robots seem to improve every day at managing large volumes of information, yet they remain incapable of reading our feelings and reacting to them. From a computational viewpoint, technology has made significant steps toward determining how an emotional behavior model could be built; such a model is intended to be used for intelligent assistance and support to humans. Human emotions are engines that allow people to generate useful responses to the current situation, taking into account the emotional states of others. Recovering the emotional cues emanating from the natural behavior of humans, such as facial expressions and bodily kinetics, could help to develop systems that recognize, interpret, process and simulate human emotions, and base decisions on them. Currently, there is a need for emotional systems able to develop an emotional bond with users, reacting emotionally to encountered situations and assisting users to make their daily lives easier. Handling emotions and their influence on decisions can broaden and improve human-machine communication. The present thesis strives to provide an emotional architecture applicable to an agent, based on a group of decision-making models influenced by external emotional information provided by humans and acquired through a set of classification techniques from machine learning. The system can form positive bonds with the people it encounters when it acts according to their emotional behavior. The agent embodied in this emotional architecture interacts with a user, facilitating its adoption in application areas such as caregiving, where it can provide emotional support to the elderly. The agent's architecture uses an adversarial structure based on an Adversarial Risk Analysis framework with a decision-analytic flavor, including models that forecast a human's behavior and its impact on the surrounding environment. The agent perceives its environment and the actions performed by an individual, which constitute the resources needed to execute the agent's decision during the interaction. The agent's decision, carried out within the adversarial structure, is also affected by information on emotional states provided by a classifier-ensemble system, giving rise to a "decision with emotional connotation" belonging to the class of affective decisions. The performance of different well-known classifiers was compared in order to select the best results and build the ensemble system, based on feature selection methods introduced to predict emotion from facial expressions, bodily gestures and speech, with satisfactory accuracy obtained well before the final system.
At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 8: Accepted.
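As a generic illustration of the classifier-ensemble idea mentioned in the abstract above (a sketch under assumptions, not the thesis's implementation), the following combines a few well-known classifiers in a soft-voting ensemble over placeholder features standing in for facial, body and speech descriptors.

```python
# Illustrative sketch: a soft-voting ensemble over standard classifiers, in the spirit of
# the classifier-ensemble described above. Random features stand in for real multimodal
# descriptors; feature extraction is not shown.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))      # placeholder multimodal feature vectors
y = rng.integers(0, 4, size=300)    # four placeholder emotion classes

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("forest", RandomForestClassifier(n_estimators=100)),
        ("logreg", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
scores = cross_val_score(ensemble, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```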
146

Emocijų atpažinimas tiriant žmogaus fiziologinius parametrus / Emotion recognition on researching human physiological parameters

Marozas, Julius 25 August 2008 (has links)
Emotion recognition from human physiological parameters is a highly relevant task in modern computer science. The aim of this work is to design a prototype emotion recognition system that could be adapted to various emotion-based systems at minimal cost. A conceptual model of such a system is presented, consisting of hardware and software components. AD620 instrumentation and OP97 precision operational amplifiers are used to amplify the physiological signals. Analogue-to-digital conversion and transmission to the computer are handled by an Atmega16 microcontroller, and a PC-based oscilloscope (PCS500) is used to test the circuits. Digital signal filtering methods are investigated. Recognition algorithms, based on the SVW algorithm, are presented for the recorded physiological signals (ECG, skin conductance). Spectral analysis of heart-rate variability (HRV) in the frequency domain is performed using Fourier transforms. An extended fuzzy control system is presented which derives arousal and valence levels from the physiological parameters (HR, HRVL, HRVH, SCR, STfinger, SThead) and infers emotions from these levels. New behavioural characteristics of the developed algorithms, observed when running them in a real environment, are also discussed.
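As a rough sketch of the frequency-domain HRV step mentioned above (illustrative only, not the thesis's code), the snippet below estimates low- and high-frequency HRV power from a synthetic series of RR intervals using Welch's method; the RR series and the conventional 0.04-0.15 Hz / 0.15-0.4 Hz band limits are assumptions here.

```python
# Illustrative sketch: LF and HF heart-rate-variability power from RR intervals
# via Welch's method, the kind of frequency-domain HRV analysis referred to above.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(300)      # ~300 beats, RR intervals in seconds
beat_times = np.cumsum(rr)

# Resample the irregularly sampled RR series onto a uniform 4 Hz grid before the PSD estimate.
fs = 4.0
t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
rr_uniform = interp1d(beat_times, rr, kind="cubic")(t_uniform)

freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

def band_power(f_lo, f_hi):
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum() * (freqs[1] - freqs[0])   # rectangle-rule integral over the band

lf = band_power(0.04, 0.15)    # low-frequency power
hf = band_power(0.15, 0.40)    # high-frequency power
print("LF: %.2e  HF: %.2e  LF/HF ratio: %.2f" % (lf, hf, lf / hf))
```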
147

Emotion recognition in children with Fetal Alcohol Spectrum Disorders

Siklos, Susan 02 December 2008 (has links)
Despite the anecdotal evidence of social difficulties in children with Fetal Alcohol Spectrum Disorders (FASD), and the risk for secondary disabilities as a result of these social difficulties, very little research has examined social-emotional functioning in children with FASD. The majority of the research conducted thus far has relied on parent and teacher reports to document social impairments. These parent and teacher reports provide a broad measure of social functioning but are unable to elucidate the specific aspects of social functioning that this group of children might find difficult. As a result, it has been very difficult to develop effective social interventions for children with FASD because it is unclear what aspects of social functioning should be targeted. The current study aimed to examine emotion recognition abilities in children with FASD, as recognition of emotions is an important precursor for appropriate social interaction. The study included 22 participants with diagnosed FASD (ages 8-14), with age- and gender- matched typically developing controls. Participants were assessed using computerized measures of emotion recognition from three nonlinguistic modalities: facial expressions (static and dynamic, child and adult faces), emotional tone of voice (child and adult voices), and body positioning and movement (postures and point-light walkers). In addition, participants completed a task assessing emotion recognition in real-life scenarios. Finally, caregivers completed measures of behavioural functioning, adaptive functioning, FASD symptomatology, and a demographics questionnaire. Overall, findings suggest that children with FASD do have more difficulties than age-matched typically developing peers in aspects of emotion recognition, with particular difficulties in recognizing emotions from adult facial expressions and adult emotional prosody. In addition, children with FASD had more difficulty perceiving differences in facial expressions. When the effect of age was examined, it was found that some aspects of emotion recognition were more impaired in children with FASD between age eight and ten years compared to same-age typically developing peers and compared to children with FASD age 11-14. This finding suggests that younger children with FASD may demonstrate a delay in the acquisition of some aspects of emotion recognition or may be more vulnerable to the information processing demands of some tasks compared to older children with FASD. The types of emotion recognition difficulties found in the current study supported a pattern where children with FASD make more errors on emotion recognition tasks when the complexity of the task is increased and consequently demands greater information processing. As such, it is anticipated that children with FASD would be likely to have the most difficulty with emotion recognition abilities embedded within complex, rapidly changing, real-world social situations, and in recognizing more subtle emotional displays. Caregivers, teachers, and professionals living and working with children and youth with FASD should be aware of possible emotion recognition difficulties in complex social situations and should help foster stronger emotion recognition skills when difficulties are detected.
149

[en] BUILDING THE VISUAL TRACKING PARADIGM IN THE RECOGNITION OF EMOTIONS IN CHILDREN WITH AUTISM / [pt] CONSTRUÇÃO DE UM PARADIGMA DE RASTREIO VISUAL NO RECONHECIMENTO DE EMOÇÕES EM CRIANÇAS AUTISTAS

KELLY LUANA MAMEDE N ZANGRANDO 13 September 2018 (has links)
[en] Autism is a neurodevelopmental disorder characterized by impairments in social interaction, communication and behavior. One of the deficits within its scope concerns emotion recognition, with a number of atypical viewing strategies reported, such as reduced gaze toward social stimuli, a preference for the mouth region over the eyes, and difficulty sustaining visual fixation. However, there is as yet no consensus on the factors that may cause these impairments, nor on whether there is a characteristic visuospatial scanning pattern for this population. Against this background, the present dissertation developed a visual tracking paradigm for the recognition of emotions in children on the autism spectrum. To this end, a systematic review was carried out which, after a careful selection, examined 65 paradigms used with an eye tracker in the assessment of Autism Spectrum Disorder (ASD). A script was then developed for the subsequent programming of the tasks. The tracking paradigm was applied to four children diagnosed with ASD, who made up the experimental group, and to three typically developing children serving as controls, in order to evaluate its applicability. Although the task has limitations that still require adaptation, it was possible to verify that participants in the experimental group took longer to complete the task, owing to difficulty in maintaining visual fixation, and showed impaired performance in emotion recognition. These data, together with other studies, suggest that individuals on the autism spectrum use atypical visual strategies; however, more research on the subject is needed.
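One simple way to quantify the viewing strategies described above (eyes versus mouth) is dwell time in areas of interest. The sketch below, which uses made-up gaze coordinates and arbitrary AOI boxes rather than anything from this dissertation, computes the share of gaze samples falling inside each region.

```python
# Illustrative sketch: proportion of gaze samples falling in "eyes" and "mouth"
# areas of interest, a common way to quantify atypical viewing strategies.
import numpy as np

rng = np.random.default_rng(4)
gaze = rng.uniform(0, 1, size=(1000, 2))   # (x, y) gaze samples in normalised screen units

# Axis-aligned areas of interest: (x_min, x_max, y_min, y_max). Boxes are assumptions.
aois = {"eyes": (0.35, 0.65, 0.25, 0.40), "mouth": (0.40, 0.60, 0.60, 0.72)}

def dwell_share(points, box):
    x_min, x_max, y_min, y_max = box
    inside = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
              (points[:, 1] >= y_min) & (points[:, 1] <= y_max))
    return inside.mean()

for name, box in aois.items():
    print("%s dwell share: %.1f%%" % (name, 100 * dwell_share(gaze, box)))
```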
150

Reconnaissance des émotions par traitement d’images / Emotions recognition based on image processing

Gharsalli, Sonia 12 July 2016 (has links)
Emotion recognition is one of the most complex scientific domains, and in recent years a growing number of applications have attempted to automate it. These innovative applications concern several fields, such as support for autistic children, video games and human-machine interaction. Emotions are conveyed through several channels; our research deals with facial emotional expressions, focusing on the six basic emotions, namely happiness, anger, fear, disgust, sadness and surprise. A comparative study of two emotion recognition methods, one based on geometric features and the other on appearance features, is carried out on the CK+ database of posed emotions and the FEEDTUM database of spontaneous emotions. Several constraints are also taken into account, such as changes of image resolution, the limited number of labelled images in emotion databases, and the recognition of new subjects not included in the training set. Different fusion schemes are then evaluated when new subjects, not included in the training set, are considered. The results are promising for posed emotions (recognition rates above 86%) but remain insufficient for spontaneous emotions. We also carried out a study on local regions of the face, which led to hybrid, region-based methods; these improve the recognition rates for spontaneous emotions. Finally, we developed a method for selecting appearance features based on their importance scores and compared it with other selection methods; the proposed method improves the recognition rate relative to the results obtained with two methods taken from the literature.
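The abstract describes selecting appearance features according to an importance score. As a hedged, generic stand-in for that idea (not the method developed in the thesis), the sketch below ranks features by random-forest importance on placeholder data and keeps the top-scoring subset.

```python
# Illustrative sketch: rank appearance features by random-forest importance and keep
# the highest-scoring ones; a generic stand-in for importance-based selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: 500 faces, 60 appearance descriptors, 6 basic emotions.
X, y = make_classification(n_samples=500, n_features=60, n_informative=12,
                           n_classes=6, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = forest.feature_importances_

k = 15                                        # number of features to keep (arbitrary here)
selected = np.argsort(importances)[::-1][:k]
print("indices of the %d most important features:" % k, sorted(selected.tolist()))
X_selected = X[:, selected]                   # reduced feature matrix for a downstream classifier
```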
