121

Face emotion recognition in children and adolescents; effects of puberty and callous unemotional traits in a community sample

Merz, Sabine, Psychology, Faculty of Science, UNSW January 2008 (has links)
Previous research suggests that, in addition to behavioural difficulties, a small subset of aggressive and antisocial children show callous-unemotional (CU) personality traits (i.e., lack of remorse and absence of empathy) that set them apart from their low-CU peers. These children have been identified as being most at risk of following a path of severe and persistent antisocial behaviour, show distinct behavioural patterns, and have been found to respond less to traditional treatment programs. One particular focus of this thesis is that emerging findings have shown emotion recognition deficits within both groups. Whereas children who show only behavioural difficulties (in the absence of CU traits) have been found to misclassify vague and neutral expressions as anger, the presence of CU traits has been associated with an inability to correctly identify fear and, to a lesser extent, sadness. Furthermore, emotion recognition competence varies with age and development. In general, emotion recognition improves with age, but interestingly there is some evidence that it may become less efficient during puberty. No research could be located, however, that assessed emotion recognition through childhood and adolescence for children high and low on CU traits and antisocial behaviour. The primary focus of this study was to investigate the impact of these personality traits and pubertal development on emotion recognition competence, in isolation and in combination. A specific aim was to assess whether puberty would exacerbate deficits in children with pre-existing emotion recognition deficits. The effects of gender, emotion type and measure characteristics, in particular the age of the target face, were also examined. A community sample of 703 children and adolescents aged 7-17 completed the Strengths and Difficulties Questionnaire to assess adjustment, the Antisocial Process Screening Device to assess antisocial traits, and the Pubertal Development Scale to evaluate pubertal stage. Empathy was assessed using the Bryant Index of Empathy for Children and Adolescents. Parents or caregivers completed parent versions of these measures for their children. Emotion recognition ability was measured using the newly developed UNSW FACES task (Dadds, Hawes & Merz, 2004); a description of the development and validation of this measure is included. Contrary to expectations, emotion recognition accuracy was not negatively affected by puberty. In addition, no overall differences in emotion recognition ability were found due to participants' gender or the age group of the target face. The hypothesis that participants would be better at recognising emotions expressed by their own age group was therefore not supported. In line with expectations, significant negative associations between CU traits and fear recognition were found; however, these were small and, contrary to expectations, were found for girls rather than boys. Also, puberty did not exacerbate emotion recognition deficits in high-CU children, although the relationship between CU traits and emotion recognition was affected differently by pubertal status. The implications of these results are discussed in relation to future research into emotion recognition deficits within this population. In addition, theoretical and practical implications of these findings for the development of antisocial behaviour and the treatment of children showing CU traits are explored.
122

Studentų emocinės būklės testavimo metu tyrimas panauduojant biometrines technologijas / Research on the emotional state of students during tests using biometric technology

Vlasenko, Andrej 29 March 2012 (has links)
The dissertation investigates the creation of a computer system that uses voice signal features to determine a person's psycho-emotional state; a system for measuring pupil diameter is also presented. The main objects of research are human voice features and the dynamics of pupil size change. The main purpose of the dissertation is to develop methodologies and algorithms for the automatic processing and analysis of voice signal parameters; the resulting algorithms are intended for use in stress management system software. The work also researches the possibilities of identifying a speaker's psycho-emotional state by combining the analysis of the speaker's voice parameters with the analysis of the dynamics of pupil size change. The dissertation consists of an introduction, four chapters, a summary of results, references, and a list of the author's publications on the topic. The introduction presents the research problem, the relevance of the thesis, the object of research, the purpose and tasks of the work, the research methodology, the scientific novelty, the practical significance of the results, and the defended statements; it closes with the author's publications and conference presentations on the dissertation topic and the structure of the dissertation. Chapter 1 presents the Recommended Biometric Stress Management System, founded on the analysis of a person's biometric and physiological features, including speech; the system can assist in determining negative stress levels... [see full text]
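As a rough illustration of the bimodal feature extraction the abstract describes, the sketch below computes global voice statistics and pupil-size dynamics and fuses them into a single feature vector for a downstream stress classifier. The specific features (short-time energy, zero-crossing rate, pupil variability) are assumptions, not the dissertation's actual parameter set.

```python
import numpy as np

def voice_features(signal, frame_len=400, hop=160):
    """Global statistics of short-time energy and zero-crossing rate."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    energy = np.array([np.mean(f ** 2) for f in frames])
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

def pupil_features(diameters):
    """Dynamics of pupil size: mean, variability, and mean rate of change."""
    d = np.asarray(diameters, dtype=float)
    return np.array([d.mean(), d.std(), np.abs(np.diff(d)).mean()])

# Fused feature vector (stand-in data: 1 s of 16 kHz audio, 60 pupil samples).
rng = np.random.default_rng(0)
speech = rng.normal(size=16000)
pupil = 4.0 + 0.3 * rng.normal(size=60)  # pupil diameters in mm
features = np.concatenate([voice_features(speech), pupil_features(pupil)])
print(features)
```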
123

Optimization techniques for speech emotion recognition

Sidorova, Julia 15 December 2009 (has links)
The first contribution of this thesis is a speech emotion recognition system, ESEDA, capable of recognizing emotions in different languages. The second contribution is the TGI+ classifier: objects are first modeled by a syntactic method, and a statistical method then classifies the mappings of the samples rather than their feature vectors; TGI+ outperforms the state-of-the-art top performer on a benchmark data set of acted emotions. The third contribution is a set of high-level features, defined as the distances from a feature vector to the tree automaton accepting class i, for every i in the set of class labels. The low-level and high-level feature sets are concatenated, the resulting set is submitted to a feature selection procedure, and classification then proceeds in the usual way. Testing on a benchmark dataset of authentic emotions showed that this classification strategy outperforms the state-of-the-art top performer.
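The concatenate-then-select strategy described above can be sketched with off-the-shelf tools. The sketch below uses stand-in values for the tree-automaton distances and scikit-learn for feature selection and classification, so it illustrates the shape of the pipeline rather than the actual ESEDA/TGI+ implementation.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 4

# Low-level acoustic features (e.g., pitch/energy statistics) -- stand-ins.
low_level = rng.normal(size=(n_samples, 30))
# High-level features: one distance per class from each sample to the
# tree automaton accepting that class (stand-in values here).
high_level = rng.normal(size=(n_samples, n_classes))
y = rng.integers(0, n_classes, size=n_samples)

X = np.hstack([low_level, high_level])  # concatenate both feature sets

# Feature selection followed by an ordinary classifier.
model = make_pipeline(SelectKBest(f_classif, k=20), SVC())
model.fit(X, y)
print(model.score(X, y))
```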
125

Μελέτη γλωσσολογικών μοντέλων για αναγνώριση συναισθημάτων ομιλητή / Study of linguistic models for speaker emotion recognition

Αποστολόπουλος, Γεώργιος 07 June 2010 (has links)
With the ever-increasing presence of automatic systems in our everyday lives comes the burden of interacting with them, owing to the machines' lack of emotional intelligence [1]. The emotional information conveyed through human speech is an important factor in human communication and interaction. When people interact with machines or computer systems, there is a gap between the information transmitted and the information perceived. This thesis focuses on how a computer system can perceive the emotional information underlying human speech using the information contained in various linguistic models. We study a system for recognizing the speaker's emotional state, concentrating on speech processing and the extraction of suitable parameters that can uniquely characterize each emotional state. We process audiovisual material with various software tools in order to extract reliable linguistic information that is representative of the emotions under examination. By combining the linguistic with the acoustic information we arrive at a complete emotion recognition model. Our results indicate the degree to which the extracted linguistic models can provide reliable recognition of a speaker's emotions.
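One common way to combine linguistic and acoustic information is late fusion of per-modality classifiers, sketched below. The thesis does not specify its fusion scheme, so both the feature stand-ins and the probability averaging are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_classes = 300, 4
acoustic = rng.normal(size=(n, 24))    # stand-in prosodic/spectral features
linguistic = rng.normal(size=(n, 50))  # stand-in bag-of-words / LM scores
y = rng.integers(0, n_classes, size=n)

# Late fusion: train one classifier per modality, then average the
# predicted class probabilities.
clf_a = LogisticRegression(max_iter=1000).fit(acoustic, y)
clf_l = LogisticRegression(max_iter=1000).fit(linguistic, y)
proba = (clf_a.predict_proba(acoustic) + clf_l.predict_proba(linguistic)) / 2
print(proba.argmax(axis=1)[:10])
```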
126

Avaliação da influência de emoções na tomada de decisão de sistemas computacionais. / Evaluation of the influence of emotions on the decision-making of computer systems.

Ana Carolina Nicolosi da Rocha Gracioso 17 March 2016 (has links)
This work evaluates the influence of human emotions, expressed through facial mimicry, on the decision-making of computer systems, with the aim of improving the user experience. Three modules were developed. The first is an assistive computing system: a digital augmentative and alternative communication board. The second, called the Affective Module, is an affective computing system that uses computer vision to capture the user's facial mimicry and classify their emotional state. The Affective Module was implemented in two stages, both inspired by the Facial Action Coding System (FACS), which identifies facial expressions based on the human cognitive system. In the first stage, the Affective Module infers the basic emotional states: happiness, surprise, anger, fear, sadness, disgust, and the neutral state. According to most researchers in the field, the basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests with the proposed model produced results 10.9% above those of similar methodologies. Spontaneous emotions were also analyzed, and the computational results approached the recognition rate of human judges. In the second stage, the goal was to identify facial expressions that reflect a person's dissatisfaction or difficulty during the use of computer systems, and the first model of the Affective Module was adjusted to this end. Finally, a Decision-Making Module was developed that receives information from the Affective Module and intervenes in the computer system: parameters such as icon size, drag converted into click, and scanning speed are changed in real time in the assistive system according to the information generated by the Affective Module. Since the Affective Module has no training stage for inferring the emotional state, a neutral-face algorithm was proposed to solve the problem of initialization with faces already expressing emotion. The work also proposes dividing rapid facial signals into baseline signals (tics and other facial-movement noise that carries no emotional meaning) and emotional signals. The results of case studies conducted with students of the APAE of Presidente Prudente show that the user experience can be improved by configuring a computer system with emotional information expressed through facial mimicry.
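A minimal sketch of the kind of rule the Decision-Making Module could apply is shown below, assuming hypothetical parameter names and thresholds; the thesis's actual adaptation logic is not given here.

```python
from dataclasses import dataclass

@dataclass
class UIConfig:
    icon_size_px: int = 48
    scan_interval_s: float = 1.0
    drag_as_click: bool = False

def adapt_ui(config: UIConfig, emotion: str) -> UIConfig:
    """Adjust assistive-interface parameters from the detected emotion."""
    if emotion in ("anger", "sadness"):      # signs of frustration/difficulty
        config.icon_size_px = min(96, config.icon_size_px + 16)
        config.scan_interval_s = min(2.0, config.scan_interval_s + 0.25)
        config.drag_as_click = True
    elif emotion == "happiness":             # user is coping well
        config.scan_interval_s = max(0.5, config.scan_interval_s - 0.25)
    return config

print(adapt_ui(UIConfig(), "anger"))
```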
127

Um modelo para inferência do estado emocional baseado em superfícies emocionais dinâmicas planares. / A model for facial emotion inference based on planar dynamic emotional surfaces.

João Pedro Prospero Ruivo 21 November 2017 (has links)
Emotions have a direct influence on human life, mediating how individuals interact and relate, whether personally or socially. The development of human-machine interfaces capable of natural and friendly interaction with humans is therefore important. In the development of social robots, the subject of this work, a suitable interpretation of the emotional state of the person interacting with the robot is indispensable. The focus of this work is the development of a mathematical model for recognizing emotional facial expressions in a sequence of frames. First, an algorithm detects and tracks the human face in the images; descriptive features are then extracted and fed into the emotion recognition model, which consists of an instantaneous emotion classifier, a Kalman filter, and a dynamic emotion classifier that produces the model's final output. The model is optimized with a simulated annealing algorithm and evaluated on relevant datasets, with its performance measured for each emotional state considered.
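The Kalman-filter stage between the instantaneous and dynamic classifiers can be illustrated with a one-dimensional constant-state filter over per-frame class scores. The noise parameters and state model below are assumptions, not the thesis's tuned values.

```python
import numpy as np

def kalman_smooth(scores, q=0.01, r=0.2):
    """Smooth a sequence of per-frame class scores with a 1-D Kalman filter
    (constant-state model): q is process noise, r is measurement noise."""
    x, p = scores[0], 1.0
    out = [x]
    for z in scores[1:]:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new frame score
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# Noisy per-frame scores for one emotion class (stand-in data).
rng = np.random.default_rng(0)
raw = (np.linspace(0, 1, 50) > 0.5).astype(float) + 0.3 * rng.normal(size=50)
smoothed = kalman_smooth(raw)
# A dynamic classifier could then threshold or sequence-label `smoothed`.
print(smoothed.round(2)[:10])
```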
128

Deep Learning of Human Emotion Recognition in Videos

Li, Yuqing January 2017 (has links)
No description available.
129

Emotion recognition from speech using prosodic features

Väyrynen, E. (Eero) 29 April 2014 (has links)
Abstract: Emotion recognition, a key step in affective computing, is the process of decoding an embedded emotional message from human communication signals, e.g. visual, audio, and/or other physiological cues. Speech is the main channel of human communication and is thus vital for signalling emotion and the semantic cues needed to interpret context correctly. In the verbal channel, emotional content is largely conveyed as continuous paralinguistic information, of which prosody is the most important component. The lack of evaluation of affect and emotional states in human-machine interaction, however, currently limits the potential behaviour and user experience of technological devices. In this thesis, speech prosody and related acoustic features of speech are used for the recognition of emotion from spoken Finnish. More specifically, methods are developed for emotion recognition from long-term global prosodic parameters, and an information fusion method is developed for short-segment emotion recognition using local prosodic features and vocal source features. A framework for visualising emotional speech data by its prosodic features is also presented. Emotion recognition in Finnish comparable to the human reference is demonstrated for a small set of basic emotional categories (neutral, sad, happy, and angry), with a recognition rate comparable to those reported for western language groups. Fusion techniques are shown to increase recognition performance on short segments. Visualisation of emotional data congruent with dimensional models of emotion is demonstrated using supervised nonlinear manifold modelling techniques; the low-dimensional visualisation retains the topological structure of the emotional categories as well as the emotional intensity of the speech samples. The thesis provides pattern recognition methods and technology for recognizing emotion from both long speech samples and short stressed words. The framework developed here for visualising and classifying emotional speech data can also represent speech data from other semantic viewpoints, given alternative semantic labellings.
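As an illustration of long-term global prosodic parameters, the sketch below computes typical statistics from frame-level F0 and energy contours. The exact parameter set used in the thesis is not listed here, so these choices are assumptions.

```python
import numpy as np

def global_prosodic_features(f0, energy):
    """Long-term global prosody statistics from frame-level F0 (Hz) and
    energy contours; unvoiced frames are marked with F0 = 0."""
    voiced = f0[f0 > 0]
    return {
        "f0_mean": voiced.mean(),
        "f0_std": voiced.std(),
        "f0_range": voiced.max() - voiced.min(),
        "voiced_ratio": len(voiced) / len(f0),
        "energy_mean": energy.mean(),
        "energy_std": energy.std(),
    }

# Stand-in contours: ~70% voiced frames around 120 Hz, random energies.
rng = np.random.default_rng(0)
f0 = np.where(rng.random(200) > 0.3, 120 + 20 * rng.normal(size=200), 0.0)
energy = np.abs(rng.normal(size=200))
print(global_prosodic_features(f0, energy))
```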
130

Modelagem computacional para reconhecimento de emoções baseada na análise facial / Computational modeling for emotion recognition based on facial analysis

Giampaolo Luiz Libralon 24 November 2014 (has links)
Emotions are an object of study not only in psychology but also in several other fields, including philosophy, psychiatry, biology, neuroscience and, since the second half of the twentieth century, the cognitive sciences. Many emotional theories and models have been proposed, but there is no consensus on any one of them. Several researchers argue that there is a set of basic emotions preserved during the evolutionary process because they serve specific purposes; how many and which they are, however, remains under discussion. The most widespread model of basic emotions is that proposed by Paul Ekman, which asserts the existence of six emotions: happiness, sadness, fear, anger, disgust, and surprise. Studies also indicate the existence of a small set of universal facial expressions capable of representing the six basic emotions. In the context of human-machine interaction, the relationship between humans and machines is becoming progressively more natural and social; as interfaces evolve, the ability to interpret the emotional signals of interlocutors and react to them appropriately is a challenge to be overcome. Although human beings express emotions in different ways, there is evidence that emotions are most accurately described by facial expressions. Aiming at interfaces that allow more realistic and natural interactions, this thesis develops a computational model, based on psychological and biological principles, that simulates the emotion recognition system of human beings. Distinct steps are used to identify the emotional state: a pre-attentive visual mechanism that quickly interprets the most likely emotions, the detection of the facial features most relevant for recognizing the identified emotional expressions, and the analysis of geometric facial features to determine the final emotional state. Several experiments showed that the proposed model achieves high accuracy rates and good generalization performance, and that the facial features it finds are interpretable.
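The geometric-feature step can be illustrated with distances between facial landmarks. The landmark names, coordinates, and feature choices below are hypothetical, meant only to show the kind of measurements a final-stage classifier might consume.

```python
import numpy as np

# Hypothetical subset of 2-D facial landmarks (x, y) in image coordinates.
LANDMARKS = {
    "left_mouth": (120, 210), "right_mouth": (180, 210),
    "upper_lip": (150, 200), "lower_lip": (150, 225),
    "left_brow": (115, 120), "right_brow": (185, 120),
    "left_eye": (120, 140), "right_eye": (180, 140),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(np.subtract(LANDMARKS[a], LANDMARKS[b])))

# Geometric features of the kind a final-stage classifier could consume.
features = {
    "mouth_width": dist("left_mouth", "right_mouth"),
    "mouth_opening": dist("upper_lip", "lower_lip"),
    "brow_raise_left": dist("left_brow", "left_eye"),
    "brow_raise_right": dist("right_brow", "right_eye"),
}
print(features)  # e.g., a wide, open mouth suggests surprise or happiness
```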
