101

Μελέτη γλωσσολογικών μοντέλων για αναγνώριση συναισθημάτων ομιλητή / Study of linguistic models for speaker emotion recognition

Αποστολόπουλος, Γεώργιος 07 June 2010
With the constantly increasing presence of automatic systems in our everyday lives comes the burden of interacting with these systems, owing to the machines' lack of emotional intelligence [1]. The emotional information conveyed through human speech is an important factor in human communication and interaction. When people interact with machines or computer systems, however, there is a gap between the information transmitted and the information perceived. This diploma thesis focuses on how a computer system can perceive the emotional information underlying human speech by exploiting the information contained in linguistic models. We study a system for recognizing the speaker's emotional state, concentrating on speech processing and the extraction of suitable parameters that can uniquely characterize each emotional state. We process audiovisual material with various software tools in order to extract reliable linguistic information that is representative of the emotions under examination. By combining the linguistic with the acoustic information, we arrive at a complete emotion recognition model. Our results indicate the degree to which the extracted linguistic models can provide reliable recognition of a speaker's emotions.
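The thesis text itself is not reproduced here; as a loose illustration of the combined linguistic-plus-acoustic approach the abstract describes, the following minimal Python sketch fuses TF-IDF n-gram features from transcripts with per-utterance acoustic statistics. All data, feature choices and the classifier are hypothetical stand-ins, not the author's method.

```python
# Illustrative sketch only: linguistic + acoustic feature fusion for
# emotion classification. Data, features and classifier are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: transcripts, per-utterance acoustic
# statistics (e.g. mean F0, energy), and emotion labels.
transcripts = ["i am so happy today", "leave me alone", "this is wonderful"]
acoustic = np.array([[210.0, 0.8], [140.0, 0.3], [200.0, 0.7]])
labels = ["joy", "anger", "joy"]

# Linguistic model: word n-gram TF-IDF features from the transcripts.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_text = vectorizer.fit_transform(transcripts).toarray()

# Fuse linguistic and acoustic information into one feature vector.
X = np.hstack([X_text, acoustic])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # sanity check on the training utterances
```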
102

Avaliação da influência de emoções na tomada de decisão de sistemas computacionais. / Evaluation of the influence of emotions in decision-taking of computer systems.

Ana Carolina Nicolosi da Rocha Gracioso 17 March 2016
The influence of human emotions expressed by facial mimics on the decision-making of computer systems is analyzed in order to improve the user's experience. Three modules were developed: the first is an assistive computing system, a digital augmentative and alternative communication board. The second module, called the Affective Module, is an affective computing system that, by means of Computer Vision, captures the user's facial mimics and classifies their emotional state. This module was implemented in two stages, both inspired by the Facial Action Coding System (FACS), which identifies facial expressions based on the human cognitive system. In the first stage, the Affective Module infers the basic emotional states: happiness, surprise, anger, fear, sadness, disgust, and the neutral state. According to most researchers in the field, basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests with the proposed model produced results 10.9% above those of approaches using similar methodologies. Spontaneous emotions were also analyzed, and the computational results approach the recognition rate of human judges. In the second stage of the Affective Module's development, the goal was to identify facial expressions that reflect a person's dissatisfaction or difficulty while using a computer system, and the first model was adjusted to this end. Finally, a Decision-making Module was developed that receives information from the Affective Module and intervenes in the computer system. Parameters such as icon size, drag converted into a click, and scanning speed are changed in real time by the Decision-making Module in the assistive system, according to the information produced by the Affective Module. Since the Affective Module has no training stage for inferring the emotional state, a neutral-face algorithm was proposed to solve the problem of initializing with faces that already display emotion. This work also proposes dividing rapid facial signals into baseline signals (tics and other facial-movement noise that are not emotional signals) and emotional signals. The results of the case studies conducted with students of the APAE in Presidente Prudente, SP, Brazil, showed that the user experience can be improved by configuring a computer system with emotional information expressed through facial mimics.
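As an illustration of a Decision-making Module like the one described above, here is a hedged Python sketch mapping an inferred emotional state to interventions on the assistive interface. The parameter names, limits and adjustment rules are assumptions for illustration only; the thesis's actual rules are not reproduced.

```python
# Illustrative sketch, not code from the thesis: adapt assistive-interface
# parameters based on the Affective Module's output. All rules are assumed.
from dataclasses import dataclass

@dataclass
class UIConfig:
    icon_size_px: int = 64
    scan_interval_s: float = 1.0   # scanning speed of the communication board
    drag_as_click: bool = False

def decide(emotion: str, ui: UIConfig) -> UIConfig:
    """Intervene in the assistive system when the user shows difficulty."""
    if emotion in ("dissatisfaction", "difficulty"):
        ui.icon_size_px = min(ui.icon_size_px + 16, 128)          # larger targets
        ui.scan_interval_s = min(ui.scan_interval_s + 0.25, 3.0)  # slower scan
        ui.drag_as_click = True   # tolerate imprecise pointer gestures
    return ui

print(decide("difficulty", UIConfig()))
```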
103

Um modelo para inferência do estado emocional baseado em superfícies emocionais dinâmicas planares. / A model for facial emotion inference based on planar dynamic emotional surfaces.

João Pedro Prospero Ruivo 21 November 2017
Emotions have a direct influence on human life and are of great importance in relationships and in the way interactions between individuals develop. Because of this, they are also important for the development of human-machine interfaces that aim to maintain natural and friendly interactions with their users. In the development of social robots, the subject of this work, a suitable interpretation of the emotional state of the person interacting with the robot is indispensable. The focus of this work is therefore the development of a mathematical model for recognizing emotional facial expressions in a sequence of frames. First, a face-tracking algorithm detects and tracks a human face across images; descriptive features are then extracted from the face and fed into the emotional-state recognition model developed here, which consists of an instantaneous emotion classifier, a Kalman filter and a dynamic emotion classifier that produces the model's final output. The model is optimized with a simulated annealing algorithm and evaluated on relevant datasets, with its performance measured for each emotional state considered.
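A minimal sketch of the Kalman-filter stage in a pipeline of this shape: raw per-frame scores from an instantaneous classifier are smoothed over time before a final decision. The scalar random-walk model and all constants below are assumptions, not the thesis's formulation.

```python
# Illustrative sketch: smooth noisy per-frame emotion scores with a scalar
# Kalman filter before a final (dynamic) decision. Constants are assumed.
import numpy as np

def kalman_smooth(scores, q=1e-3, r=1e-1):
    """Scalar Kalman filter: random-walk state, noisy per-frame observations."""
    x, p = scores[0], 1.0           # initial state estimate and variance
    out = []
    for z in scores:
        p += q                      # predict (random-walk process noise)
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the frame's raw score
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# Hypothetical raw per-frame "happiness" scores from an instantaneous classifier.
raw = np.array([0.2, 0.8, 0.1, 0.9, 0.85, 0.9, 0.2, 0.88])
smooth = kalman_smooth(raw)
print("dynamic decision:", "happiness" if smooth[-1] > 0.5 else "other")
```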
104

Deep Learning of Human Emotion Recognition in Videos

Li, Yuqing January 2017
No description available.
105

Emotion recognition from speech using prosodic features

Väyrynen, E. (Eero) 29 April 2014
Emotion recognition, a key step in affective computing, is the process of decoding an embedded emotional message from human communication signals, e.g. visual, audio, and/or other physiological cues. Speech is the main channel of human communication and is thus vital for signalling emotion and the semantic cues needed to interpret context correctly. In the verbal channel, emotional content is largely conveyed as continuous paralinguistic signals, of which prosody is the most important component. The inability to evaluate affect and emotional states in human-machine interaction, however, currently limits the potential behaviour and user experience of technological devices. In this thesis, speech prosody and related acoustic features of speech are used for the recognition of emotion from spoken Finnish. More specifically, methods for emotion recognition from speech are developed that rely on long-term global prosodic parameters. An information fusion method is developed for short-segment emotion recognition using local prosodic features and vocal source features, and a framework is presented for visualising emotional speech data through prosodic features. Emotion recognition in Finnish comparable to the human reference is demonstrated using a small set of basic emotional categories (neutral, sad, happy, and angry), and the recognition rate for Finnish was found comparable to those reported for Western language groups. Improved performance is shown for short-segment emotion recognition using the fusion techniques. Visualisation of emotional data congruent with dimensional models of emotion is demonstrated using supervised nonlinear manifold modelling techniques; the low-dimensional visualisation is shown to retain the topological structure of the emotional categories as well as the emotional intensity of the speech samples. The thesis provides pattern recognition methods and technology for recognising emotion from speech using long speech samples as well as short stressed words. The framework developed here for visualising and classifying emotional speech data can also be used to represent speech data from other semantic viewpoints, given alternative semantic labellings.
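To illustrate what long-term global prosodic parameters look like in practice, here is a hedged sketch that computes utterance-level F0 and energy statistics with librosa. The feature set, pitch range and input file name are assumptions; the thesis's exact parameters are not reproduced.

```python
# Illustrative sketch only: global prosodic statistics over one utterance.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical input file

# Fundamental frequency contour via pYIN; unvoiced frames come back as NaN.
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]

# Short-time energy contour.
rms = librosa.feature.rms(y=y)[0]

features = {
    "f0_mean": float(np.mean(f0)),
    "f0_std": float(np.std(f0)),
    "f0_range": float(np.max(f0) - np.min(f0)),
    "energy_mean": float(np.mean(rms)),
    "voiced_ratio": float(np.mean(voiced)),
}
print(features)  # one global prosodic feature vector per utterance
```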
106

Modelagem computacional para reconhecimento de emoções baseada na análise facial / Computational modeling for emotion recognition based on facial analysis

Giampaolo Luiz Libralon 24 November 2014
Emotions are the object of study not only of psychology but also of various other research areas such as philosophy, psychiatry, biology, neuroscience and, since the second half of the twentieth century, the cognitive sciences. A number of emotional theories and models have been proposed, but there is no consensus on the choice of any one of them. Several researchers argue that there is a set of basic emotions that have been preserved during the evolutionary process because they serve specific purposes; however, how many and which these accepted basic emotions are remains a topic of discussion. The most widespread model of basic emotions is the one proposed by Paul Ekman, which asserts the existence of six emotions: happiness, sadness, fear, anger, disgust and surprise. Studies also indicate the existence of a small set of universal facial expressions capable of representing these six basic emotions. In the context of human-machine interaction, the relationship between human beings and machines is becoming increasingly natural and social. Thus, as interfaces evolve, the ability to interpret the emotional signals of interlocutors and react to them appropriately is a challenge to be overcome. Even though human beings express emotions in different ways, there is evidence that emotions are most accurately described by facial expressions. In order to obtain interfaces that allow more natural and realistic interactions, this thesis develops a computational model, based on psychological and biological principles, that simulates the emotion recognition system of human beings. Distinct steps are used to identify the emotional state: a pre-attentive visual mechanism that quickly hypothesizes the most likely emotions, detection of the facial features most relevant for recognizing the identified emotional expressions, and analysis of geometric facial features to determine the final emotional state. A number of experiments demonstrated that the proposed model achieves high accuracy rates and good generalization, and that the facial features it finds are interpretable.
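As an illustration of the geometric facial features used in the final stage, here is a minimal sketch computing scale-normalized distances from 68-point facial landmarks. Dlib-style landmark indexing is assumed; the thesis's actual feature set is richer and not reproduced here.

```python
# Illustrative sketch, assuming 68-point landmarks are already available
# (e.g. from dlib): simple geometric features as normalized distances.
import numpy as np

def geometric_features(pts: np.ndarray) -> np.ndarray:
    """pts: (68, 2) landmark array in image coordinates."""
    iod = np.linalg.norm(pts[45] - pts[36])  # inter-ocular distance, for scale
    mouth_width   = np.linalg.norm(pts[54] - pts[48]) / iod
    mouth_opening = np.linalg.norm(pts[66] - pts[62]) / iod
    brow_raise    = np.linalg.norm(pts[19] - pts[37]) / iod  # left brow to eye
    return np.array([mouth_width, mouth_opening, brow_raise])

# Hypothetical landmarks for a single face:
pts = np.random.rand(68, 2) * 200
print(geometric_features(pts))
```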
107

Feature Fusion Deep Learning Method for Video and Audio Based Emotion Recognition

Yanan Song (11825003) 20 December 2021
In this thesis, we propose a deep-learning-based emotion recognition system designed to improve the classification success rate. We first use transfer learning to extract visual features and Mel-frequency cepstral coefficients (MFCC) to extract audio features, and then apply recurrent neural networks (RNN) with an attention mechanism to process the sequential inputs. The outputs of both channels are then fused in a concatenation layer, which is processed with batch normalization to reduce internal covariate shift. Finally, the classification result is obtained from a softmax layer. In our experiments, the video and audio subsystems achieve 78% and 77% accuracy respectively, while the feature-fusion system combining video and audio achieves 92% accuracy on the RAVDESS dataset over eight emotion classes. The proposed feature-fusion system outperforms conventional methods in classification accuracy.
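A minimal Keras sketch of the fusion architecture the abstract outlines: two recurrent branches joined by a concatenation layer with batch normalization and a softmax head. The input shapes are assumptions, and the attention mechanism and pretrained visual backbone are omitted, so this is an illustration rather than the author's model.

```python
# Illustrative sketch under stated assumptions: two-stream feature fusion.
from tensorflow.keras import layers, Model

video_in = layers.Input(shape=(30, 512))   # 30 frames x 512-d CNN features
audio_in = layers.Input(shape=(100, 40))   # 100 frames x 40 MFCC coefficients

v = layers.GRU(128)(video_in)              # sequential modelling, video stream
a = layers.GRU(128)(audio_in)              # sequential modelling, audio stream

x = layers.Concatenate()([v, a])           # feature fusion layer
x = layers.BatchNormalization()(x)         # reduce internal covariate shift
out = layers.Dense(8, activation="softmax")(x)  # 8 RAVDESS emotion classes

model = Model([video_in, audio_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```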
108

Personalized physiological-based emotion recognition and implementation on hardware / Reconnaissance des émotions personnalisée à partir des signaux physiologiques et implémentation sur matériel

Yang, Wenlu 27 February 2018
This thesis investigates physiological-signal-based emotion recognition in a digital game context and the feasibility of implementing the resulting model on an embedded system. The following challenges are addressed: the relationship between emotional states and physiological responses in the game context, individual variability in psychophysiological responses, and the issues of implementation on an embedded system. The major contributions of this thesis are as follows. First, we construct a multi-modal Database for Affective Gaming (DAG). This database contains multiple measurements of objective modalities, namely physiological signals (ECG, EDA, EMG, respiration), screen recordings, and recordings of the player's face, as well as subjective assessments at both the game-event and match levels. We present statistics of the database and run a series of analyses on questions such as emotional-moment detection, emotion classification, and the factors influencing the overall game experience, using various machine learning methods. Second, we investigate the individual variability in the collected data by creating user-specific models and analysing the optimal feature set for each individual. We propose a personalized group-based model that forms groups of similar users by clustering physiological traits derived from the optimal feature sets, and we show that this model performs better than both a general model and a user-specific model. Third, we implement the proposed method on an ARM A9 system and show that it meets the computation-time requirement.
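A hedged sketch of the personalized group-based idea: cluster users by physiological traits, then train one classifier per group and route new users to the most similar group's model. The synthetic data, cluster count and classifier below are placeholders, not the thesis's features or models.

```python
# Illustrative sketch only: group users by traits, one classifier per group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
user_traits = rng.normal(size=(20, 4))        # one trait vector per user
X = rng.normal(size=(20, 50, 6))              # per-user samples x features
y = rng.integers(0, 3, size=(20, 50))         # per-sample emotion labels

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(user_traits)

models = {}
for g in range(3):                            # one model per user group
    idx = np.where(groups == g)[0]
    Xg = X[idx].reshape(-1, 6)
    yg = y[idx].reshape(-1)
    models[g] = RandomForestClassifier(random_state=0).fit(Xg, yg)

# A user is routed to the model of the most similar group.
print(models[groups[0]].predict(X[0, :5]))
```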
109

Facial Emotion Recognition using Convolutional Neural Network with Multiclass Classification and Bayesian Optimization for Hyper Parameter Tuning.

Bejjagam, Lokesh, Chakradhara, Reshmi January 2022
The thesis develops a deep learning model for facial emotion recognition using a Convolutional Neural Network with multiclass classification, along with hyperparameter tuning via Bayesian optimization to improve the model's performance. The model recognizes the seven basic emotions of the FER-2013 dataset (fear, happy, surprise, sad, neutral, disgust and angry) in images of human faces.
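A minimal sketch of Bayesian hyperparameter tuning for a small CNN on 48x48 grayscale inputs (FER-2013's image size), using KerasTuner. The architecture and search space below are assumptions, not the thesis's configuration.

```python
# Illustrative sketch only: Bayesian optimization of CNN hyperparameters.
import keras_tuner as kt
from tensorflow.keras import layers, models

def build_model(hp):
    m = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(hp.Int("filters", 32, 128, step=32), 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(hp.Int("units", 64, 256, step=64), activation="relu"),
        layers.Dropout(hp.Float("dropout", 0.2, 0.5, step=0.1)),
        layers.Dense(7, activation="softmax"),  # seven FER-2013 classes
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

tuner = kt.BayesianOptimization(build_model, objective="val_accuracy",
                                max_trials=10, overwrite=True)
# Assuming FER-2013 arrays are loaded as x_train/y_train and x_val/y_val:
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```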
110

Emotion Recognition using Spatiotemporal Analysis of Electroencephalographic Signals

Aspiras, Theus H. 21 August 2012
No description available.
