  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Real time emotion recognition based on images using ASM and SVM

Cunha, Guilherme Carvalho 09 July 2014 (has links)
Facial expressions convey a great deal of information about a person, making the ability to interpret them a highly valuable skill with applications in several fields of informatics, such as human-machine interfaces, digital games, interactive storytelling, and digital TV/cinema. This dissertation discusses the process of recognizing emotions in real time using ASM (Active Shape Model) and SVM (Support Vector Machine) and presents a comparison between two approaches commonly used during attribute extraction: the neutral face and the average face. As no such comparison exists in the literature, the results presented are valuable for the development of applications that deal with emotion expression in real time. The study considers six emotions: happiness, sadness, anger, fear, surprise, and disgust.
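The two feature-extraction baselines the abstract compares can be sketched in miniature: measure each landmark's displacement against a neutral face versus against an average face, and feed the resulting vector to a classifier such as an SVM. The landmark coordinates below are invented toy values, not real ASM output, and the helper names are this sketch's own, not the thesis'.

```python
# Toy sketch of landmark-displacement features, measured against a
# neutral face vs. an average face (the two variants the abstract compares).

def displacements(landmarks, reference):
    """Feature vector: per-point (dx, dy) offsets from a reference face."""
    feats = []
    for (x, y), (rx, ry) in zip(landmarks, reference):
        feats.extend([x - rx, y - ry])
    return feats

def average_face(faces):
    """Point-wise mean of several landmark sets."""
    n = len(faces)
    return [
        (sum(f[i][0] for f in faces) / n, sum(f[i][1] for f in faces) / n)
        for i in range(len(faces[0]))
    ]

neutral = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]    # toy neutral face
smile   = [(0.1, -0.1), (0.9, -0.1), (0.5, 1.0)]  # toy "happy" face
avg     = average_face([neutral, smile])

feats_vs_neutral = displacements(smile, neutral)  # neutral-face variant
feats_vs_average = displacements(smile, avg)      # average-face variant
```

Either vector would then be passed to an SVM; the thesis' contribution is measuring which reference face yields better recognition.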
172

Configuration and operation of an emotional machine

Trabelsi, Amine 11 1900 (has links)
This work explores the feasibility of equipping computers with the ability to predict, in the context of human-computer interaction, a user's probable emotion and its intensity for a wide variety of emotion-eliciting situations. More specifically, an online framework, the Emotional Machine, was developed that enables computers to «understand» situations using the Ortony, Clore and Collins (OCC) appraisal model of emotion and to predict the user's reaction by combining refined versions of artificial neural network and k-nearest-neighbours algorithms. An empirical procedure, including a web-based anonymous questionnaire for data acquisition, was designed to provide the chosen machine learning algorithms with consistent knowledge and to test the application's recognition performance. Results from the empirical investigation show that the proposed Emotional Machine is capable of producing accurate predictions. Such an achievement may encourage future use of the framework for automated emotion recognition in various application fields.
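The kNN half of the combination described above can be illustrated with a toy intensity predictor: encode each situation as an appraisal-feature vector and predict a new situation's emotion intensity as the mean over its nearest neighbours. The feature encodings and intensities below are invented for illustration; the thesis' refined kNN and neural-network components are far richer.

```python
# Toy k-nearest-neighbours predictor in the spirit of the kNN component:
# emotion intensity of a new situation = mean intensity of its k nearest
# training situations (Euclidean distance on appraisal features).
import math

def knn_predict(train, query, k=3):
    """Mean intensity of the k training points closest to `query`."""
    dists = sorted(
        (math.dist(x, query), intensity) for x, intensity in train
    )
    nearest = dists[:k]
    return sum(intensity for _, intensity in nearest) / k

# (appraisal-feature vector, observed intensity) pairs -- toy data
train = [((0.9, 0.8), 0.9), ((0.8, 0.7), 0.8),
         ((0.1, 0.2), 0.2), ((0.2, 0.1), 0.1)]
print(knn_predict(train, (0.85, 0.75), k=2))  # averages the two nearest, high-intensity neighbours
```

In the actual system, such a kNN estimate would be combined with a neural network's prediction rather than used alone.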
173

Emotion recognition from speech using digital signal processing and machine learning techniques

Κωστούλας, Θεόδωρος 28 February 2013 (has links)
In this doctoral dissertation a number of novel approaches are proposed and evaluated in applications that exploit emotion awareness. The main aim of the proposed methods is to address the difficulties that arise when an emotion recognition system must operate in real-life, speaker-independent conditions, where speech is characterized by spontaneous and genuine formulations. In detail, the performance of an emotion recognition system was first evaluated on acted speech under different noise conditions and compared with that of human listeners. The design and implementation of a real-world emotional speech corpus is then described, as it results from the interaction of naive users with a smart-home dialogue system, and a speaker-independent system that detects negative emotional states using Gaussian mixture models is proposed; the suggested architecture combines low- and high-level speech descriptors and achieves significantly better performance at some operating points of the integrated dialogue system. The practical application of an emotion recognition system based on a universal background Gaussian model was also implemented and evaluated on different types of real-life data. Furthermore, a novel multistage classification scheme for affect recognition from real-life speech is proposed; in contrast with conventional approaches, it models co-occurring affective states by constructing a multistage classification architecture. The experiments performed indicate that the proposed scheme offers an advantage for the classes that are more separable, which contributes to improving the overall performance of the affect recognition system.
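The Gaussian-model classification idea above can be shown in a deliberately simplified form: fit one diagonal Gaussian per emotion class over speech features and pick the class with the highest log-likelihood. This is a stand-in for the thesis' full GMM/UBM models, and the feature values are invented.

```python
# Simplified stand-in for a Gaussian-model emotion classifier:
# one diagonal Gaussian per class, maximum-log-likelihood decision.
import math

def fit_class(samples):
    """Per-dimension mean and variance for one emotion class."""
    n, dim = len(samples), len(samples[0])
    means = [sum(s[d] for s in samples) / n for d in range(dim)]
    varis = [sum((s[d] - means[d]) ** 2 for s in samples) / n + 1e-6
             for d in range(dim)]
    return means, varis

def log_likelihood(x, means, varis):
    """Diagonal-Gaussian log-density of feature vector x."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
        for xi, m, v in zip(x, means, varis)
    )

def classify(x, models):
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

models = {
    "negative": fit_class([(0.8, 0.7), (0.9, 0.6), (0.7, 0.8)]),
    "neutral":  fit_class([(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]),
}
print(classify((0.75, 0.7), models))  # → negative
```

A real GMM/UBM system would use mixtures of many Gaussians adapted from a universal background model, but the decision rule is the same likelihood comparison.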
174

Emotionally intelligent virtual environments

Benlamine, Mohamed Sahbi 04 1900 (has links)
No description available.
175

Music mood and emotion recognition using Music Information Retrieval techniques

Smělý, Pavel January 2019 (has links)
This work focuses on the scientific area called Music Information Retrieval (MIR), more precisely its subdivision concerned with recognizing emotions in music, called Music Emotion Recognition (MER). The beginning of the work gives a general overview and definition of MER, categorizes the individual methods, and offers a comprehensive view of the discipline. The thesis also covers the selection and description of parameters suitable for emotion recognition, using the openSMILE and MIRtoolbox tools. The freely available DEAM database was used as the set of music recordings and their subjective emotional annotations. The practical part designs a static dimensional regression system for numerically predicting the emotions of music recordings, i.e. their positions in the arousal-valence (AV) emotional space. The thesis reports and comments on results both from an analysis of the significance of the individual parameters and from an overall evaluation of the predictions of the proposed model.
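A static dimensional regression of the kind described reduces, in its simplest form, to fitting one regression per emotional dimension from audio features to annotated arousal/valence values. The sketch below fits a least-squares line from a single toy feature to arousal; real systems regress from hundreds of openSMILE/MIRtoolbox descriptors, and the feature/annotation pairs here are invented, not DEAM data.

```python
# Minimal sketch of static dimensional regression: ordinary least squares
# from one audio feature to an arousal annotation in [-1, 1].

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

tempo = [80, 100, 120, 140]        # toy feature values (e.g. BPM)
arousal = [-0.6, -0.2, 0.2, 0.6]   # toy DEAM-style annotations
a, b = fit_line(tempo, arousal)
print(a * 110 + b)                 # predicted arousal for a new clip
```

A second line fitted to valence annotations would complete the position in the AV plane.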
176

Recognition of emotions in Czech texts

Červenec, Radek January 2011 (has links)
With advances in information and communication technologies over the past few years, the amount of information stored in the form of electronic text documents has grown rapidly. Since human abilities to effectively process and analyze large amounts of information are limited, there is an increasing demand for tools that automatically analyze these documents and exploit their emotional content; such systems have extensive applications. The purpose of this work is to design and implement a system for identifying expressions of emotion in Czech texts. The proposed system is based mainly on machine learning methods, so the design and creation of a training set are described as well. The training set is then used to build a classifier model using an SVM. To improve classification results, additional components were integrated into the system, such as a lexical database, a lemmatizer, and a derived keyword dictionary. The thesis also presents the results of classifying text documents into the defined emotion classes and evaluates various approaches to the categorization.
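The keyword-dictionary component mentioned above can be illustrated with a toy scorer: count how many lemmas of a text appear in each emotion's keyword list and pick the highest-scoring class. The words and classes here are invented examples; the thesis' actual system feeds much richer features into an SVM rather than deciding by keyword counts alone.

```python
# Toy keyword-dictionary scorer over lemmatized Czech text.
# Keyword lists and emotion classes are invented for illustration.

keywords = {
    "joy":   {"radost", "štěstí", "skvělý"},
    "anger": {"vztek", "hněv", "hrozný"},
}

def classify(lemmas):
    """Pick the emotion whose keyword list matches the most lemmas."""
    scores = {emo: sum(w in words for w in lemmas)
              for emo, words in keywords.items()}
    return max(scores, key=scores.get)

print(classify(["to", "je", "skvělý", "radost"]))  # → joy
```

In the full system, such keyword hits would be one feature among many (alongside lexical-database and lemmatizer outputs) in the SVM's input vector.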
177

Creation of a vocal emotional profile (VEP) and measurement tools

Aghajani, Mahsa 10 1900 (has links)
Speech is the dominant mode of communication among humans. Voice signals carry both information and the emotion of the speaker; this combination helps the receiver better understand what the speaker means and decreases the probability of misunderstandings. Robots and computers can also benefit from this mode of communication: the ability to recognize emotions in a speaker's voice helps computers serve human needs better, and this improvement in human-computer communication leads to increased user satisfaction. In this study we propose several approaches to detecting emotions from speech computationally, and investigate how different machine learning and deep learning techniques and classifiers perform at the task. The classifiers are trained on several commonly used and well-known audio emotion datasets together with a custom dataset. The custom dataset was recorded from non-actor, non-expert people while attempting to trigger the relevant emotions in them; it is important because it makes the model proficient at recognizing emotions in people who are not as skilled as actors at reflecting their emotions in their voices. Results from several machine learning and deep learning classifiers recognizing the seven emotions of anger, happiness, sadness, neutrality, surprise, fear, and disgust are reported and analyzed; the models were evaluated with and without the custom dataset to show the effect of employing an imperfect dataset. Leveraging deep learning techniques and ensemble learning methods surpassed the other techniques: our best classifiers obtained accuracies of 90.41% and 91.96% when trained as recurrent neural networks and majority-voting ensemble classifiers, respectively.
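The majority-voting ensemble reported as one of the best classifiers works by letting each base model vote a label and taking the most common vote. The sketch below shows only the voting mechanism; the three "models" are stand-in functions over invented features, not the study's trained networks.

```python
# Minimal majority-vote ensemble: each base model votes a label,
# the most common label wins (Counter breaks ties by first occurrence).
from collections import Counter

def majority_vote(models, x):
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Stand-in base models over toy acoustic features.
models = [
    lambda x: "happiness" if x["pitch"] > 0.5 else "sadness",
    lambda x: "happiness" if x["energy"] > 0.5 else "neutral",
    lambda x: "sadness",
]
print(majority_vote(models, {"pitch": 0.9, "energy": 0.8}))  # → happiness
```

In practice the base models would be independently trained classifiers (e.g. an RNN, an SVM, a CNN) whose errors are hoped to be uncorrelated, which is where the ensemble's accuracy gain comes from.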
178

A Multi-modal Emotion Recognition Framework Through The Fusion Of Speech With Visible And Infrared Images

Siddiqui, Mohammad Faridul Haque 29 August 2019 (has links)
No description available.
179

Prediction of user ratings of language cafe conversations based on automatic voice analysis

Hansson Svan, Angus, Mannerstråle, Carl January 2019 (has links)
Spoken communication between humans carries information in two channels: the primary channel is linked to the syntactic-semantic part of the speech (what a person is literally saying), while the secondary channel conveys paralinguistic information (tone, emotional state, and gestures). This study examines the paralinguistic part of speech, specifically tone and emotional state, and asks whether there is a correlation between a participant's voice and their opinion of a language-cafe conversation. The conversations were moderated by the social robot platform Furhat, created by Furhat Robotics. The report is written from two perspectives. From a data science perspective, emotions identified in audio files are analyzed with machine learning algorithms and mathematical models: Vokaturi, an emotion recognition software, analyzes the recordings and quantifies attributes for different emotions, and the classification model is built from these attributes together with the answers to the language-cafe survey (part one) and audio files annotated by the authors themselves (part two). From a business perspective, speech emotion recognition is also evaluated as a method for gathering customer opinions in a customer feedback loop. The results show accuracies of approximately 62% and 61% for parts one and two respectively, indicating that some form of prediction is possible; however, no clear correlation between a participant's recorded voice and their opinion of the conversation could be established. The discussion analyzes the difficulty of building a high-accuracy model with the available data and hypothetically considers the model as part of a customer feedback loop.
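The feedback-loop idea above amounts to mapping per-emotion attributes of the kind Vokaturi quantifies onto an opinion prediction. The sketch below uses an invented positive-minus-negative score with an invented threshold; neither the weighting nor the threshold is taken from the study, and the emotion dictionary is a stand-in for Vokaturi's actual output.

```python
# Hedged sketch: turn per-emotion probabilities (Vokaturi-style) into a
# binary opinion prediction. Score and threshold are invented for
# illustration, not taken from the study.

def predict_opinion(emotions, threshold=0.0):
    """Positive minus negative affect, thresholded into an opinion label."""
    positive = emotions.get("happiness", 0.0)
    negative = (emotions.get("anger", 0.0)
                + emotions.get("sadness", 0.0)
                + emotions.get("fear", 0.0))
    return "positive" if positive - negative > threshold else "negative"

print(predict_opinion({"happiness": 0.6, "sadness": 0.1, "anger": 0.05}))
```

The study's actual model is a trained classifier over these attributes plus survey answers; the point of the sketch is only the shape of the attributes-to-opinion mapping.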
180

Patient Psychological Factors Related to Cosmetic Surgery Satisfaction

Koveleskie, Michaela R. 10 August 2022 (has links)
No description available.
