71

Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition

Beer, Jenay Michelle 08 April 2010 (has links)
An agent's facial expression may communicate emotive state to users both young and old. The ability to recognize emotions has been shown to differ with age, with older adults more commonly misidentifying the facial emotions of anger, fear, and sadness. This study examined whether emotion recognition of facial expressions differed between types of on-screen agents and between age groups. Three on-screen characters were compared: a human, a synthetic human, and a virtual agent. In this study, 42 younger (ages 18-28) and 42 older (ages 65-85) adults completed an emotion recognition task with static pictures of the characters demonstrating four basic emotions (anger, fear, happiness, and sadness) and neutral. The human face resulted in the highest proportion match, followed by the synthetic human, with the virtual agent showing the lowest proportion match. Both the human and synthetic human faces resulted in age-related differences for the emotions anger, fear, sadness, and neutral, with younger adults showing a higher proportion match. The virtual agent showed age-related differences for the emotions anger, fear, happiness, and neutral, with younger adults showing a higher proportion match. The data analysis and interpretation of the present study differed from previous work by utilizing two approaches to understanding emotion recognition. First, the misattributions participants made when identifying emotions were investigated. Second, a similarity index of the feature placement between any two virtual agent emotions was calculated, suggesting that emotions were commonly misattributed as other emotions similar in appearance. Overall, these results suggest that age-related differences extend beyond human faces to other types of on-screen characters, and that differences between older and younger adults in emotion recognition may be further explained by perceptual discrimination between two emotions of similar feature appearance.
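A feature-placement similarity index of the kind described can be illustrated by comparing landmark coordinates between two expressions. The sketch below is a minimal, assumption-laden version: the three landmarks and the cosine-style index are hypothetical stand-ins, not the thesis's exact formulation.

```python
import numpy as np

def similarity_index(landmarks_a: np.ndarray, landmarks_b: np.ndarray) -> float:
    """Similarity between two expressions, each given as an (n_landmarks, 2)
    array of feature placements (e.g., brow, eyelid, mouth-corner positions).
    Values near 1 indicate near-identical relative feature placement.
    This cosine-style index is an illustrative assumption, not the
    thesis's exact metric."""
    a = (landmarks_a - landmarks_a.mean(axis=0)).ravel()  # remove overall position
    b = (landmarks_b - landmarks_b.mean(axis=0)).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-landmark expressions: a high index means similar feature
# placement, making the two emotions likely candidates for misattribution.
anger = np.array([[0.2, 0.8], [0.8, 0.8], [0.5, 0.3]])
fear = np.array([[0.2, 0.9], [0.8, 0.9], [0.5, 0.3]])
print(similarity_index(anger, fear))
```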
72

Supervised feature learning via sparse coding for music information retrieval

O'Brien, Cian John 08 June 2015 (has links)
This thesis explores the ideas of feature learning and sparse coding for Music Information Retrieval (MIR). Sparse coding is an algorithm which aims to learn new feature representations from data automatically. In contrast to previous work using sparse coding in an MIR context, supervised sparse coding, which makes explicit use of ground-truth labels during the learning process, is also investigated. Here, sparse coding and supervised sparse coding are applied to two MIR problems: classification of musical genre and recognition of the emotional content of music. A variation of Label Consistent K-SVD is used to add supervision during the dictionary learning process. In the case of Music Genre Recognition (MGR), an additional discriminative term is added to encourage tracks from the same genre to have similar sparse codes. For Music Emotion Recognition (MER), a linear regression term is added to learn an optimal classifier and dictionary pair. The results indicate that while sparse coding performs well for MGR, the additional supervision fails to improve performance. In the case of MER, supervised coding significantly outperforms both standard sparse coding and commonly used designed features, namely MFCCs and pitch chroma.
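As a rough sketch of the unsupervised baseline described here, the example below learns an overcomplete dictionary from audio-like features and encodes each frame as a sparse code; the label-consistency and regression terms used for supervision in the thesis would be added on top of this objective. The feature dimensions and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical data: 500 frames of 20-dimensional features (e.g., MFCCs),
# one row per frame. In the thesis, codes like these feed a genre/emotion model.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))

# Standard (unsupervised) sparse coding, roughly:
#   min ||X - A D||^2 + alpha * ||A||_1  over dictionary D and codes A.
# LC-KSVD-style supervision adds label-consistency/classifier terms to this.
dico = DictionaryLearning(n_components=64, alpha=1.0,
                          transform_algorithm="lasso_lars", random_state=0)
codes = dico.fit_transform(X)            # sparse codes A, shape (500, 64)
print(codes.shape, np.mean(codes != 0))  # sparsity of the learned codes
```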
73

Automatic Macro- and Micro-Facial Expression Spotting and Applications

Shreve, Matthew Adam 01 January 2013 (has links)
Automatically determining the temporal characteristics of facial expressions has extensive applications in domains such as human-machine interfaces for emotion recognition, face identification, and medical analysis. However, many papers in the literature have not addressed the step of determining when such expressions occur. This dissertation is focused on the problem of automatically segmenting macro- and micro-expression frames (or retrieving the expression intervals) in video sequences, without the need to train a model on a specific subset of such expressions. The proposed method exploits the non-rigid facial motion that occurs during facial expressions by modeling the strain observed during the elastic deformation of facial skin tissue. The method is capable of spotting both macro-expressions, which are typically associated with emotions such as happiness, sadness, anger, disgust, and surprise, and rapid micro-expressions, which are typically, but not always, associated with semi-suppressed macro-expressions. Additionally, we have used this method to automatically retrieve strain maps generated from peak expressions for human identification. This dissertation also contributes a novel 3-D surface strain estimation algorithm using commodity 3-D sensors aligned with an HD camera. We demonstrate the feasibility of the method, as well as the improvements gained when using 3-D, by providing empirical and quantitative comparisons between 2-D and 3-D strain estimations.
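The strain-based spotting idea can be sketched in a few lines: estimate dense optical flow between frames, differentiate the flow field spatially, and monitor the resulting strain magnitude over the face. This is a minimal 2-D sketch assuming OpenCV's Farnebäck flow; the flow parameters and any thresholding strategy are illustrative guesses, not the dissertation's exact settings.

```python
import cv2
import numpy as np

def strain_magnitude(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Per-pixel strain magnitude from dense optical flow between two
    grayscale frames (a 2-D sketch; the Farnebäck parameters are guesses)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    # np.gradient returns derivatives along rows (y) first, then columns (x).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    # Infinitesimal strain tensor: e = 0.5 * (grad(u,v) + grad(u,v)^T).
    exx, eyy = du_dx, dv_dy
    exy = 0.5 * (du_dy + dv_dx)
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)

# Synthetic frames just to exercise the function; on real video, brief spikes
# in facial strain suggest micro-expressions, longer spans macro-expressions.
prev_f = np.zeros((64, 64), dtype=np.uint8)
next_f = prev_f.copy()
cv2.circle(next_f, (32, 32), 10, 255, -1)
print(strain_magnitude(prev_f, next_f).max())
```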
74

Inferring Speaker Affect in Spoken Natural Language Communication

Pon-Barry, Heather Roberta 15 March 2013 (has links)
The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards machine listening: interpreting the full spectrum of information in an audio stream. One part of machine listening, the problem that this thesis focuses on, is the task of using information in the speech signal to infer a person’s emotional or mental state. In this dissertation, our approach is to assess the utility of prosody, or manner of speaking, in classifying speaker affect. Prosody refers to the acoustic features of natural speech: rhythm, stress, intonation, and energy. Affect refers to a person’s emotions and attitudes such as happiness, frustration, or uncertainty. We focus on one specific dimension of affect: level of certainty. Our goal is to automatically infer whether a person is confident or uncertain based on the prosody of his or her speech. Potential applications include conversational dialogue systems (e.g., in educational technology) and voice search (e.g., smartphone personal assistants). There are three main contributions of this thesis. The first contribution is a method for eliciting uncertain speech that binds a speaker’s uncertainty to a single phrase within the larger utterance, allowing us to compare the utility of contextually based prosodic features. Second, we devise a technique for computing prosodic features from utterance segments that both improves uncertainty classification and can be used to determine which phrase a speaker is uncertain about. The level-of-certainty classifier achieves an accuracy of 75%. Third, we examine the differences between perceived, self-reported, and internal level of certainty, concluding that perceived certainty is aligned with internal certainty for some but not all speakers and that self-reports are a good proxy for internal certainty.
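A rough illustration of prosody-based certainty classification: compute utterance-level pitch and energy statistics and feed them to a standard classifier. The feature set, the pyin pitch range, and the SVM are assumptions made for this sketch, not the thesis's actual pipeline.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def prosodic_features(wav_path: str) -> np.ndarray:
    """Utterance-level prosodic statistics: pitch (f0) and energy summaries."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[~np.isnan(f0)]             # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]  # frame-level energy
    return np.array([f0.mean(), f0.std(), f0.max() - f0.min(),
                     rms.mean(), rms.std()])

# Hypothetical training set: one feature vector per utterance, labeled
# confident (1) or uncertain (0) by annotators.
# X = np.stack([prosodic_features(p) for p in wav_paths])
# clf = SVC().fit(X, labels)
```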
75

POKERFACE: EMOTION BASED GAME-PLAY TECHNIQUES FOR COMPUTER POKER PLAYERS

Cockerham, Lucas 01 January 2004 (has links)
Numerous algorithms/methods exist for creating computer poker players. This thesis compares and contrasts them. A set of poker agents for the system PokerFace is then introduced. A survey of the problem of facial expression recognition is included in the hopes it may be used to build a better computer poker player.
76

The evaluation of the stability of acoustic features in affective conveyance across multiple emotional databases

Sun, Rui 20 September 2013 (has links)
The objective of the research presented in this thesis was to systematically investigate a computational structure for cross-database emotion recognition. The research consisted of evaluating the stability of acoustic features, particularly glottal and Teager Energy based features, and investigating three normalization methods and two data fusion techniques. One of the challenges of cross-database training and testing is accounting for potential variation in the types of emotions expressed as well as in the recording conditions. In an attempt to alleviate the impact of these variations, three normalization methods for the acoustic data were studied. The lack of a sufficiently large and diverse emotional database for training the classifier motivated the use of multiple databases, which posed another challenge: data fusion. This thesis proposed two data fusion techniques, pre-classification SDS and post-classification ROVER, to study the issue. Using the glottal, TEO, and TECC features, whose stability in distinguishing emotions has been demonstrated across multiple databases, the systematic computational structure proposed in this thesis could improve the performance of cross-database binary emotion recognition by up to 23% for neutral vs. emotional and 10% for positive vs. negative.
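Post-classification fusion in the ROVER spirit can be sketched as majority voting over classifiers trained on different databases. The example below is a minimal stand-in with synthetic data; the thesis's actual SDS and ROVER procedures are more involved than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical stand-ins for three emotional databases: (features, binary labels),
# e.g., neutral vs. emotional, after per-database normalization.
databases = [(rng.standard_normal((200, 12)), rng.integers(0, 2, 200))
             for _ in range(3)]
X_test = rng.standard_normal((10, 12))

# Post-classification fusion: train one classifier per database, then fuse
# their decisions on the test data by majority vote.
models = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in databases]
votes = np.stack([m.predict(X_test) for m in models])  # shape (3, 10)
fused = (votes.mean(axis=0) >= 0.5).astype(int)        # majority vote
print(fused)
```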
77

Investigation of Speech Emotion Features

Žukas, Gediminas 17 June 2014 (has links)
This Master's thesis examines the task of automatic speech emotion recognition. Although the popularity of this area has grown considerably in recent years, there is still a lack of literature describing the performance of specific acoustic features (or feature sets) for recognizing emotions in speech. This issue formed the purpose of the work: to explore suitable acoustic feature sets for speech emotion recognition. The work includes an analysis of emotion feature systems and the development of a testing system for speech emotion feature sets, with which experiments on those feature sets were carried out. The results are very similar to, or slightly ahead of, recently published emotion recognition results on the Berlin emotional speech database. Based on the experimental results, recommendations for creating effective speech emotion feature sets were drawn up. Master's degree thesis in informatics engineering. Vilnius Gediminas Technical University. Vilnius, 2014.
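A testing system of this kind typically cross-validates one classifier per candidate feature set and compares the scores. The sketch below illustrates that protocol with synthetic placeholders for the feature sets; it is not the system built in the thesis.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_utterances = 200
# Hypothetical per-utterance feature sets and emotion labels (7 classes,
# matching the Berlin emotional speech database).
feature_sets = {"mfcc_stats": rng.standard_normal((n_utterances, 26)),
                "prosody":    rng.standard_normal((n_utterances, 8)),
                "combined":   rng.standard_normal((n_utterances, 34))}
labels = rng.integers(0, 7, n_utterances)

# Compare feature sets by cross-validated accuracy of the same classifier.
for name, X in feature_sets.items():
    scores = cross_val_score(SVC(), X, labels, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```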
78

Emotion recognition in context

Stanley, Jennifer Tehan 12 June 2008 (has links)
In spite of evidence for increased maintenance and/or improvement of emotional experience in older adulthood, past work suggests that young adults are better able than older adults to identify emotions in others. Typical emotion recognition tasks employ a single, closed-response methodology. Because older adults are more complex in their emotional experience than young adults, they may approach such response-limited emotion recognition tasks in a qualitatively different manner than young adults. The first study of the present research investigated whether older adults were more likely than young adults to interpret emotional expressions (facial task) and emotional situations (lexical task) as representing a mix of different discrete emotions. In the lexical task, older adults benefited more than young adults from the opportunity to provide more than one response. In the facial task, however, there was a cross-over interaction such that older adults benefited more than young adults for anger recognition, whereas young adults benefited more than older adults for disgust recognition. A second study investigated whether older adults benefit more than young adults from contextual cues. The addition of contextual information improved the performance of older adults more than that of young adults. Age differences in anger recognition, however, persisted across all conditions. Overall, these findings are consistent with an age-related increase in the perception of mixed emotions in lexical information. Moreover, they suggest that contextual information can help disambiguate emotional information.
79

Automatic emotion recognition: an investigation of acoustic and prosodic parameters

Sethu, Vidhyasaharan, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2009 (has links)
An essential step to achieving human-machine speech communication with the naturalness of communication between humans is developing a machine that is capable of recognising emotions based on speech. This thesis presents research addressing this problem, by making use of acoustic and prosodic information. At a feature level, novel group delay and weighted frequency features are proposed. The group delay features are shown to emphasise information pertaining to formant bandwidths and are shown to be indicative of emotions. The weighted frequency feature, based on the recently introduced empirical mode decomposition, is proposed as a compact representation of the spectral energy distribution and is shown to outperform other estimates of energy distribution. Feature level comparisons suggest that detailed spectral measures are very indicative of emotions while exhibiting greater speaker specificity. Moreover, it is shown that all features are characteristic of the speaker and require some sort of normalisation prior to use in a multi-speaker situation. A novel technique for normalising speaker-specific variability in features is proposed, which leads to significant improvements in the performances of systems trained and tested on data from different speakers. This technique is also used to investigate the amount of speaker-specific variability in different features. A preliminary study of phonetic variability suggests that phoneme-specific traits are not modelled by the emotion models and that speaker variability is a more significant problem in the investigated setup. Finally, a novel approach to emotion modelling that takes into account temporal variations of speech parameters is analysed. An explicit model of the glottal spectrum is incorporated into the framework of the traditional source-filter model, and the parameters of this combined model are used to characterise speech signals. An automatic emotion recognition system that takes into account the shape of the contours of these parameters as they vary with time is shown to outperform a system that models only the parameter distributions. The novel approach is also empirically shown to be on par with human emotion classification performance.
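A common baseline form of the speaker normalisation motivated above is per-speaker standardisation of each feature dimension, sketched below; the thesis proposes its own normalisation technique, which this sketch does not reproduce.

```python
import numpy as np

def per_speaker_zscore(features: np.ndarray, speaker_ids: np.ndarray) -> np.ndarray:
    """Standardise each feature dimension within each speaker, removing
    speaker-specific offsets and scales before emotion modelling.
    A baseline sketch only; the thesis's normalisation technique differs."""
    normed = np.empty_like(features, dtype=float)
    for spk in np.unique(speaker_ids):
        idx = speaker_ids == spk
        mu = features[idx].mean(axis=0)
        sigma = features[idx].std(axis=0) + 1e-8  # avoid division by zero
        normed[idx] = (features[idx] - mu) / sigma
    return normed

# Example: 6 utterances of 3-dimensional features from two speakers.
feats = np.arange(18, dtype=float).reshape(6, 3)
spk = np.array([0, 0, 0, 1, 1, 1])
print(per_speaker_zscore(feats, spk))
```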
80

Examining convergence of emotional abilities using objective measures

Paulsson, Niklas January 2018 (has links)
Recent developments in emotion and EI research have introduced new ways of measuring emotional abilities, including performance-based tests. The current study aimed to examine the associations among three emotional abilities, using three objective measures. The study consisted of a survey and an experiment in which 89 participants completed performance-based multimodal emotion recognition and emotion understanding tests, and a conditioning task using socially aversive and appetitive stimuli. The results showed that individuals who are more proficient in emotion understanding were more accurate in emotion recognition and more effective in extinguishing fear-evoking responses. In addition, individuals proficient in emotion recognition showed stronger general responding during fear acquisition. Furthermore, various findings emerged relating to emotion understanding and emotion recognition modalities, including effects of item difficulty and of specific emotions. The current findings support the notion of separate but related emotional abilities, while also highlighting a potentially underlying mechanism or core emotional competence.
