1

WASABI: affect simulation for agents with believable interactivity

Becker-Asano, Christian. January 2008
Also published as a doctoral dissertation, Bielefeld University, 2008.
2

Modélisation et détection des émotions à partir de données expressives et contextuelles / Emotion modeling and detection from expressive and contextual data

Berthelon, Franck 16 December 2013
We present a computational model for emotion detection based on human behavioural expression. For this work, we use the two-factor theory of Schachter and Singer to map our architecture onto natural behaviour, using both expressive and contextual data to build our emotion detector. We focus our effort on expression interpretation by introducing Personalized Emotion Maps (PEMs), and on emotion contextualisation via an Emotion Ontology for Context Awareness (EmOCA). PEMs are motivated by Scherer's complex-system model of emotions and represent emotion values determined from multiple sensors. PEMs are calibrated to individuals; a regression algorithm then uses an individual's PEMs to determine that person's emotional feeling from sensor measurements of their bodily expressions. The aim of this architecture is to dissociate expression interpretation from sensor measurement, allowing flexibility in the choice of sensors. Moreover, PEMs can also be used in facial-expression synthesis. EmOCA brings context into the emotion-modulating cognitive input to weight the predicted emotion. We use a well-known interoperable reasoning tool, an ontology, which allows us to describe and reason about philia and phobia in order to modulate the emotion determined from expression. We present a prototype that uses facial expressions to evaluate emotion recognition from real-time video sequences. Interestingly, we also note that the system exhibits a hysteresis-like phenomenon during emotional change, as suggested by Scherer's psychological model.
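As a concrete illustration of the pipeline described in this abstract, the following is a minimal sketch under stated assumptions: the feature names, toy calibration values, and the scalar context weight standing in for EmOCA's ontology-derived philia/phobia modulation are all illustrative, not taken from the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Calibration phase: expression features paired with self-reported
# (valence, arousal) for one individual; all values are toy numbers.
X_calib = np.array([[0.2, 0.1],
                    [0.8, 0.6],
                    [0.5, 0.9],
                    [0.1, 0.4]])          # e.g. smile intensity, brow raise
y_calib = np.array([[0.3, 0.2],
                    [0.9, 0.7],
                    [0.4, 0.8],
                    [0.1, 0.3]])          # self-reported (valence, arousal)

pem = LinearRegression().fit(X_calib, y_calib)  # the person's calibrated map

def detect_emotion(features, context_weight=1.0):
    """Predict (valence, arousal); context_weight < 1 attenuates the felt
    emotion (phobia-like context), > 1 amplifies it (philia-like)."""
    valence, arousal = pem.predict([features])[0]
    return valence * context_weight, arousal * context_weight

print(detect_emotion([0.7, 0.5], context_weight=0.8))
```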
3

Identifying emotional states through keystroke dynamics

Epp, Clayton Charles 09 September 2010
The ability to recognize emotions is an important part of building intelligent computers. Extracting the emotional aspects of a situation could provide computers with a rich context to make appropriate decisions about how to interact with the user or adapt the system response. The problem that we address in this thesis is that current methods of determining user emotion have two issues: the required equipment is expensive, and the majority of the sensors are invasive to the user. These problems limit the real-world applicability of existing emotion-sensing methods: equipment costs limit the availability of the technology, and obtrusive sensors are not realistic in typical home or office settings. Our solution is to determine user emotions by analyzing the rhythm of an individual's typing patterns on a standard keyboard. Our keystroke-dynamics approach allows emotion to be determined without influencing the user, using technology that is already in widespread use. We conducted a field study in which participants' keystrokes were collected in situ and their emotional states were recorded via self-reports. Using various data-mining techniques, we created models for 15 different emotional states. From our cross-validation results, we identify our best-performing emotional-state models as well as other emotional states that can be explored in future studies. We also provide a set of recommendations for future analysis of the existing data set, along with suggestions for future data collection and experimentation.
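As a rough sketch of the keystroke-dynamics idea, the snippet below derives the two classic timing features, dwell (key-hold) time and flight (between-key) time, and fits a single binary per-state classifier. The event format, toy timings, and the choice of a decision tree are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def timing_features(events):
    """events: list of (press_time, release_time) per keystroke, in seconds."""
    dwell = [release - press for press, release in events]
    flight = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return [np.mean(dwell), np.std(dwell), np.mean(flight), np.std(flight)]

# Toy training data: one feature vector per typing episode, labelled with
# whether a given state (say, "relaxed") was self-reported for that episode.
X = [timing_features([(0.00, 0.08), (0.15, 0.22), (0.30, 0.41)]),
     timing_features([(0.00, 0.12), (0.40, 0.55), (0.90, 1.10)])]
y = [1, 0]  # one binary model per emotional state

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([timing_features([(0.00, 0.09), (0.18, 0.26), (0.33, 0.45)])]))
```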
4

MoodScope: Building a Mood Sensor from Smartphone Usage Patterns

Li Kam Wa, Robert 06 September 2012
MoodScope is a first-of-its-kind smartphone software system that learns the mood of its user from how the smartphone is used. While commonly available smartphone sensors measure physical properties, MoodScope is a sensor that measures an important mental state of the user, bringing mood as an important context into context-aware computing. We design MoodScope using a formative study with 32 participants, collecting mood journals and usage data from them over two months. Through the study, we find that by analyzing communication history and application-usage patterns, we can statistically infer a user's daily mood average with 93% accuracy after a two-month training period. To a lesser extent, we can also estimate Sudden Mood Change events with reasonable accuracy (74%). Motivated by these results, we build a service, MoodScope, which analyzes usage history to act as a sensor of the user's mood. We provide a MoodScope API for developers to create mood-enabled applications, and we create and deploy sample applications ourselves.
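The inference step can be pictured as a small personalized regression over usage counts. The sketch below is an assumption-laden stand-in (invented feature columns, toy daily counts, ridge regression) rather than MoodScope's actual model.

```python
import numpy as np
from sklearn.linear_model import Ridge

# One row per day: usage counts for [calls, SMS, email, social app, browser].
usage = np.array([[5, 20, 10, 30, 12],
                  [2,  5,  8, 10, 25],
                  [8, 30, 12, 40,  8]])
daily_mood = np.array([4.1, 2.8, 4.5])   # self-reported daily mood average

model = Ridge(alpha=1.0).fit(usage, daily_mood)   # trained per user
print(model.predict([[6, 18, 9, 28, 10]]))        # inferred mood, new day
```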
5

Using Music and Emotion to Enable Effective Affective Computing

Bortz, Brennon Christopher 02 July 2019
The computing devices with which we interact daily continue to become smaller, more intelligent, and more pervasive. Not only are they becoming more intelligent, but some are developing awareness of a user's affective state. Affective computing (computing that in some way senses, expresses, or modifies affect) is still a field very much in its youth. While progress has been made, the field is still limited by the need for larger sets of diverse, naturalistic, and multimodal data. This work first considers effective strategies for designing psychophysiological studies that permit the assembly of very large samples crossing numerous demographic boundaries, data collection in naturalistic environments, distributed study locations, rapid iteration on study designs, and the simultaneous investigation of multiple research questions. It then explores how commodity hardware and general-purpose software tools can be used to record, represent, store, and disseminate such data. As a realization of these strategies, this work presents a new database from the Emotion in Motion (EiM) study of human psychophysiological response to musical affective stimuli, comprising over 23,000 participants and nearly 67,000 psychophysiological responses. Because music is an excellent tool for investigating human response to affective stimuli, this work uses this wealth of data to explore how to design more effective affective computing systems by characterizing the strongest responses to the musical stimuli used in EiM, with a focus on modeling the characteristics of listeners that make them more or less prone to strong physiological responses. This dissertation contributes the findings from a number of explorations of the relationships between strong reactions to music and the characteristics and self-reported affect of listeners. It demonstrates not only that such relationships exist, but also takes steps toward automatically predicting whether a listener will exhibit such exceptional responses. Second, this work contributes a flexible strategy and functional system both for successfully executing large-scale, distributed studies of psychophysiology and affect, and for synthesizing, managing, and disseminating the data collected through such efforts. Finally, and most importantly, this work presents the EiM database itself. / Doctor of Philosophy
6

Software support for experience sampling

Lippold, Mike 25 February 2011
User interface design is becoming more reliant on user emotional states to improve usability, adapt to the user's state, and allow greater expressiveness. Historically, usability evaluation has relied on performance metrics, but user experience, with its emphasis on aesthetics and emotions, has become recognized as important for improving user interfaces. Research is ongoing into systems that automatically adapt to user states such as expertise or physical impairments, and emotions are the next frontier for adaptive user interfaces. Improving the emotional expressiveness of computers adds a missing element that exists in human face-to-face interactions. The first step in incorporating users' emotions into usability evaluation, adaptive interfaces, and expressive interfaces is to sense and gather users' emotional responses. Affective computing research has used predictive modeling to determine user emotional states, but studies are usually performed in controlled laboratory settings and lack realism. Field studies can be conducted to improve realism, but they pose a number of logistical challenges: user activity data is difficult to gather, emotional-state ground truth is difficult to collect, and relating the two is difficult. In this thesis, we describe a software solution that addresses the logistical issues of conducting affective computing field studies, and we evaluate the software in a field study of its own. Based on the results of our study, we found that a software solution can reduce the logistical issues of conducting an affective computing field study, and we provide some suggestions for future affective computing field studies.
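A minimal sketch of the experience-sampling pattern such software supports: log activity continuously and prompt for timestamped self-reports at randomized intervals, so the two streams can later be joined by timestamp. The storage format and prompt mechanics here are invented for illustration, not the thesis's design.

```python
import random
import time

log = []  # one store for both streams, joined later by timestamp

def record_activity(event):
    log.append({"t": time.time(), "kind": "activity", "event": event})

def prompt_self_report():
    # A real tool would raise a dialog; stdin stands in for it here.
    rating = int(input("Rate your current valence, 1 (negative) to 5 (positive): "))
    log.append({"t": time.time(), "kind": "self_report", "valence": rating})

record_activity("window_focus:editor")
time.sleep(random.uniform(1, 3))  # stand-in for a 30-90 minute prompt interval
prompt_self_report()
print(log)
```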
7

Automatic facial expression analysis

Baltrušaitis, Tadas January 2014
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction that does not consider emotions ignores a whole channel of available information. Faces contain a large portion of our emotionally expressive behaviour. We use facial expressions to display our emotional states and to manage our interactions. Furthermore, we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a very difficult task computationally, especially in the presence of highly variable pose, expression, and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality. Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking, and propose a number of extensions that make the location of facial features more accurate. Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners. CLM-Z is robust to changes in illumination and shows better facial tracking performance. Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of CLM that deals with the issues of facial tracking in complex scenes. It achieves this through a novel landmark detector and a novel CLM fitting algorithm. CLNF outperforms state-of-the-art models for facial tracking in the presence of difficult illumination and varying pose. Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos. I also show how the tools developed for facial tracking can be applied to emotion inference in music.
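As a toy illustration of the shape-constraint step shared by CLM-family fitters (emphatically not the thesis's implementation), the snippet below takes per-landmark response-map peaks and projects them onto a small PCA shape model, so the updated landmarks remain a plausible face shape. The mean shape, basis, and peak values are made up.

```python
import numpy as np

# Mean shape and one PCA mode for three landmarks, flattened (x0, y0, ..., y2).
mean_shape = np.array([0.0, 0.0, 1.0, 0.0, 0.5, 1.0])
basis = np.array([[0.1, 0.0, -0.1, 0.0, 0.0, 0.05]]).T   # shape (6, 1)

def constrain(peaks):
    """Project per-landmark response-map peaks onto the shape subspace,
    yielding the closest plausible face shape."""
    delta = peaks - mean_shape
    p, *_ = np.linalg.lstsq(basis, delta, rcond=None)  # least-squares params
    return mean_shape + basis @ p

peaks = np.array([0.05, -0.02, 0.95, 0.03, 0.52, 1.04])
print(constrain(peaks))
```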
8

The Use of the CAfFEINE Framework in a Step-by-Step Assembly Guide

Ketchum, Devin Kyle 29 January 2020
Today's technology is becoming more interactive with voice assistants like Siri. However, interactive systems such as Siri make mistakes. The purpose of this thesis is to explore using affect as an implicit feedback channel so that such mistakes can be corrected easily in real time. The CAfFEINE Framework, created by Dr. Saha, is a context-aware affective feedback loop in an intelligent environment. The research described in this thesis focuses on analyzing a user's physiological response to the service provided by an intelligent environment. To test this feedback loop, an experiment was constructed using an on-screen, step-by-step assembly guide for a Tangram puzzle. To categorize the user's response to the experiment, baseline readings were gathered for a user's stressed and non-stressed states; the Paced Stroop Test and two other baseline tests were conducted to capture these two states. The data gathered in the baseline tests was then used to train a support vector machine to predict the user's response to the Tangram experiment. During the data-analysis phase of the research, the predictions for the Tangram experiment were not as expected. Multiple trials of training data for the support vector machine were explored, but the data gathered throughout this research was not enough to draw proper conclusions. More focus was then given to analyzing the pre-processed data of the baseline tests, in an attempt to find a factor or group of factors that would determine whether a user's physiological responses would be useful for training the support vector machine. Trends were found when comparing the areas under the curves of the Paced Stroop Test phasic-driver plots, and these comparison factors might be a useful approach for differentiating users based upon their physiological responses during the Paced Stroop Test. / Master of Science / The purpose of this thesis was to use the CAfFEINE Framework, proposed by Dr. Saha, in a real-world environment. Dr. Saha's framework uses a user's physical responses, such as heart rate, in a smart environment to give information to the smart devices. For example, if Siri gave a user directions to someone's home and told the user to turn right when they knew they needed to turn left, the user would have a physical reaction: their heart rate would increase. If the user were wearing a smart watch, Siri could detect the heart-rate increase, realize from past experience with that user that the information she gave was incorrect, and then correct herself. My research focused on measuring user reaction to a smart service provided in a real-world situation, using a Tangram puzzle as a mock version of an industrial assembly task. Users were asked to follow on-screen instructions to assemble the Tangram puzzle. Their reactions were recorded through a smart watch and analyzed post-experiment. Based on the results of a Paced Stroop Test taken before the experiment, a computer algorithm predicted their stress levels for each service provided by the step-by-step instruction guide. However, the results did not turn out as expected, so the rest of the research focused on why the results did not support Dr. Saha's previous framework results.
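The stressed/non-stressed classification setup described above can be sketched as follows. The two features and all numbers are invented for illustration, and the thesis's actual preprocessing (e.g. phasic-driver extraction from skin conductance) is omitted.

```python
import numpy as np
from sklearn.svm import SVC

# One row per window: [mean heart rate (bpm), phasic skin-conductance area].
baseline = [[68, 0.12], [70, 0.15], [66, 0.10]]   # relaxed baseline readings
stroop = [[88, 0.55], [92, 0.61], [85, 0.48]]     # Paced Stroop Test readings
X = np.array(baseline + stroop)
y = np.array([0] * len(baseline) + [1] * len(stroop))  # 0 = calm, 1 = stressed

clf = SVC(kernel="rbf").fit(X, y)
step_window = [[81, 0.40]]   # one window recorded during an assembly step
print("stressed" if clf.predict(step_window)[0] else "calm")
```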
