1 |
WASABI: affect simulation for agents with believable interactivity / Becker-Asano, Christian. January 2008 (has links)
Also published as: Bielefeld, Univ., Dissertation, 2008.
|
2 |
Modélisation et détection des émotions à partir de données expressives et contextuelles / Emotion modelization and detection from expressive and contextual data / Berthelon, Franck 16 December 2013 (has links)
We present a computational model for emotion detection based on human behavioural expression. For this work, we use the two-factor theory of Schachter and Singer to map our architecture onto natural behaviour, using both expressive and contextual data to build our emotion detector. We focus our effort on expression interpretation by introducing Personalized Emotion Maps (PEMs), and on emotion contextualisation via an Emotion Ontology for Context Awareness (EmOCA). PEMs are motivated by Scherer’s complex-system model of emotions and represent emotion values determined from multiple sensors. PEMs are calibrated to individuals; a regression algorithm then uses the individual-specific PEMs to determine a person’s emotional feeling from sensor measurements of their bodily expressions. The aim of this architecture is to dissociate expression interpretation from sensor measurement, allowing flexibility in the choice of sensors. Moreover, PEMs can also be used in facial expression synthesis. EmOCA brings context into the pipeline to simulate cognitive modulation and weight the predicted emotion. We use a well-known interoperable reasoning tool, an ontology, allowing us to describe and reason about philia and phobia in order to modulate the emotion determined from expression. We present a prototype using facial expressions to evaluate emotion recognition from real-time video sequences. Interestingly, we note that the system exhibits a hysteresis-like behaviour during emotional change, as suggested by Scherer’s psychological model.
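The record gives no implementation details, so the following is only a rough sketch of the PEM idea as the abstract describes it: a per-person regression from multi-sensor expression features to an emotion estimate, modulated by a context weight standing in for EmOCA. The feature names, the least-squares model, and the weight values are all illustrative assumptions, not Berthelon's implementation.

```python
# A minimal sketch of the PEM idea described above: a per-person regression
# from multi-sensor expression features to an emotion estimate, followed by
# a context weight standing in for EmOCA's ontology-based modulation.
# All feature names, the linear model, and the context weights are
# illustrative assumptions, not Berthelon's implementation.
import numpy as np

# Calibration phase: expression features (e.g. smile intensity, brow raise,
# gesture energy) paired with self-reported valence/arousal for ONE person.
X_calib = np.array([[0.9, 0.1, 0.7],   # rows: calibration trials
                    [0.2, 0.8, 0.3],   # cols: hypothetical sensor features
                    [0.5, 0.4, 0.5],
                    [0.1, 0.9, 0.2]])
y_calib = np.array([[0.8, 0.6],        # self-reported (valence, arousal)
                    [-0.6, 0.7],
                    [0.1, 0.3],
                    [-0.8, 0.8]])

# Fit the personalized map by least squares (with a bias column).
A = np.hstack([X_calib, np.ones((len(X_calib), 1))])
W, *_ = np.linalg.lstsq(A, y_calib, rcond=None)

def pem_predict(features, context_weight=1.0):
    """Predict (valence, arousal); context_weight stands in for the
    philia/phobia modulation that EmOCA derives from an ontology."""
    raw = np.append(features, 1.0) @ W
    return raw * context_weight

# e.g. the same expression, dampened when the context is a known phobia
print(pem_predict([0.7, 0.2, 0.6]))                      # neutral context
print(pem_predict([0.7, 0.2, 0.6], context_weight=0.5))  # phobia context
```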
|
3 |
Identifying emotional states through keystroke dynamics / Epp, Clayton Charles 09 September 2010 (has links)
The ability to recognize emotions is an important part of building intelligent computers. Extracting the emotional aspects of a situation could provide computers with a rich context to make appropriate decisions about how to interact with the user or adapt the system response. The problem that we address in this thesis is that current methods of determining user emotion have two issues: the required equipment is expensive, and the majority of the sensors are invasive to the user. These problems limit the real-world applicability of existing emotion-sensing methods: the equipment costs limit the availability of the technology, and the obtrusive nature of the sensors makes them unrealistic for typical home or office settings. Our solution is to determine user emotions by analyzing the rhythm of an individual’s typing patterns on a standard keyboard. Our keystroke dynamics approach would allow for the unobtrusive determination of emotion using technology that is already in widespread use. We conducted a field study in which participants’ keystrokes were collected in situ and their emotional states were recorded via self-reports. Using various data mining techniques, we created models for 15 different emotional states. From our cross-validation results, we identify our best-performing emotional-state models as well as other emotional states that can be explored in future studies. We also provide a set of recommendations for future analysis on the existing data set, as well as suggestions for future data collection and experimentation.
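As a hedged illustration of the pipeline the abstract describes (timing features extracted from keystrokes, then fed to a learned model), here is a minimal sketch. The event format, the dwell/flight features, and the decision-tree classifier are assumptions; the thesis compares several data mining techniques across 15 emotional states.

```python
# A minimal sketch of the keystroke-dynamics pipeline described above:
# timing features are extracted from raw key events and fed to a classifier.
# The event format, the features, and the decision tree are illustrative
# assumptions; the thesis evaluates several data-mining techniques.
from sklearn.tree import DecisionTreeClassifier

def timing_features(events):
    """events: list of (key, down_time_s, up_time_s), in typing order.
    Returns mean dwell time (key held down) and mean flight time (gap
    between releasing one key and pressing the next)."""
    dwells = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return [sum(dwells) / len(dwells), sum(flights) / len(flights)]

# Hypothetical training data: feature vectors labelled with self-reported
# emotional state (the study collected 15 such states in situ).
X = [[0.11, 0.25], [0.09, 0.18], [0.16, 0.40], [0.14, 0.38]]
y = ["relaxed", "relaxed", "stressed", "stressed"]

clf = DecisionTreeClassifier().fit(X, y)

sample = [("h", 0.00, 0.12), ("i", 0.30, 0.41), ("!", 0.80, 0.93)]
print(clf.predict([timing_features(sample)]))  # e.g. ['stressed']
```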
|
4 |
MoodScope: Building a Mood Sensor from Smartphone Usage Patterns / Li Kam Wa, Robert 06 September 2012 (has links)
MoodScope is a first-of-its-kind smartphone software system that learns the mood of its user based on how the smartphone is used. While commonly available sensors on smartphones measure physical properties, MoodScope is a sensor that measures an important mental state of the user and brings mood as an important context into context-aware computing.
We design MoodScope using a formative study with 32 participants, collecting mood journals and usage data from them over two months. Through the study, we find that by analyzing communication history and application usage patterns, we can statistically infer a user’s daily mood average with 93% accuracy after a two-month training period. To a lesser extent, we can also estimate Sudden Mood Change events with reasonable accuracy (74%). Motivated by these results, we build a service, MoodScope, which analyzes usage history to act as a sensor of the user’s mood. We provide a MoodScope API that developers can use to create mood-enabled applications, and we build and deploy sample applications ourselves.
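The MoodScope API itself is not documented in this record, so the sketch below only illustrates the kind of inference the abstract describes: regressing a self-reported daily mood average onto communication and application-usage features. All feature names, values, and the choice of a linear model are assumptions.

```python
# A sketch of the kind of inference described above: a daily mood average
# is regressed onto communication and app-usage features. All feature
# names, values, and the linear model are illustrative assumptions, not
# the MoodScope implementation or its API.
from sklearn.linear_model import LinearRegression

# Hypothetical per-day usage features:
# [calls made, SMS sent, social-app minutes, browser minutes]
usage_history = [[4, 10, 35, 20],
                 [1,  2,  5, 60],
                 [6, 14, 50, 10],
                 [0,  1,  2, 45]]
# Self-reported daily mood averages on a pleasure scale (1..5),
# as a stand-in for the two-month mood-journal ground truth.
daily_mood = [4.2, 2.1, 4.6, 1.9]

model = LinearRegression().fit(usage_history, daily_mood)

today = [[3, 8, 30, 25]]
print(f"inferred mood: {model.predict(today)[0]:.2f}")
```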
|
5 |
Using Music and Emotion to Enable Effective Affective Computing / Bortz, Brennon Christopher 02 July 2019 (has links)
The computing devices with which we interact daily continue to become ever smaller, more intelligent, and more pervasive. Not only are they becoming more intelligent, but some are developing awareness of a user's affective state. Affective computing—computing that in some way senses, expresses, or modifies affect—is still a field very much in its youth. While progress has been made, the field is still limited by the need for larger sets of diverse, naturalistic, and multimodal data.
This work first considers effective strategies for designing psychophysiological studies that permit the assembly of very large samples that cross numerous demographic boundaries, data collection in naturalistic environments, distributed study locations, rapid iterations on study designs, and the simultaneous investigation of multiple research questions. It then explores how commodity hardware and general-purpose software tools can be used to record, represent, store, and disseminate such data. As a realization of these strategies, this work presents a new database from the Emotion in Motion (EiM) study of human psychophysiological response to musical affective stimuli comprising over 23,000 participants and nearly 67,000 psychophysiological responses.
Because music is an excellent tool for investigating human response to affective stimuli, this work uses this wealth of data to explore how to design more effective affective computing systems. It identifies and characterizes the strongest responses to the musical stimuli used in EiM, with a focus on modeling the characteristics of listeners that make them more or less prone to demonstrating strong physiological responses to music.
This dissertation contributes findings from a number of explorations of the relationships between strong reactions to music and the characteristics and self-reported affect of listeners. It not only demonstrates that such relationships exist, but also takes steps toward automatically predicting whether a listener will exhibit such exceptional responses. Second, this work contributes a flexible strategy and functional system for executing large-scale, distributed studies of psychophysiology and affect, and for synthesizing, managing, and disseminating the data collected through such efforts. Finally, and most importantly, this work presents the EiM database itself. / Doctor of Philosophy
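As a hedged illustration of what characterizing "strong physiological responses" to music can look like in practice, the sketch below flags sharp skin-conductance rises in a listener's signal during an excerpt. The signal type, sampling rate, and threshold are assumptions, not the EiM analysis itself.

```python
# A minimal sketch of flagging strong responses: large skin-conductance
# rises in a listener's signal during a musical excerpt. The sampling
# rate, threshold, and criterion are illustrative assumptions, not the
# EiM analysis itself.
import numpy as np

def strong_scr_events(eda, fs=4.0, rise_threshold=0.05):
    """Return times (s) where skin conductance (microsiemens) rises by
    more than rise_threshold within one second -- a crude proxy for a
    strong skin-conductance response (SCR)."""
    window = int(fs)  # samples in one second
    rises = eda[window:] - eda[:-window]
    return np.flatnonzero(rises > rise_threshold) / fs

# Synthetic 10 s trace at 4 Hz: flat, then a sharp response mid-excerpt.
t = np.arange(40) / 4.0
eda = 2.0 + 0.3 / (1.0 + np.exp(-(t - 5.0) * 4.0))  # sigmoidal SCR shape

print(strong_scr_events(eda))  # seconds at which a strong rise begins
```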
|
6 |
Software support for experience sampling / Lippold, Mike 25 February 2011 (has links)
User interface design is becoming more reliant on user emotional states to improve usability, adapt to the user’s state, and allow greater expressiveness. Historically, usability evaluation has relied on performance metrics, but user experience, with its emphasis on aesthetics and emotions, has become recognized as important for improving user interfaces. Research is ongoing into systems that automatically adapt to user states such as expertise or physical impairments, and emotions are the next frontier for adaptive user interfaces. Improving the emotional expressiveness of computers adds a missing element that exists in human face-to-face interactions. The first step in incorporating users’ emotions into usability evaluation, adaptive interfaces, and expressive interfaces is to sense and gather users’ emotional responses. Affective computing research has used predictive modeling to determine user emotional states, but studies are usually performed in controlled laboratory settings and lack realism. Field studies can be conducted to improve realism, but they pose a number of logistical challenges: user activity data is difficult to gather, ground truth for emotional state is difficult to collect, and relating the two is difficult. In this thesis, we describe a software solution that addresses the logistical issues of conducting affective computing field studies, and we evaluate the software in a field study of its own. Based on the results of our study, we found that a software solution can reduce the logistical issues of conducting an affective computing field study, and we provide suggestions for future affective computing field studies.
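As an illustration of the core logistical problem the thesis addresses (relating logged user-activity data to experience-sampling self-reports), here is a minimal sketch. The event fields and the nearest-report pairing rule are assumptions, not the design of the thesis software.

```python
# A minimal sketch of relating logged activity data to experience-sampling
# self-reports. The event fields and the pairing rule (nearest report
# within a time window) are illustrative assumptions, not the actual
# design of the thesis software.
from dataclasses import dataclass

@dataclass
class Report:                 # an experience-sampling self-report
    time_s: float
    emotion: str

@dataclass
class ActivityEvent:          # a logged interaction event
    time_s: float
    action: str

def label_events(events, reports, window_s=300.0):
    """Attach to each activity event the nearest self-report taken
    within window_s seconds, or None if no report is close enough."""
    labelled = []
    for ev in events:
        nearest = min(reports, key=lambda r: abs(r.time_s - ev.time_s))
        ok = abs(nearest.time_s - ev.time_s) <= window_s
        labelled.append((ev, nearest.emotion if ok else None))
    return labelled

reports = [Report(100.0, "frustrated"), Report(1000.0, "content")]
events = [ActivityEvent(80.0, "rapid backspacing"),
          ActivityEvent(950.0, "steady typing"),
          ActivityEvent(5000.0, "idle")]
for ev, emotion in label_events(events, reports):
    print(ev.action, "->", emotion)
```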
|
7 |
Automatic facial expression analysis / Baltrušaitis, Tadas January 2014 (has links)
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction that does not consider emotions ignores a whole channel of available information. Faces carry a large portion of our emotionally expressive behaviour: we use facial expressions to display our emotional states and to manage our interactions, and we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a computationally difficult task, especially in the presence of highly variable pose, expression, and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality. Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking, and propose a number of extensions that make the localisation of facial features more accurate. Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners; CLM-Z is robust to changes in illumination and shows better facial tracking performance. Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of CLM that deals with the issues of facial tracking in complex scenes through a novel landmark detector and a novel CLM fitting algorithm; CLNF outperforms state-of-the-art models for facial tracking in the presence of difficult illumination and varying pose. Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos, and show how the tools developed for facial tracking can be applied to emotion inference in music.
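As a schematic sketch, not the thesis's algorithms, the following shows the alternating structure of CLM fitting that the abstract builds on: each landmark moves toward the peak of a local detector response, and the shape is then regularized toward a shape prior. The toy random response map and the mean-shape regularizer stand in for trained patch experts and the PCA shape model of a real CLM.

```python
# A schematic sketch of the Constrained Local Model (CLM) fitting loop:
# landmarks are pulled toward local detector response peaks, then the
# whole shape is regularized toward a learned shape model. The random
# response map and mean-shape regularizer below are toy stand-ins for
# trained patch experts and the PCA shape model of a real CLM.
import numpy as np

rng = np.random.default_rng(0)

def response_peak(landmark_estimate, search_radius=3):
    """Stand-in for a patch expert: returns the best-scoring offset in a
    small search window around the current landmark estimate. A real CLM
    would score image patches here; this toy version scores at random."""
    offsets = [(dx, dy) for dx in range(-search_radius, search_radius + 1)
                        for dy in range(-search_radius, search_radius + 1)]
    scores = rng.random(len(offsets))
    return np.array(offsets[int(np.argmax(scores))], dtype=float)

def fit_clm(landmarks, mean_shape, iterations=10, alpha=0.5):
    """Alternate detector updates with shape regularization; alpha trades
    off detector evidence against the shape prior (PCA in a real CLM)."""
    x = landmarks.copy()
    for _ in range(iterations):
        x = x + np.array([response_peak(p) for p in x])   # detector step
        x = alpha * x + (1 - alpha) * mean_shape          # shape prior step
    return x

mean_shape = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # 3 toy points
init = mean_shape + rng.normal(0, 2, mean_shape.shape)
print(fit_clm(init, mean_shape))
```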
|
8 |
An Interview Study for Developing Subjective Measures to Record Self-Reported Mood in Older Adults: Implications for Assistive Technology Development / Bhardwaj, Devvrat 14 June 2023 (has links)
Increased life expectancy has led to 15% growth in the population of seniors (aged 65 and above) in Canada over the last 5 years, and this trend is expected to continue. However, the provision of personalized care is bottlenecked by a severe shortage of formal caregivers in the healthcare industry. Technological solutions have been proposed to supplement or replace human care, but have not been widely accepted due to their inability to adapt dynamically to user needs and to the context of the situation. Affective data (i.e., the emotions and moods of individuals) can be used to give such technological solutions context-awareness and artificial emotional intelligence, and thereby provide personalized support. Moreover, the brain’s capacity to process affective phenomena can serve as an indicator of the onset of neurodegenerative diseases. This research thoroughly investigated what affect is and how it can be used in computing in real-life scenarios. In particular, evidence was obtained on which biological signals, collected using a wearable sensor device, are capable of capturing the arousal dimension of affective states (emotions and moods). Furthermore, a qualitative study was conducted with older adults using semi-structured interviews to determine the feasibility and acceptability of different self-report measures of mood, which are crucial for capturing the valence dimension of affect. As the hypothesis that older adults would prefer a pictorial measure for self-reporting their mood was not supported, we proposed a prototype adjective-based mood-reporting instrument and outlined implications for future research.
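As a hedged sketch of the two-channel design the abstract describes (arousal from a wearable signal, valence from an adjective-based self-report), consider the following; the adjective-to-valence table and the heart-rate normalization are illustrative assumptions.

```python
# A minimal sketch of the two-channel design described above: arousal
# estimated from a wearable signal, valence taken from an adjective-based
# self-report, combined as a point in the circumplex model of affect.
# The adjective-to-valence table and the normalization are assumptions,
# not the thesis's actual instrument.
ADJECTIVE_VALENCE = {          # hypothetical instrument vocabulary
    "cheerful": 0.8, "calm": 0.4, "tired": -0.2,
    "lonely": -0.6, "irritable": -0.8,
}

def mood_point(reported_adjective, heart_rate_bpm, resting_bpm=65.0):
    """Return (valence, arousal) in [-1, 1] x [-1, 1]. Arousal is a crude
    normalized deviation of heart rate from rest -- a stand-in for the
    wearable-sensor arousal evidence discussed in the thesis."""
    valence = ADJECTIVE_VALENCE[reported_adjective]
    arousal = max(-1.0, min(1.0, (heart_rate_bpm - resting_bpm) / 40.0))
    return valence, arousal

print(mood_point("cheerful", 95))    # (0.8, 0.75)
print(mood_point("irritable", 70))   # (-0.8, 0.125)
```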
|