  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Emotional design : an investigation into designers' perceptions of incorporating emotions in software

Gutica, Mirela 11 1900 (has links)
In my teaching and software development practice, I realized that most applications with human-computer interaction do not respond to users' emotional needs. The dualism of reason and emotion as two fairly opposite entities that dominated Western philosophy was also reflected in software design. Computing was originally intended to provide applications for military and industrial activities and was primarily associated with cognition and rationality. Today, more and more computer applications interact with users in very complex and sophisticated ways. In human-computer interaction, attention is given to issues of usability and user modeling, but techniques to emotionally engage users or respond to their emotional needs have not been fully developed, even as specialists like Klein, Norman and Picard argued that machines that recognize and express emotions respond better and more appropriately to user interaction (Picard, 1997; Picard & Klein, 2002; Norman, 2004). This study investigated emotion from designers' perspectives and tentatively concludes that there is little awareness and involvement in emotional design in the IT community. By contrast, participants in this study (36 IT specialists from various fields) strongly supported the idea of emotional design and confirmed the need for methodologies and theoretical models to research emotional design. Based on a review of theory, surveys and interviews, I identified a set of themes for heuristics of emotional design and recommended future research directions. Attention was given to consequences; participants in this study raised issues of manipulation, ethical responsibilities of designers, and the need for regulations, and recommended that emotional design should carry standard ethical guidelines for games and any other applications.
The research design utilized a mixed QUAN-qual methodological model proposed by Creswell (2003) and Gay, Mills, and Airasian (2006), which was modified to equally emphasize both quantitative and qualitative stages. An instrument in the form of a questionnaire was designed, tested and piloted in this study and will be improved and used in future research. / Education, Faculty of / Curriculum and Pedagogy (EDCP), Department of / Graduate
62

Emotion Recognition from EEG Signals using Machine Learning

Moshfeghi, Mohammadshakib, Bartaula, Jyoti Prasad, Bedasso, Aliye Tuke January 2013 (has links)
The promise of affective computing is to make machines more empathetic toward the user. Machines capable of emotion recognition can, in effect, look inside the user's head and act according to the observed mental state. In this thesis project, we investigate different feature sets for building an emotion recognition system from electroencephalographic (EEG) signals. We used pictures from the International Affective Picture System to induce three emotional states: positive valence (pleasant), neutral, and negative valence (unpleasant), as well as three sets of binary states: positive valence vs. not positive valence; negative valence vs. not negative valence; and neutral vs. not neutral. Data were recorded from subjects using a head cap with six electrodes at the front of the scalp. To solve the recognition task, we developed a system based on Support Vector Machines (SVM) and extracted features, some drawn from the literature and some proposed by ourselves, to rate the recognition of emotional states. With this system we achieved an average recognition rate of up to 54% for the three emotional states and up to 74% for the binary states, based solely on EEG signals.
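The feature-extraction and classification pipeline described in this abstract can be sketched as follows. This is a simplified, stdlib-only stand-in: direct-DFT band energies approximate the spectral features commonly used with EEG, and a nearest-centroid rule stands in for the SVM; the sampling rate, band limits, and state labels are illustrative assumptions, not the thesis's actual setup.

```python
import math

FS = 128  # assumed sampling rate in Hz (illustrative)

def band_energy(x, lo, hi, fs=FS):
    # Spectral energy of one epoch in [lo, hi] Hz via a direct DFT;
    # a stdlib-only stand-in for the band-power features often used with EEG.
    n = len(x)
    e = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            e += re * re + im * im
    return e / n

def features(epoch):
    # Alpha (8-13 Hz) and beta (13-30 Hz) energy for a single-channel epoch.
    return (band_energy(epoch, 8, 13), band_energy(epoch, 13, 30))

def nearest_centroid(train, x):
    # train: {label: [feature tuples]}; classify x by the nearest class mean.
    best, best_d = None, float("inf")
    for label, feats in train.items():
        mu = [sum(c) / len(c) for c in zip(*feats)]
        d = sum((a - b) ** 2 for a, b in zip(mu, x))
        if d < best_d:
            best, best_d = label, d
    return best
```

In a real system each epoch would span several channels and an SVM (e.g. with an RBF kernel) would replace the centroid rule; the overall flow of epoch, feature vector, then classifier is the same.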
63

The Affective PDF Reader

Radits, Markus January 2010 (has links)
The Affective PDF Reader is a PDF reader combined with affect recognition systems. The aim of the project is to research a way to provide the reader of a PDF with real-time visual feedback while reading, to influence the reading experience in a positive way. The visual feedback is given according to the analyzed emotional states of the person reading the text; this is done by capturing and interpreting affective information with a facial expression recognition system. Further enhancements would include analysis of voice as well as gaze tracking software, to be able to use the point of gaze when rendering the visualizations. The idea of the Affective PDF Reader arose mainly from the observation that the way we read text on computers, mostly with frozen and dozed-off faces, is an unsatisfying, lonesome process with poor communication. This work is also inspired by the significant progress in recognizing emotional states from video and audio signals and the new possibilities that arise from it. The prototype system provided visualizations of footprints in different shapes and colours, controlled by captured facial expressions, to enrich the textual content with affective information. The experience showed that visual feedback controlled by facial expressions can bring another dimension to the reading experience if it is done in a frugal and non-intrusive way, and that the involvement of the users can be enhanced.
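The expression-to-footprint mapping could look something like the sketch below. The colour values, scale factors, and expression names are entirely hypothetical; the abstract does not describe the prototype's actual parameters.

```python
# Hypothetical mapping from a recognized facial expression to the colour and
# size of the "footprint" rendered alongside the text.
FOOTPRINT_STYLE = {
    "joy":      {"colour": "#f5c542", "scale": 1.4},
    "surprise": {"colour": "#42a5f5", "scale": 1.2},
    "neutral":  {"colour": "#bdbdbd", "scale": 0.8},
    "sadness":  {"colour": "#5c6bc0", "scale": 1.0},
}

def footprint_for(expression, intensity):
    # Blend the base style with the expression intensity (0..1); unknown
    # expressions fall back to a neutral footprint, keeping feedback non-intrusive.
    style = FOOTPRINT_STYLE.get(expression, FOOTPRINT_STYLE["neutral"])
    return {"colour": style["colour"],
            "scale": round(style["scale"] * (0.5 + 0.5 * intensity), 2)}
```

The fallback to a muted neutral style reflects the thesis's finding that the feedback works best when frugal and non-intrusive.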
64

Färgens påverkan på mänsklig emotion vid gränssnittsdesign / The influence of colour on human emotion in interface design

Haglund, Sonja January 2004 (has links)
Today's technological society places high demands on people, among other things on processing information. When designing systems, human-computer interaction (HCI) is now usually taken into account in order to achieve the highest possible usability. Affective computing, a further development of HCI, argues for systems that can both perceive emotions and convey them to the user. The focus of this report is how a system can convey emotions through its colour scheme and thereby influence the user's emotional state. A quantitative study was conducted to find out how colours can be used in a system to convey emotional expressions to users. Furthermore, a comparison was made between the results of the study and earlier theories about how colour affects human emotions, in order to determine whether those theories are suitable to apply in interface design. The results indicated agreement with the earlier theories, but with only one statistically significant difference, between blue and yellow, regarding pleasantness.
65

Words have power: Speech recognition in interactive jewelry : a case study with newcomer LGBT+ immigrants

Poikolainen Rosén, Anton January 2017 (has links)
This paper addresses a design exploration focusing on interactive jewelry conducted with newcomer LGBT+ immigrants in Sweden, leading to a necklace named PoWo that is "powered" by the spoken word through a mobile application that reacts to customizable keywords, triggering LED lights in the necklace. Interactive jewelry is viewed in this paper as a medium with a simultaneous relation to wearer and spectator, thus affording use on the themes of symbolism, emotion, body and communication. These themes are demonstrated through specific use scenarios of the necklace relating to the participants of the design exploration, e.g. addressing consent, societal issues, meeting situations and expressions of love and sexuality. The paper also investigates the potential of speech-based interactive jewelry, e.g. finding that speech recognition in LED jewelry can act as an amplifier of spoken words, actions and meaning, and as a visible extension of the smartphone and human body. In addition, the use qualities of visibility, ambiguity, continuity and fluency are discussed in relation to speech-based LED jewelry.
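The keyword-trigger logic the abstract describes, where the mobile app scans the speech-recognition transcript for customizable keywords and maps each hit to an LED pattern, can be sketched as follows. The keyword list and pattern names are illustrative assumptions, not PoWo's actual configuration.

```python
# Hypothetical keyword-to-LED-pattern mapping for a necklace like PoWo.
KEYWORDS = {
    "yes":  "pulse_green",   # e.g. signalling consent
    "no":   "flash_red",
    "love": "rainbow_sweep",
}

def led_events(transcript, keywords=KEYWORDS):
    # Return the LED patterns triggered by a recognized utterance, in order.
    events = []
    for word in transcript.lower().split():
        word = word.strip(".,!?")
        if word in keywords:
            events.append(keywords[word])
    return events
```

Because the keywords are user-customizable, the same mechanism supports the paper's varied scenarios, from consent signalling to expressions of love.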
66

Affect-based Modeling and its Application in Multimedia Analysis Problems

Bhattacharya, Abhishek 13 July 2012 (has links)
The multimedia domain is undergoing rapid development, with transitions in audio, image, and video systems such as VoIP, telepresence, live/on-demand Internet streaming, SecondLife, and many more. In this situation, the analysis of multimedia systems from various contexts, including retrieval, quality evaluation, enhancement, summarization, and re-targeting applications, is becoming critical. Current methods for solving these analysis problems do not consider the existence of humans and their affective characteristics in the design methodology. This contradicts the fact that most digital media is consumed only by human end-users. We believe incorporating human feedback during the design and adaptation stages is key to the process of building multimedia systems. In this regard, we observe that affect is an important indicator of human perception and experience. This can be exploited in various ways for designing effective systems that adapt more closely to the human response. We advocate an affect-based modeling approach for solving multimedia analysis problems by exploring new directions. In this dissertation, we select two representative multimedia analysis problems, namely Quality-of-Experience (QoE) evaluation and image enhancement, and derive solutions based on affect-based modeling techniques. We formulate specific hypotheses for them by correlating system parameters to users' affective responses, and investigate their roles under varying conditions for each scenario. We conducted extensive user studies based on human-to-human interaction through an audio conferencing system. We also conducted user studies based on affective enhancement of images and evaluated the effectiveness of our proposed approaches. Moving forward, multimedia systems will become more media-rich, interactive, and sophisticated, and therefore effective solutions for quality, retrieval, and enhancement will be more challenging.
Our work thus represents an important step towards the application of affect-based modeling techniques for the future generation of multimedia systems.
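The core move of correlating system parameters to affective response can be illustrated with a minimal sketch: a Pearson correlation between a system parameter (here a hypothetical audio latency setting) and mean user pleasantness ratings. The data values are made up for illustration and do not come from the dissertation's studies.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical QoE study data: latency settings vs. mean pleasantness (5-point scale).
latency_ms = [50, 100, 200, 400, 800]
pleasantness = [4.6, 4.2, 3.5, 2.8, 1.9]

r = pearson(latency_ms, pleasantness)  # strongly negative: higher latency, lower affect
```

A strong correlation like this is what licenses the next step in the affect-based approach: adapting the system parameter to keep users' affective response in an acceptable range.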
67

Vers des agents conversationnels capables de réguler leurs émotions : un modèle informatique des tendances à l’action / Towards conversational agents with emotion regulation abilities : a computational model of action tendencies

Yacoubi, Alya 14 November 2019 (has links)
Les agents virtuels conversationnels ayant un comportement social reposent souvent sur au moins deux disciplines différentes : l’informatique et la psychologie. Dans la plupart des cas, les théories psychologiques sont converties en un modèle informatique afin de permettre aux agents d’adopter des comportements crédibles. Nos travaux de thèse se positionnent au croisement de ces deux champs disciplinaires. Notre objectif est de renforcer la crédibilité des agents conversationnels. Nous nous intéressons aux agents conversationnels orientés tâche, qui sont utilisés dans un contexte professionnel pour produire des réponses à partir d’une base de connaissances métier. Nous proposons un modèle affectif pour ces agents qui s’inspire des mécanismes affectifs chez l’humain. L’approche que nous avons choisie de mettre en œuvre dans notre modèle s’appuie sur la théorie des Tendances à l’Action en psychologie. Nous avons proposé un modèle des émotions en utilisant un formalisme inspiré de la logique BDI pour représenter les croyances et les buts de l’agent. Ce modèle a été implémenté dans une architecture d’agent conversationnel développée au sein de l’entreprise DAVI. Afin de confirmer la pertinence de notre approche, nous avons réalisé plusieurs études expérimentales. La première porte sur l’évaluation d’expressions verbales de la tendance à l’action. La deuxième porte sur l’impact des différentes stratégies de régulation possibles sur la perception de l’agent par l’utilisateur. Enfin, la troisième étude porte sur l’évaluation des agents affectifs en interaction avec des participants. Nous montrons que le processus de régulation que nous avons implémenté permet d’augmenter la crédibilité et le professionnalisme perçu des agents, et plus généralement qu’ils améliorent l’interaction. Nos résultats mettent ainsi en avant la nécessité de prendre en considération les deux mécanismes émotionnels complémentaires : la génération et la régulation des réponses émotionnelles. 
Ils ouvrent des perspectives sur les différentes manières de gérer les émotions et leur impact sur la perception de l’agent. / Conversational virtual agents with social behavior are often based on at least two different disciplines: computer science and psychology. In most cases, psychological findings are converted into computational mechanisms in order to make agents look and behave in a believable manner. In this work, we aim at increasing conversational agents' believability and making human-agent interaction more natural by modelling emotions. More precisely, we are interested in task-oriented conversational agents, which are used as a customer-relationship channel to respond to user requests. We propose an affective model of the generation and control of emotional responses during a task-oriented interaction. Our proposed model is based, on the one hand, on the theory of Action Tendencies (AT) in psychology to generate emotional responses during the interaction. On the other hand, the emotional control mechanism is inspired by social emotion regulation in empirical psychology. Both mechanisms use the agent's goals, beliefs and ideals. This model has been implemented in an agent architecture endowed with a natural language processing engine developed by the company DAVI. In order to confirm the relevance of our approach, we carried out several experimental studies. The first was about validating verbal expressions of action tendency in a human-agent dialogue. In the second, we studied the impact of different emotion regulation strategies on the user's perception of the agent. This study allowed us to design a social regulation algorithm based on theoretical and empirical findings. Finally, the third study focuses on the evaluation of emotional agents in real-time interactions. Our results show that the regulation process contributes to increasing the credibility and perceived competence of agents as well as to improving the interaction. 
Our results highlight the need to take into consideration the two complementary emotional mechanisms: the generation and regulation of emotional responses. They open perspectives on different ways of managing emotions and their impact on the perception of the agent.
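The two complementary mechanisms the thesis argues for, generation of an emotional response from goals and ideals, then social regulation of that response, can be sketched as a toy appraisal step. The rules, labels, and regulation policy below are illustrative assumptions in the spirit of the Action Tendencies approach, not the thesis's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    # A BDI-flavoured state: what the agent wants and what it holds as ideals.
    goals: set = field(default_factory=set)
    ideals: set = field(default_factory=set)

def appraise(state, event):
    # Generation: map an event to a raw action tendency.
    if event["blocks"] & state.goals:
        return "oppose"        # goal obstruction: an anger-like tendency
    if event["violates"] & state.ideals:
        return "reject"        # ideal violation: a reproach-like tendency
    return "approach"

def regulate(tendency, professional_context=True):
    # Regulation: soften confrontational tendencies in a professional setting,
    # preserving the agent's perceived competence.
    if professional_context and tendency == "oppose":
        return "assert_politely"
    return tendency
```

Separating `appraise` from `regulate` mirrors the thesis's finding that both mechanisms are needed: generation alone yields credible but potentially unprofessional reactions; regulation tempers them for the interaction context.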
68

A Novel Deep Learning Approach for Emotion Classification

Ayyalasomayajula, Satya Chandrashekhar 14 February 2022 (has links)
Neural networks are at the core of computer vision solutions for various applications. Even with the advent of deep neural networks, Facial Expression Recognition (FER) remains a challenging task in computer vision. Micro-expressions (MEs) are used prominently in security, psychotherapy and neuroscience, and play a wide role in several related disciplines. However, because they involve only subtle movements of facial muscles, micro-expressions are difficult to detect and identify, which has kept emotion detection and classification active research topics. Networks recently adopted to train FER systems have yet to address overfitting, caused by insufficient training data, and expression-unrelated variations such as gender bias and face occlusions. The association of FER with Speech Emotion Recognition (SER) triggered the development of multimodal neural networks for emotion classification, in which sensors played a significant role: they substantially increased accuracy by providing high-quality inputs, further raising the efficiency of the system. This thesis explores the principles behind the application of deep neural networks, with a strong focus on Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), with regard to their application to emotion recognition. A motion magnification algorithm for ME detection and classification was implemented for applications requiring near real-time computation. A new and improved architecture using a multimodal network was implemented. In addition to the motion magnification technique for emotion classification and extraction, the multimodal algorithm takes audio-visual cues as inputs and reads the MEs on the real face of the participant. 
This feature of the above architecture can be deployed while administering interviews, supervising ICU patients in hospitals, in the auto industry, and in many other settings. The real-time emotion classifier, based on a state-of-the-art Image-Avatar Animation model, was tested on simulated subjects. The salient features of the real face are mapped onto avatars built with a 3D scene generation platform. For the goal of emotion classification, the Image Animation model outperforms all baselines and prior works. Extensive tests and the results obtained demonstrate the validity of the approach.
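One common way to combine audio-visual cues, as the multimodal architecture above does, is late fusion: each modality's model produces per-emotion scores, which are merged with a weighted average before picking the top class. The sketch below illustrates only that fusion step; the weight and emotion labels are assumptions, and the thesis's actual network fuses features rather than necessarily using this scheme.

```python
def fuse(visual_scores, audio_scores, w_visual=0.6):
    # Late fusion: weighted average of per-emotion scores from the visual
    # (micro-expression) model and the audio (speech) model, then argmax.
    fused = {}
    for emotion in visual_scores:
        fused[emotion] = (w_visual * visual_scores[emotion]
                          + (1 - w_visual) * audio_scores.get(emotion, 0.0))
    return max(fused, key=fused.get)
```

The weight controls which modality dominates; tuning it on a validation set is the usual practice when one modality is more reliable than the other.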
69

Start Your EM(otion En)gine: Towards Computational Models of Emotion for Improving the Believability of Video Game Non-Player Characters / Start Your EMgine

Smith, Geneva January 2023 (has links)
Believable Non-Player Characters (NPCs) help motivate player engagement with narrative-driven games. An important aspect of believable characters is their contextually-relevant reactions to changing situations, which emotion often drives in humans. Therefore, giving NPCs "emotion" should enhance their believability. For adoption in industry, it is important to create processes for developing tools to build NPCs "with emotion" that fit with current development practices. Psychological validity—the grounding in affective science—is a necessary quality for plausible emotion-driven NPC behaviours. Computational Models of Emotion (CMEs) are one solution because they use at least one affective theory/model in their design. However, CME development tends to be insufficiently documented such that its processes seem unsystematic and poorly defined. This makes it difficult to reuse a CME’s components, extend or scale them, or compare it to other CMEs. This work draws from software engineering to propose three methods for acknowledging and limiting subjectivity in CME development to improve their reusability, maintainability, and verifiability: a systematic, document analysis-based methodology for choosing a CME’s underlying affective theories/models using its high-level design goals and design scope, which critically influence a CME’s functional requirements; an approach for transforming natural language descriptions of affective theories into a type-based formal model using an intermediate, second natural language description refining the original descriptions and showing where and what assumptions informed the formalization; and a literary character analysis-based methodology for developing acceptance test cases with known believable characters from professionally-crafted stories that do not rely on specific CME designs. Development of EMgine, a game development CME for generating NPC emotions, shows these methods in practice. 
/ Dissertation / Doctor of Philosophy (PhD) / Video games can deeply engage players using characters that appear to have emotionally-driven behaviours. One way that developers encode and carry knowledge between projects is by creating development tools, allowing them to focus on how they use that knowledge and create new knowledge. This work draws from software engineering to propose three methods for creating development tools for game characters “with emotion”: a process for analyzing academic emotion literature so that the tool’s functions are plausible with respect to real-life emotion; a process for translating academic emotion literature into mathematical notation; and a process for creating tests to evaluate these kinds of development tools using narrative characters. The development of an example tool for creating game characters "with emotion", EMgine, demonstrates these methods and serves as an example of good development practices.
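The idea of turning natural-language statements from affective science into a type-based formal model can be illustrated with a toy example: the statement "joy is a reaction to a desirable event; distress to an undesirable one" becomes a typed, testable rule. The types and the rule below are illustrative only; EMgine's actual formalization is richer and differs in its details.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Emotion(Enum):
    JOY = "joy"
    DISTRESS = "distress"

@dataclass(frozen=True)
class Event:
    desirability: float  # appraised on [-1, 1]; sign encodes (un)desirability

def reaction(event: Event) -> Optional[Emotion]:
    # "Joy is a reaction to a desirable event; distress to an undesirable one."
    if event.desirability > 0:
        return Emotion.JOY
    if event.desirability < 0:
        return Emotion.DISTRESS
    return None  # neutral appraisal: no emotion elicited
```

Making the rule a typed function is what enables the acceptance-testing methodology the thesis proposes: expectations drawn from professionally-crafted characters can be written as test cases against `reaction` without committing to a particular CME design.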
70

Adaptive Intelligent User Interfaces With Emotion Recognition

Nasoz, Fatma 01 January 2004 (has links)
The focus of this dissertation is on creating Adaptive Intelligent User Interfaces to facilitate enhanced natural communication during human-computer interaction by recognizing users' affective states (i.e., emotions experienced by the users) and responding to those emotions by adapting to the current situation via an affective user model created for each user. Controlled experiments were designed and conducted in a laboratory environment and in a virtual reality environment to collect physiological signals from participants experiencing specific emotions. Algorithms (k-Nearest Neighbor [KNN], Discriminant Function Analysis [DFA], Marquardt-Backpropagation [MBP], and Resilient Backpropagation [RBP]) were implemented to analyze the collected signals and to find unique physiological patterns of emotions. The Emotion Elicitation with Movie Clips experiment was conducted to elicit sadness, anger, surprise, fear, frustration, and amusement from participants. Overall, three of the algorithms (KNN, DFA, and MBP) could recognize emotions with 72.3%, 75.0%, and 84.1% accuracy, respectively. The Driving Simulator experiment was conducted to elicit driving-related emotions and states (panic/fear, frustration/anger, and boredom/sleepiness). The KNN, MBP, and RBP algorithms were used to classify the physiological signals by corresponding emotions: overall, KNN classified these three emotions with 66.3% accuracy, MBP with 76.7%, and RBP with 91.9%. Adaptation of the interface was designed to provide multi-modal feedback to users about their current affective state and to respond to negative emotional states in order to decrease the possible negative impacts of those emotions. 
A Bayesian Belief Network formalization was employed to develop the user model, enabling the intelligent system to adapt appropriately to the current context and situation by considering user-dependent factors such as personality traits and preferences.
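Of the algorithms compared above, k-Nearest Neighbor is the simplest to sketch: each training sample is a physiological feature vector labelled with the elicited emotion, and a new sample takes the majority label of its k nearest neighbours. The feature values and labels below are made up for illustration, not drawn from the dissertation's data.

```python
from collections import Counter

def knn_classify(train, sample, k=3):
    # train: list of (feature_vector, label) pairs; sample: a feature vector.
    # Squared Euclidean distance suffices for ranking neighbours.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], sample))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

In practice the features would be normalized physiological measurements (e.g. heart rate, skin conductance), and k would be tuned by cross-validation; the voting mechanism is unchanged.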
