  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

The Use of the CAfFEINE Framework in a Step-by-Step Assembly Guide

Ketchum, Devin Kyle 29 January 2020 (has links)
Today's technology is becoming more interactive with voice assistants like Siri. However, interactive systems such as Siri make mistakes. The purpose of this thesis is to explore using affect as an implicit feedback channel so that such mistakes can be corrected easily in real time. The CAfFEINE Framework, created by Dr. Saha, is a context-aware affective feedback loop in an intelligent environment. The research described in this thesis focuses on analyzing a user's physiological response to the service provided by an intelligent environment. To test this feedback loop, an experiment was constructed using an on-screen, step-by-step assembly guide for a Tangram puzzle. To categorize the user's response to the experiment, baseline readings were gathered for a user's stressed and non-stressed states. The Paced Stroop Test and two other baseline tests were conducted to capture these two states. The data gathered in the baseline tests was then used to train a support vector machine to predict the user's response to the Tangram experiment. During the data analysis phase of the research, the results of the predictions on the Tangram experiment were not as expected. Multiple trials of training data for the support vector machine were explored, but the data gathered throughout this research was not enough to draw proper conclusions. More focus was then given to analyzing the pre-processed data of the baseline tests in an attempt to find a factor, or group of factors, to determine whether the users' physiological responses would be useful for training the support vector machine. Trends were found when comparing the areas under the curves of the Paced Stroop Test phasic driver plots. These comparison factors might be a useful approach for differentiating users based upon their physiological responses during the Paced Stroop Test. / Master of Science / The purpose of this thesis was to use the CAfFEINE Framework, proposed by Dr. 
Saha, in a real-world environment. Dr. Saha's Framework utilizes a user's physiological responses, such as heart rate, in a smart environment to give information to the smart devices. For example, suppose Siri gave a user directions to someone's home and told the user to turn right when the user knew they needed to turn left. That user would have a physical reaction: their heart rate would increase. If the user were wearing a smart watch, Siri would be able to see the heart rate increase and realize, from past experiences with that user, that the information she gave was incorrect. She could then correct herself. My research focused on measuring user reaction to a smart service provided in a real-world situation, using a Tangram puzzle as a mock version of an industrial assembly situation. The users were asked to follow on-screen instructions to assemble the Tangram puzzle. Their reactions were recorded through a smart watch and analyzed post-experiment. Based on the results of a Paced Stroop Test taken before the experiment, a computer algorithm would predict their stress levels for each service provided by the step-by-step instruction guide. However, the results did not turn out as expected. Therefore, the rest of the research focused on why the results did not support Dr. Saha's previous Framework results.
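The area-under-the-curve comparison described in this abstract can be sketched as a simple trapezoidal integration of a phasic-driver signal. This is a hypothetical illustration; the function names and the idea of comparing trials by ratio are mine, not the thesis's:

```python
def phasic_auc(samples, dt):
    """Trapezoidal area under a phasic-driver curve sampled every dt seconds."""
    return sum((a + b) / 2.0 * dt for a, b in zip(samples, samples[1:]))

def auc_ratio(trial_a, trial_b, dt):
    """Compare two trials by the ratio of their phasic-driver areas; a larger
    area would be expected for the more aroused (stressed) trial."""
    return phasic_auc(trial_a, dt) / phasic_auc(trial_b, dt)
```

A comparison factor of this kind could then be used to decide, per user, whether their physiological responses separate cleanly enough to be worth feeding into a classifier.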
12

Characterising action potential in virtual game worlds applied with the mind module

Eladhari, Mirjam Palosaari January 2010 (has links)
Because games set in persistent virtual game worlds (VGWs) have massive numbers of players, these games need methods of characterisation for playable characters (PCs) that differ from the methods used in traditional narrative media. VGWs have a number of particularly interesting qualities. Firstly, VGWs are places where players interact with and create elements carrying narrative potential. Secondly, players add goals, motives and driving forces to the narrative potential of a VGW, and these sometimes originate in the ordinary world. Thirdly, the protagonists of the world are real people; when acting in the world, their characterisation is not carried out by an author but expressed by players characterising their PCs. How they can express themselves in ways that characterise them depends on what they can do and how they can do it, and this characterising action potential (CAP) is defined by the game design of particular VGWs. In this thesis, two main questions are explored. Firstly, how can CAP be designed to support players in expressing consistent characters in VGWs? Secondly, how can VGWs support role-play in their rule-systems? By using iterative design, I explore the design space of CAP by building a semiautonomous agent structure, the Mind Module (MM), and applying it in five experimental prototypes where the design of CAP and other game features is derived from the MM. The term semiautonomy is used because the agent structure is designed to be used by a PC, and is thus partly controlled by the system and partly by the player. The MM models a PC’s personality as a collection of traits, maintains dynamic emotional state as a function of interactions with objects in the environment, and summarises a PC’s current emotional state in terms of ‘mood’. The MM consists of a spreading-activation network of affect nodes that are interconnected by weighted relationships. 
There are four types of affect node: personality trait nodes, emotion nodes, mood nodes, and sentiment nodes. The values of the nodes defining the personality traits of characters govern an individual PC’s state of mind through these weighted relationships, resulting in values characteristic of a PC’s personality. The sentiment nodes constitute emotionally valenced connections between entities. For example, a PC can ‘feel’ anger toward another PC. This thesis also describes a guided paper-prototype play-test of the VGW prototype World of Minds, in which the game mechanics build upon the MM’s model of personality and emotion. In a case study of AI-based game design, lessons learned from the test are presented. The participants in the test were able to form and communicate mental models of the MM and game mechanics, validating the design and giving valuable feedback for further development. Despite the constrained scenarios presented to test players, they discovered interesting alternative strategies, indicating that the ‘mental physics’ of the MM may open up new possibilities for game design. The results of the play-test influenced the further development of the MM as it was used in the digital VGW prototype the Pataphysic Institute. In the Pataphysic Institute, the CAP of PCs is largely governed by their mood. Depending on which mood PCs are in, they can cast different ‘spells’, which affect values such as mental energy, resistance and emotion in their targets. The mood also governs which ‘affective actions’ they can perform toward other PCs and which affective actions they are receptive to. By performing affective actions on each other, PCs can affect each other’s emotions, which, if strong, may result in sentiments toward each other. PCs’ personalities govern the individual fluctuations of mood and emotions, and define which types of spell PCs can cast. 
Formalised social relationships such as friendships affect CAP, giving players more energy, resistance, and other benefits. PCs’ states of mind are reflected in the VGW in the form of physical manifestations that emerge if an emotion is very strong. These manifestations are entities which cast different spells on PCs in close proximity, depending on the emotions that the manifestations represent. PCs can also partake in authoring manifestations that become part of the world and the game-play in it. In the Pataphysic Institute, potential story structures are governed by the relations the sentiment nodes constitute between entities.
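The spreading-activation idea behind the Mind Module can be sketched with a dictionary-based network; the node names and weights below are invented for illustration and are not taken from the prototypes:

```python
def spread(values, edges):
    """One spreading-activation step: each target node's new value is the
    weighted sum of its incoming source nodes; untargeted nodes keep theirs."""
    new = dict(values)
    for node, incoming in edges.items():
        new[node] = sum(w * values[src] for src, w in incoming)
    return new

# A personality trait node feeds an emotion node, which feeds the mood summary.
values = {"trait_irritable": 0.8, "anger": 0.0, "mood": 0.0}
edges = {"anger": [("trait_irritable", 0.5)],
         "mood": [("anger", 1.0)]}
step1 = spread(values, edges)   # anger rises first, driven by the trait
step2 = spread(step1, edges)    # mood follows one propagation step later
```

Sentiment nodes would fit the same scheme as weighted edges between one PC's emotion nodes and another entity.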
13

Artificial intelligence system for continuous affect estimation from naturalistic human expressions

Abd Gaus, Yona Falinie January 2018 (has links)
Automatic affect analysis and estimation from human expression has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings and illumination change. In this thesis, an artificial intelligence system is proposed to continuously (represented along a continuum, e.g., from -1 to +1) estimate affect behaviour in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar Wavelet Transform is employed to determine whether a noise cancellation mechanism in feature space should be considered in the design of the affect estimation system. Beyond hand-crafted features, deep learning features are also analysed layer-wise, at the convolutional and fully connected layers. Convolutional Neural Networks such as AlexNet, VGGFace and ResNet were selected as deep learning architectures for feature extraction on facial expression images. A multimodal fusion scheme is then applied, fusing deep learning and hand-crafted features together to improve performance. In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension per time step. 
Then, in the second stage, a subsequent model such as a Time Delay Neural Network, Long Short-Term Memory network or Kalman Filter is proposed to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in-the-wild is leveraged by investigating mathematical modelling for each emotion dimension. Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities. In summary, the research presented in this thesis reveals a fundamental approach to automatically and continuously estimating affect values from naturalistic human expression. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion using mathematical equations between modalities, is demonstrated. It offers a strong basis for the development of artificial intelligence systems for continuous affect estimation, and more broadly for building real-time emotion recognition systems for human-computer interaction.
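The two-stage idea can be sketched with a first stage producing noisy per-frame estimates and a second stage smoothing them over time; here a simple exponential smoother stands in for the Time Delay Neural Network, LSTM or Kalman Filter of the thesis, so this is only an assumption-laden simplification:

```python
def smooth_affect(per_frame, alpha=0.3):
    """Second-stage temporal model over first-stage per-frame affect
    estimates: damp frame-to-frame variability so the slowly changing
    emotional dynamics dominate the output trajectory."""
    out = [per_frame[0]]
    for v in per_frame[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out
```

In the thesis, the first stage would be a regressor such as an SVR emitting the per-frame arousal or valence values that `per_frame` stands in for here.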
14

On the Selection of Just-in-time Interventions

Jaimes, Luis Gabriel 20 March 2015 (has links)
A deeper understanding of human physiology, combined with improvements in sensing technologies, is fulfilling the vision of affective computing, where applications monitor and react to changes in affect. Further, the proliferation of commodity mobile devices is extending these applications into the natural environment, where they become a pervasive part of our daily lives. This work examines one such pervasive affective computing application with significant implications for long-term health and quality of life: adaptive just-in-time interventions (AJITIs). We discuss the fundamental components needed to design AJITIs based on one kind of affective data, namely stress. Chronic stress has significant long-term behavioral and physical health consequences, including an increased risk of cardiovascular disease, cancer, anxiety and depression. This dissertation presents the state of the art of just-in-time interventions for stress. It includes a new architecture that is used to describe the most important issues in the design, implementation, and evaluation of AJITIs. The most important mechanisms available in the literature are then described and classified. The dissertation also presents a simulation model to study and evaluate different strategies and algorithms for intervention selection. A new hybrid mechanism based on value iteration and Monte Carlo simulation is then proposed. This semi-online algorithm dynamically builds a transition probability matrix (TPM), which is used to obtain a new policy for intervention selection. We present this algorithm in two versions. The first uses a pre-determined number of stress episodes as a training set to create a TPM, and then generates the policy used to select interventions in the future. In the second, we use each new stress episode to update the TPM, and a pre-determined number of episodes to update our intervention selection policy. 
We also present a completely online learning algorithm for intervention selection based on Q-learning with eligibility traces. We show that this algorithm could be used by an affective computing system to select and deliver interventions in mobile environments. Finally, we conduct post-hoc experiments and simulations to demonstrate the feasibility of both real-time stress forecasting and stress intervention adaptation and optimization.
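The online algorithm based on Q-learning with eligibility traces can be sketched as a tabular Q(λ) backup; the states, actions, reward and parameter values below are illustrative assumptions, not the dissertation's:

```python
def q_lambda_update(Q, E, s, a, r, s_next, alpha=0.1, gamma=0.9, lam=0.8):
    """One Q(lambda) backup: bump the eligibility of (s, a), then move every
    state-action value toward the TD target in proportion to its trace."""
    delta = r + gamma * max(Q[s_next].values()) - Q[s][a]
    E[s][a] += 1.0
    for st in Q:
        for ac in Q[st]:
            Q[st][ac] += alpha * delta * E[st][ac]
            E[st][ac] *= gamma * lam   # decay all traces after the backup

# Hypothetical intervention-selection table: states are stress levels,
# actions are candidate just-in-time interventions.
Q = {"stressed": {"wait": 0.0, "intervene": 0.0},
     "calm":     {"wait": 0.0, "intervene": 0.0}}
E = {s: {a: 0.0 for a in Q[s]} for s in Q}
# Delivering an intervention while stressed led back to a calm state (r = 1).
q_lambda_update(Q, E, "stressed", "intervene", 1.0, "calm")
```

The eligibility traces let credit from one resolved stress episode flow back to the earlier intervention choices that preceded it.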
15

Adaptive Affective Computing: Countering User Frustration

Aghaei, Behzad 28 February 2013 (has links)
With the rise of mobile computing and an ever-growing variety of ubiquitous sensors, computers are becoming increasingly context-aware. A further step in this process, one that has already seen much progress, is user-awareness: the ability of a computing device to infer its user's emotions. This research project studies the effectiveness of enabling a computer to adapt its visual interface to counter user frustration. A two-group experiment was designed to engage participants in a goal-oriented task disguised as a simple usability study with a performance incentive. Five frustrating stimuli were triggered throughout a single 15-minute task in the form of complete system unresponsiveness or delay. An algorithm was implemented to detect sudden rises in user arousal measured via a skin conductance sensor. Following a successful detection, or otherwise after a maximum 10-second delay, the application resumed responsiveness. In the control condition, participants were shown a “please wait” pop-up near the end of the delay, whereas those in the adaptation condition were additionally shown a visual transition to a user interface with calming colours and larger touch targets. This adaptive condition was hypothesized to reduce the recovery time associated with the frustration response. The experiment successfully induced frustration (via measurable skin conductance responses) in the majority of trials. The mean recovery half-time of participants in the first-trial adaptive condition was significantly longer than that of the control. This was attributed to the possibly large chromatic difference between the adaptive and control colour schemes, habituation and prediction, causal association of the adaptation with the frustrating stimulus, and insufficient subtlety in the transition and visual look of the adaptive interface. 
The study produced findings and guidelines that will be crucial in the future design of adaptive affective user interfaces.
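The arousal-detection step described above can be sketched as a moving-baseline threshold check on the skin conductance signal; the window length and threshold below are invented for illustration, not taken from the study:

```python
from collections import deque

def detect_rise(samples, window=5, threshold=0.05):
    """Return the index of the first skin-conductance sample that exceeds
    the mean of the preceding `window` samples by `threshold`, else None."""
    recent = deque(maxlen=window)
    for i, s in enumerate(samples):
        if len(recent) == window and s - sum(recent) / window > threshold:
            return i
        recent.append(s)
    return None
```

On detection, the application would resume responsiveness (or, in the adaptive condition, also trigger the calming interface transition); with no detection, the 10-second timeout would fire instead.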
16

The impact of Feedback Tone, Grammatical Person and Presentation Mode on Performance and Preference in a Computer-based Learning Task.

Thomas, Sebastian 16 September 2013 (has links)
Politeness is a part of student-tutor interactions, and research in affective computing has shown that this social convention may also apply when a computer plays the role of tutor. This study sought to build on previous work that examined the effect of the politeness of computer feedback through the application of social and cognitive theories. Employing a mixed-factor design, a sample of 150 college students completed a multiple cue probability learning (MCPL) task on a computer that provided feedback phrased in one of three tonal styles (joint-goal, student-goal and bald-on-record). Feedback tone was a within-subjects factor. Subjects received feedback either as text or as audio. Audio feedback was a between-subjects factor and was delivered in one of four modes: a male or female human voice, or a male or female synthesized voice. The study found gender differences in tone preference as well as a possible impact of the Tone x Mode interaction on learning. Specifically, men were more likely than women to prefer the student-goal style feedback prompts. It is hoped that this research can provide additional insight to designers of learning applications when designing the feedback mechanisms these systems should employ.
17

The Influence of Colour on Human Emotion in Interface Design

Haglund, Sonja January 2004 (has links)
<p>Today's technological society places high demands on people, among other things regarding the processing of information. When designing systems, human-computer interaction (HCI) is now usually taken into account in order to achieve the highest possible usability. Affective computing, a developed approach to HCI, advocates building systems that can both perceive emotions and convey them to the user. The focus of this report is how a system can convey emotions through its colour scheme, and thereby influence the user's emotional state. A quantitative study was conducted to find out how colours can be used in a system to convey emotional expressions to users. Furthermore, the study's results were compared with earlier theories on how colour affects human emotions, in order to determine whether those theories are suitable to apply in interface design. The results indicated agreement with the earlier theories, but with only one statistically significant difference, between blue and yellow regarding pleasantness.</p>
18

Recognition of Human Emotion in Speech Using Modulation Spectral Features and Support Vector Machines

Wu, Siqing 09 September 2009 (has links)
Automatic recognition of human emotion in speech aims at recognizing the underlying emotional state of a speaker from the speech signal. The area has received rapidly increasing research interest over the past few years. However, designing powerful spectral features for high-performance speech emotion recognition (SER) remains an open challenge. Most spectral features employed in current SER techniques convey short-term spectral properties only while omitting useful long-term temporal modulation information. In this thesis, modulation spectral features (MSFs) are proposed for SER, with support vector machines used for machine learning. By employing an auditory filterbank and a modulation filterbank for speech analysis, an auditory-inspired long-term spectro-temporal (ST) representation is obtained, which captures both acoustic frequency and temporal modulation frequency components. The MSFs are then extracted from the ST representation, thereby conveying information important for human speech perception but missing from conventional short-term spectral features (STSFs). Experiments show that the proposed features outperform features based on mel-frequency cepstral coefficients and perceptual linear predictive coefficients, two commonly used STSFs. The MSFs further render a substantial improvement in recognition performance when used to augment the extensively used prosodic features, and recognition accuracy above 90% is accomplished for classifying seven emotion categories. Moreover, the proposed features in combination with prosodic features attain estimation performance comparable to human evaluation for recognizing continuous emotions. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009-09-08 13:01:54.941
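The spectro-temporal representation behind the MSFs can be sketched in miniature: rectify and smooth one auditory-band signal to obtain its temporal envelope, then measure the envelope's energy at a single modulation frequency. The thesis's auditory and modulation filterbanks are reduced here to a moving average and one DFT bin, so this is only an assumption-laden toy:

```python
import cmath
import math

def modulation_energy(band_signal, mod_freq, fs, win=8):
    """Energy of one auditory-band signal's temporal envelope at a single
    modulation frequency: rectify, smooth with a short moving average,
    then project the envelope onto one DFT bin."""
    rect = [abs(x) for x in band_signal]
    env = [sum(rect[max(0, n - win + 1):n + 1]) / (n + 1 - max(0, n - win + 1))
           for n in range(len(rect))]
    phasor = sum(e * cmath.exp(-2j * math.pi * mod_freq * n / fs)
                 for n, e in enumerate(env))
    return abs(phasor) / len(env)
```

A full MSF vector would repeat this over a bank of acoustic bands and a bank of modulation frequencies; an amplitude-modulated carrier should show far more envelope energy at its modulation rate than elsewhere.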
20

Affective Intelligence in Built Environments

Yates, Heath January 1900 (has links)
Doctor of Philosophy / Department of Computer Science / William H. Hsu / The contribution of this dissertation is the application of affective intelligence in human-developed spaces where people live, work, and recreate daily, also known as built environments. Built environments are known to influence individual affective responses. The implications of built environments for human well-being and mental health necessitate new metrics to measure and detect how humans respond subjectively in built environments. Detecting arousal in built environments from biometric data and environmental characteristics via a machine-learning-centric approach provides a novel capability to measure human responses to built environments. Work was also conducted on experimental design methodologies for multiple-sensor fusion and the detection of affect in built environments. These contributions include exploring new methodologies for applying supervised machine learning algorithms, such as logistic regression, random forests, and artificial neural networks, to the detection of arousal in built environments. Results have shown that a machine learning approach can be used not only to detect arousal in built environments but also to construct novel explanatory models of the data.
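The supervised detection step can be sketched with a stdlib-only logistic regression; the feature rows stand in for biometric and environmental measurements, and all names, values and hyperparameters below are synthetic placeholders rather than the dissertation's setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=200):
    """Gradient-descent logistic regression: rows of X are biometric plus
    environmental features; y holds binary arousal labels (0 or 1)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a new observation corresponds to an aroused state."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

Random forests or neural networks would slot into the same pipeline as drop-in replacements for the classifier, which is the comparison the dissertation explores.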
