71

Zenth: An Affective Technology for Stress Relief

Chayleva, Aleksandra January 2022 (has links)
This master's thesis presents a research-through-design process that explores how affective, context-aware systems can support mental health and reduce stress in young adults during exam periods. This is achieved by designing an interactive system for stress recognition and relief. Biosensors embedded in existing wearable smart devices are used to infer stress-related mental states from a multimodal set of sensory data. The information is used to increase emotional awareness, provide recommendations for stress management, and enhance the users' home environment. Two main challenges are addressed in this work: detecting stress with readily available, unobtrusive sensors, and choosing output modalities that support the human-computer interaction. Zenth was developed through an iterative process, based on relevant literature and prior work in affective computing, affective technology, and stress detection and recognition.
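As an illustration of the kind of multimodal inference the abstract describes, here is a minimal sketch that combines heart-rate and electrodermal-activity readings from a wearable into a crude stress index. The baselines, weights, and signal choices are hypothetical assumptions, not values taken from the thesis.

```python
import numpy as np

def stress_score(heart_rate_bpm, eda_microsiemens, baseline_hr=65.0, baseline_eda=2.0):
    """Toy multimodal stress index from wearable signals (hypothetical baselines).

    Combines normalized deviations of mean heart rate and electrodermal activity
    from personal baselines; both tend to rise under sympathetic arousal.
    """
    hr_dev = (np.mean(heart_rate_bpm) - baseline_hr) / baseline_hr
    eda_dev = (np.mean(eda_microsiemens) - baseline_eda) / baseline_eda
    # Equal weighting is an assumption; a trained model would learn these weights.
    return 0.5 * max(hr_dev, 0.0) + 0.5 * max(eda_dev, 0.0)

# Example: a 60-second window sampled at 1 Hz with elevated readings
hr = np.random.normal(80, 3, 60)       # heart rate in beats per minute
eda = np.random.normal(4.0, 0.3, 60)   # skin conductance in microsiemens
print(f"stress index: {stress_score(hr, eda):.2f}")
```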
72

Decoding facial expressions that produce emotion valence ratings with human-like accuracy

Haines, Nathaniel January 2017 (has links)
No description available.
73

Frontal Alpha Asymmetry Interaction with an Experimental Story EEG Brain-Computer Interface

Claudia M Krogmeier (6632114) 03 November 2022 (has links)
Although interest in brain-computer interfaces (BCIs) from researchers and consumers continues to increase, many BCIs lack the complexity and imaginative properties thought to guide users towards successful brain activity modulation. In this research, an experimental story brain-computer interface (ES-BCI) was developed, with which users could interact using cognitive strategies; specifically, thinking about the story and engaging with its main character through their thought processes. In this system, the user's frontal alpha asymmetry (FAA), measured with electroencephalography (EEG), was linearly mapped to the color saturation of the main character in the story: the character's color saturation increased as the FAA recorded from the participant's brain activity rose above the threshold required to receive visual feedback. A user-friendly experimental design was implemented using a comfortable EEG device and a short neurofeedback (NF) training protocol. Eight distinct story scenes, each with a View and an Engage NF component, were created; these are referred to as blocks. Seven out of 19 participants successfully increased FAA during the course of the study, for a total of ten successful blocks out of 152. Results concerning the contributions of left (Lact) and right (Ract) prefrontal cortical activity to FAA in both successful and unsuccessful blocks were examined to understand the FAA measurements in greater detail. Additionally, electrodermal activity (EDA) data and self-reported questionnaire data were investigated to understand the user experience with this ES-BCI. The results suggest the potential of ES-BCI environments for engaging users and allowing FAA modulation. New research directions for artistic BCIs investigating affect are discussed.
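For context, frontal alpha asymmetry is conventionally computed as the difference of log alpha-band power between right and left prefrontal electrodes (e.g., F4 and F3). The sketch below shows one way such a measure could be linearly mapped to a saturation value in [0, 1]; the threshold, gain, and synthetic signals are illustrative assumptions, not the parameters used in this study.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left_ch, right_ch, fs):
    """FAA as ln(right alpha power) - ln(left alpha power), e.g. F4 vs F3.

    Because alpha power is inversely related to cortical activation, positive
    values indicate relatively greater left-hemisphere activation.
    """
    return np.log(alpha_power(right_ch, fs)) - np.log(alpha_power(left_ch, fs))

def saturation_feedback(faa, threshold=0.05, gain=5.0):
    """Linearly map FAA above a (hypothetical) threshold to [0, 1] saturation."""
    return float(np.clip((faa - threshold) * gain, 0.0, 1.0))

# Synthetic 4-second signals standing in for F3 and F4 recordings
fs = 256
t = np.arange(0, 4, 1 / fs)
f3 = 0.8 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
f4 = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
faa = frontal_alpha_asymmetry(f3, f4, fs)
print(f"FAA = {faa:.3f}, saturation = {saturation_feedback(faa):.2f}")
```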
74

Classification of affect using novel voice and visual features

Kim, Jonathan Chongkang 07 January 2016 (has links)
Emotion adds an important element to the discussion of how information is conveyed and processed by humans; indeed, it plays an important role in the contextual understanding of messages. This research is centered on investigating relevant features for affect classification, along with modeling the multimodal and multitemporal nature of emotion. The use of formant-based features for affect classification is explored. Since linear predictive coding (LPC) based formant estimators often have trouble modeling speech elements such as nasalized phonemes and give inconsistent bandwidth estimates, a robust formant-tracking algorithm was introduced to better model the formant and spectral properties of speech. The algorithm uses Gaussian mixtures to estimate spectral parameters and refines the estimates with maximum a posteriori (MAP) adaptation. When the method was used for feature extraction in emotion classification, the results indicated that an improved formant-tracking method also improves emotion classification accuracy. Spectral features contain rich information about expressivity and emotion; however, most recent work in affective computing has not progressed beyond analyzing mel-frequency cepstral coefficients (MFCCs) and their derivatives. A novel method for characterizing spectral peaks, based on multi-resolution sinusoidal transform coding (MRSTC), was introduced. Because MRSTC represents spectral features with high precision, including high-frequency content not preserved in MFCCs, it demonstrated additional resolving power. Facial expressions were analyzed using 53 motion-capture (MoCap) markers, and statistical and regression measures of these markers were used for emotion classification alongside the voice features. Since different modalities use different sampling frequencies and analysis window lengths, a novel classifier fusion algorithm was introduced to integrate classifiers trained at various analysis lengths as well as those obtained from other modalities. Classification accuracy was improved, with statistical significance, by the multimodal-multitemporal approach with the introduced classifier fusion method. A practical application of the techniques was explored using social dyadic play between a child and an adult: the Multimodal Dyadic Behavior (MMDB) dataset was used to automatically predict young children's levels of engagement from linguistic and non-linguistic vocal cues along with visual cues, such as the direction of a child's gaze or a child's gestures. Although this and similar research is limited by inconsistent subjective boundaries and differing theoretical definitions of emotion, a significant step toward successful emotion classification has been demonstrated; key to this progress are the novel voice and visual features and the newly developed multimodal-multitemporal approach.
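A hedged sketch of late (decision-level) fusion, one common way to integrate classifiers trained on different modalities or analysis window lengths: a weighted average of class posteriors. It is only illustrative of the idea, not a reproduction of the fusion algorithm introduced in the thesis; the weights and class counts are hypothetical.

```python
import numpy as np

def fuse_posteriors(posteriors, weights=None):
    """Late fusion of class posteriors from several classifiers.

    posteriors: list of (n_samples, n_classes) arrays, one per classifier
                (e.g. per modality or per analysis window length).
    weights:    optional per-classifier weights (e.g. validation accuracies);
                uniform if omitted.
    Returns the fused class decision per sample.
    """
    posteriors = [np.asarray(p, dtype=float) for p in posteriors]
    if weights is None:
        weights = np.ones(len(posteriors))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    fused = sum(w * p for w, p in zip(weights, posteriors))
    return fused.argmax(axis=1)

# Example: two voice classifiers trained on short and long windows plus a
# facial motion-capture classifier, each emitting 4-class posteriors.
voice_short = np.array([[0.6, 0.2, 0.1, 0.1]])
voice_long = np.array([[0.3, 0.4, 0.2, 0.1]])
mocap = np.array([[0.5, 0.3, 0.1, 0.1]])
print(fuse_posteriors([voice_short, voice_long, mocap], weights=[0.7, 0.6, 0.8]))
```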
75

Evaluation of the influence of emotions in decision-making of computer systems

Gracioso, Ana Carolina Nicolosi da Rocha 17 March 2016 (has links)
This work evaluates the influence of human emotions, expressed through facial movements, on the decision-making of computer systems, with the goal of improving the user experience. Three modules were developed. The first is an assistive computing system: a digital augmentative and alternative communication board. The second, called the Affective Module, is an affective computing system that uses computer vision to capture the user's facial movements and classify their emotional state. This module was implemented in two stages, both based on the Facial Action Coding System (FACS), which identifies facial expressions in terms of the human cognitive system. In the first stage, the Affective Module infers the basic emotional states: happiness, surprise, anger, fear, sadness, disgust, and the neutral state. According to most researchers in the field, the basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests with the proposed model produced results 10.9% above those of comparable methodologies. Spontaneous emotions were also analyzed, and the computational results approached human recognition rates. In the second stage, the goal was to identify facial expressions that reflect a person's dissatisfaction or difficulty while using computer systems, and the first model of the Affective Module was adjusted to this end. Finally, a Decision-Making Module was developed that receives information from the Affective Module and intervenes in the computer system. Parameters such as icon size, drag converted into click, and scanning speed are changed in real time by the Decision-Making Module in the assistive system, according to the information generated by the Affective Module. Since the Affective Module has no training stage for inferring the emotional state, a neutral-face algorithm was proposed to handle initialization with faces already expressing emotion. This work also proposes dividing rapid facial signals into baseline signals (tics and other facial movements that are not emotional signals) and emotional signals. Case studies carried out with students of the APAE in Presidente Prudente, Brazil, showed that the user experience can be improved by configuring a computer system with emotional information expressed through facial movements.
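A minimal sketch of the kind of rule such a Decision-Making Module could apply, adapting interface parameters to the inferred emotional state. The parameter names mirror those listed in the abstract (icon size, drag-as-click, scanning speed), but the state labels, thresholds, and step sizes are hypothetical, not the values used in the study.

```python
def adapt_interface(emotional_state, ui):
    """Adjust assistive-interface parameters from the inferred emotional state.

    The state labels and adjustment magnitudes are illustrative assumptions.
    """
    if emotional_state in ("dissatisfied", "struggling"):
        ui["icon_size_px"] = min(ui["icon_size_px"] + 16, 128)      # larger targets
        ui["scan_speed_ms"] = min(ui["scan_speed_ms"] + 250, 3000)  # slower scanning
        ui["drag_as_click"] = True                                  # forgive imprecise drags
    elif emotional_state == "neutral":
        pass  # keep the current configuration
    return ui

ui = {"icon_size_px": 64, "scan_speed_ms": 1000, "drag_as_click": False}
print(adapt_interface("struggling", ui))
```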
76

Classifying emotions in rehabilitation robotics based on facial skin temperature

Appel, Viviane Cristina Roma 27 August 2014 (has links)
Rehabilitation robotics plays an important role in therapeutic exercise by combining robots with serious computer games in an attractive therapeutic platform. However, measuring the patient's degree of engagement with the treatment is not a trivial task. The difficulty of applying questionnaire- and interview-based techniques, particularly with patients whose speech has been compromised by stroke, motivated the investigation of nonverbal, noninvasive techniques for classifying emotions. For this purpose, a supervised neural network was designed to interpret facial infrared thermal images of individuals undergoing robotic rehabilitation therapy integrated with games. A database containing images of 8 volunteers was created, combining evoked and spontaneous emotional reactions. In total, 2,445 facial thermal images were analyzed, an average of 100 images per person for three categories of emotion (neutral, motivated, and overstressed). Based on confusion-matrix analysis, the experimental results correlated very well with manual estimates, yielding an overall performance of 92.6%.
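A short sketch of the evaluation structure the abstract describes: a supervised neural network trained on per-image feature vectors for three emotion classes, assessed with a confusion matrix. The data below are random stand-ins and the network architecture is an assumption, not the one used in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

# Stand-in data: one feature vector per facial thermal image (e.g. mean
# temperatures of facial regions); classes 0/1/2 = neutral, motivated, overstressed.
# With random data the accuracy is near chance; the point is the evaluation setup.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print(confusion_matrix(y_te, pred))            # per-class error structure
print(f"overall accuracy: {accuracy_score(y_te, pred):.3f}")
```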
77

A system for inferring emotional facial expressions based on the basic emotions model

Oliveira, Eduardo de 22 March 2011 (has links)
This work presents a system that automatically infers the basic emotions (happiness, sadness, anger, fear, disgust, and surprise) from the facial expressions of a computer user, using images captured by a webcam. The application is based on the Facial Action Coding System (FACS), which classifies facial actions into specific codes known as Action Units (AUs). The approach collects movement data from the mouth, eyes, and eyebrows and uses neural networks to classify the AU codes present in the performed facial expressions. The emotions corresponding to the previously classified AUs are then inferred by means of a decision tree, a rule set, or a neural network. The system was evaluated in three scenarios: (1) samples from face databases, to evaluate recognition of AUs and emotions; (2) samples from face databases, to evaluate emotion recognition by a neural network (alternative approach); and (3) a sample of images captured by webcam, to evaluate emotion recognition by decision tree and by neural network. A recognition rate of 53.83% was obtained for AUs, which yielded 28.57% emotion recognition with the decision-tree inferrer (Scenario 1). Emotion inference with the neural network achieved a best recognition rate of 63.33% (Scenarios 2 and 3). The system can be used to adjust the computer's behavior to the user's affective state or to provide data to other software, such as intelligent tutoring systems.
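For illustration, AU-to-emotion inference is often described in terms of prototype AU combinations; the sketch below matches detected AUs against commonly cited prototypes. These combinations and the simple overlap score are assumptions for illustration, not the decision tree or rule set learned in this work.

```python
# Commonly cited prototype AU combinations for the basic emotions (EMFACS-style);
# illustrative only, and not necessarily the rule set used in the thesis.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},
}

def infer_emotion(detected_aus):
    """Pick the emotion whose prototype AUs best overlap the detected AUs."""
    detected = set(detected_aus)
    best, best_score = "neutral", 0.0
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        score = len(detected & prototype) / len(prototype)
        if score > best_score:
            best, best_score = emotion, score
    return best

print(infer_emotion([6, 12]))      # happiness
print(infer_emotion([1, 4, 15]))   # sadness
print(infer_emotion([]))           # neutral
```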
78

Automatic detection of visual cues associated to depression

Pampouchidou, Anastasia 08 November 2018 (has links)
Depression is the most prevalent mood disorder worldwide, with a significant impact on well-being and functionality and important personal, family, and societal effects. Early and accurate detection of signs related to depression could benefit both clinicians and affected individuals. The present work aimed to develop and clinically test a methodology able to detect visual signs of depression and support clinicians' decisions. Several analysis pipelines were implemented, focusing on motion representation algorithms, including Local Curvelet Binary Patterns-Three Orthogonal Planes (LCBP-TOP), Local Curvelet Binary Patterns-Pairwise Orthogonal Planes (LCBP-POP), Landmark Motion History Images (LMHI), and Gabor Motion History Images (GMHI). These motion representations were combined with different appearance-based feature extraction algorithms, namely Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), Local Phase Quantization (LPQ), and Visual Geometry Group (VGG) features obtained by transfer learning from deep networks. The proposed methods were tested on two benchmark datasets, AVEC and the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ), which were recorded from non-diagnosed individuals and annotated using self-report depression assessment instruments. A novel dataset was also developed that includes patients with a clinical diagnosis of depression (n=20) as well as healthy volunteers (n=45). Two types of depression assessment were tested on the available datasets: categorical (classification) and continuous (regression). For binary categorical assessment, MHI with VGG features outperformed the state of the art on the AVEC'14 benchmark with an F1-score of 87.4%. For continuous assessment of self-reported depression symptoms, LMHI combined with HOG and VGG features performed at state-of-the-art levels on both the AVEC'14 dataset and our dataset, with Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) of 10.59/7.46 and 10.15/8.48, respectively. The best performance of the proposed methodology was achieved in predicting self-reported anxiety symptoms on our dataset, with RMSE/MAE of 9.94/7.88. Results are discussed in relation to clinical and technical limitations, and potential improvements for future work are proposed.
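A compact sketch of a continuous (regression) assessment pipeline of the kind evaluated above: appearance features (HOG) extracted from a motion-history-style image, a regressor predicting a self-report score, and RMSE/MAE reporting. The data are synthetic stand-ins, the regressor choice is an assumption, and the score range is only BDI-II-like.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Stand-in data: one motion-history-like image per subject plus a self-reported
# score in a BDI-II-like 0-63 range (an assumption); real pipelines would use
# aligned face crops and LMHI/GMHI representations.
rng = np.random.default_rng(1)
images = rng.random((40, 64, 64))
scores = rng.uniform(0, 63, size=40)

# Appearance features: HOG descriptors computed per image
features = np.array([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for img in images])

train, test = slice(0, 30), slice(30, 40)
model = SVR().fit(features[train], scores[train])
pred = model.predict(features[test])

rmse = np.sqrt(mean_squared_error(scores[test], pred))
mae = mean_absolute_error(scores[test], pred)
print(f"RMSE = {rmse:.2f}, MAE = {mae:.2f}")
```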
79

Componential Analysis of Emotional Experience: Study of Physiological and Expressive Components and Significance for Affective Computing

MORTILLARO, MARCELLO 28 February 2007 (has links)
Although emotion has been studied for many years, in some respects it remains poorly understood. In particular, few confirmed results are available on the vocal expression of emotions and on their physiological aspects. This lack of results can be addressed by adopting Scherer's component process model, in which an emotion is a process that unfolds in and across several components, including the expressive and physiological ones. Consequently, emotions can be understood only through a multi-component approach. Three studies were conducted. The first investigated emotional expression, confirming some of the componential model's predictions for vocal production. The second analyzed the autonomic nervous system activity of emotional experience, supporting this component's function of resource mobilization. The third related the two components, identifying some aspects of their integrated and interdependent functioning. Finally, the adoption of a component process approach to automatic emotion recognition, within the affective computing paradigm, is suggested.
