301 |
The Effects of Facial Cues on Consumer Judgment and Decision-Making
Liu, Fan 01 January 2015 (has links)
This dissertation investigates the roles of facial cues in consumer behavior. Specifically, the research examines the effects of facial structural resemblance, facial expressions, and other perceptual cues, in both individual and group settings, on consumer judgment and decision-making. Essay 1 examines the influence of facial resemblance on consumers' product purchase likelihood. This effect is moderated by consumers' mental construal, such that the effect of increased facial resemblance on product purchase likelihood occurs among consumers with high-level construals but not among those with low-level construals. Results of three experimental studies show that increased facial resemblance among team members enhances the perceived entitativity of the group, which in turn leads to stronger intentions to purchase the product offered by the group. Essay 2 investigates the differential effects of recipients' group entitativity on two types of donation (time vs. money). Through three studies, the research demonstrates that high (versus low) group entitativity among the recipients increases donations of time but decreases donations of money. These differential effects on donations of time versus money are driven by the emotional or cognitive well-being consumers associate with time or money donations. In Essay 3, the effect of smile intensity on customer behavior is shown to be moderated by power and the salience of ulterior motive. When employees' ulterior motive is not salient to customers, low-power customers evaluate an employee with intensified smiles more favorably than high-power customers do. In contrast, when the ulterior motive is made salient, high-power rather than low-power customers react more positively to smile intensity. Results show that the interactive effects among smile, power, and ulterior motive are driven by customers' warmth and competence perceptions. Collectively, this dissertation focuses on consumers' face-based judgments of individuals and teams, and investigates how such facial cues influence consumers' attitudes, purchase intentions, and prosocial behavior.
|
302 |
Image Emotion Analysis: Facial Expressions vs. Perceived Expressions
Ayyalasomayajula, Meghana 20 December 2022 (has links)
No description available.
|
303 |
Rozpoznávání výrazu tváře / Facial expression recognition
Vránová, Markéta January 2016 (has links)
This project deals with automatic recognition of facial expressions in colour pictures. First, colour-based face detection is performed using three colour spaces: RGB, HSV and YCbCr. Next, the pictures are automatically cropped so that only the face region is present; this is accomplished by computing the borders of the face region from the known positions of the eyes, nose and mouth. From the face region, a feature vector is obtained using a bank of Gabor filters. The project introduces two different kinds of Gabor filters and proposes a new bank of filters. The feature vector is used as the input to a neural network, which was trained on a set of pictures from the AR database created for facial expression recognition. The output of the network is the facial expression class assigned to the input picture. The project also describes tests with different settings of the neural network and presents and discusses the recognition results.
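A minimal sketch of this kind of Gabor-bank feature extraction, using OpenCV, might look as follows; the orientations, wavelengths and pooling statistics here are illustrative assumptions, not the project's actual settings:

```python
# Sketch of Gabor-bank feature extraction for a cropped face image.
# Bank parameters are illustrative; the thesis's actual filter design
# and AR-database preprocessing are not reproduced here.
import cv2
import numpy as np

def gabor_feature_vector(face_gray, ksize=31,
                         thetas=np.arange(0, np.pi, np.pi / 8),
                         lambdas=(4, 8, 16)):
    """Filter the face with a bank of Gabor filters and pool each
    response into mean/std statistics."""
    features = []
    for theta in thetas:          # 8 orientations (assumed)
        for lam in lambdas:       # 3 wavelengths (assumed)
            # args: ksize, sigma, theta, lambda, gamma, psi
            kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lam, 0.5, 0)
            resp = cv2.filter2D(face_gray, cv2.CV_32F, kern)
            features.extend([resp.mean(), resp.std()])
    return np.asarray(features, dtype=np.float32)

# Example: 24 filters * 2 statistics = a 48-dimensional network input
face = cv2.imread("face_cropped.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
x = gabor_feature_vector(face)
```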
|
304 |
基於方向性邊緣特徵之即時物件偵測與追蹤 / Real-Time Object Detection and Tracking using Directional Edge Maps
王財得, Wang, Tsai-Te Unknown Date (has links)
在電腦視覺的研究之中,有關物件的偵測與追蹤應用在速度及可靠性上的追求一直是相當具有挑戰性的問題,而現階段發展以視覺為基礎互動式的應用,所使用到技術諸如:類神經網路、SVM及貝氏網路等。
本論文中我們持續深入此領域，並提出及發展一個方向性邊緣特徵集(DEM)與修正後的AdaBoost訓練演算法相互結合，期能有效提高物件偵測與識別的速度及準確性，在實際驗證中，我們將之應用於多種角度之人臉偵測，以及臉部表情識別等兩個主要問題之上；在人臉偵測的應用中，我們使用CMU的臉部資料庫並與Viola & Jones方法進行分析比較，在準確率上，我們的方法擁有79%的recall及90%的precision，而Viola & Jones的方法則分別為81%及77%；在運算速度上，同樣處理512x384的影像，相較於Viola & Jones需時132ms，我們提出的方法則有較佳的82ms。
此外，於表情識別的應用中，我們結合運用Component-based及Action-unit model兩種方法。前者的優勢在於提供臉部細節特徵的定位及追蹤變化，後者主要功用則為進行情緒表情的分類。我們對於四種不同情緒表情的辨識準確度如下：高興(83.6%)、傷心(72.7%)、驚訝(80%)、生氣(78.1%)。在實驗中，可以發現生氣及傷心兩種情緒較難區分，而高興與驚訝則較易識別。 / Rapid and robust detection and tracking of objects is a challenging problem in computer vision research. Techniques such as artificial neural networks, support vector machines and Bayesian networks have been developed to enable interactive vision-based applications. In this thesis, we tackle this issue by devising a novel feature descriptor named directional edge maps (DEM). When combined with a modified AdaBoost training algorithm, the proposed descriptor produces effective results in many object detection and recognition tasks.
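A rough sketch of a directional-edge-map style descriptor, assuming gradient orientations quantized into a handful of directional bins; the thesis's exact DEM construction and its modified AdaBoost training are not reproduced here, and the bin count and magnitude threshold are illustrative:

```python
# Sketch: one binary edge map per gradient direction. Assumes 4 bins
# (roughly 0/45/90/135 degrees) and an arbitrary magnitude threshold.
import cv2
import numpy as np

def directional_edge_maps(gray, n_dirs=4, mag_thresh=50.0):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    bins = np.floor(ang / (np.pi / n_dirs)).astype(int) % n_dirs
    maps = np.zeros((n_dirs,) + gray.shape, dtype=np.uint8)
    for d in range(n_dirs):
        # mark pixels whose strong gradient falls in direction bin d
        maps[d][(bins == d) & (mag > mag_thresh)] = 1
    return maps
```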
We have applied the newly developed method to two important object recognition problems, namely face detection and facial expression recognition. The DEM-based methodology conceived in this thesis is capable of detecting faces from multiple views. To test the efficacy of our face detection mechanism, we have performed a comparative analysis against the Viola and Jones algorithm using the Carnegie Mellon University face database. The recall and precision of our approach are 79% and 90%, respectively, compared to 81% and 77% for the Viola and Jones algorithm. Our algorithm is also more efficient, requiring only 82 ms (compared to 132 ms for Viola and Jones) to process a 512x384 image.
To achieve robust facial expression recognition, we have combined component-based methods with action-unit model-based approaches. The component-based method is mainly utilized to locate important facial features and track their deformations; the action-unit model-based approach is then employed to carry out expression recognition. The accuracy of classifying the different emotion types is as follows: happiness 83.6%, sadness 72.7%, surprise 80%, and anger 78.1%. It turns out that anger and sadness are more difficult to distinguish, whereas happiness and surprise have higher recognition rates.
|
305 |
De la maltraitance à l’enfance aux comportements d’agression à l’âge adulte : quel est le rôle de la réactivité émotionnelle et comportementale? / From childhood maltreatment to aggressive behaviour in adulthood: what is the role of emotional and behavioural reactivity?
Laurin, Mélissa 04 1900 (has links)
En dépit des efforts déployés pour diminuer la prévalence de la maltraitance à l’enfance, celle-ci serait associée à des difficultés non négligeables, dont la manifestation d’agression. La réactivité émotionnelle et comportementale, incluant la colère, la peur et l’évitement, est proposée comme mécanisme expliquant la relation unissant la maltraitance à l’agression. Quatre objectifs sont poursuivis à cette fin, soit d’examiner la relation notée entre: (1) la maltraitance et l’agression, (2) la maltraitance et la colère, la peur, ainsi que l’évitement, (3) la colère, la peur, ainsi que l’évitement et l’agression et (4) tester formellement le rôle médiateur et modérateur de la colère, la peur et l’évitement à cette relation. Les données de 160 hommes âgés de 18 à 35 ans ayant été exposés ou non à de la maltraitance ont été colligées par le biais de questionnaires et d’une tâche de provocation sociale permettant de mesurer les expressions faciales de colère et de peur, ainsi que les comportements d’évitement. Les résultats suggèrent que la maltraitance et les comportements d’évitement sont associés à l’agression. La maltraitance ne serait toutefois pas liée à la colère, à la peur et à l’évitement. Alors que les résultats suggèrent que ces indices n’aient pas de rôles médiateurs dans la relation entre la maltraitance et l’agression, la réactivité aux plans de la colère et de l’évitement magnifierait cette relation. Ainsi, les résultats invitent à prendre en compte les expériences de maltraitance et l’intensité de la réactivité émotionnelle et comportementale dans les interventions afin de cibler les individus plus à risque d’avoir recours à l’agression. / Despite efforts to reduce the prevalence of childhood maltreatment, it is known to be associated with a variety of physical and mental health difficulties, including the manifestation of aggression. Emotional and behavioral reactivity, including anger, fear and avoidance, is proposed as a mechanism explaining the relationship between maltreatment and aggression. Four objectives are pursued to this end: to examine the relationships between (1) maltreatment and aggression, (2) maltreatment and anger, fear and avoidance, and (3) anger, fear and avoidance and aggression, and (4) to formally test the mediating and moderating roles of anger, fear and avoidance in this relationship. Data from 160 men aged 18 to 35 who were either exposed or not exposed to maltreatment were compiled through questionnaires and a social provocation task measuring facial expressions of anger and fear as well as avoidance behaviors. Results suggest that maltreatment experiences and avoidance behaviors are associated with aggression. Maltreatment experiences are, however, not linked to anger, fear or avoidance. While the results suggest that these indicators play no mediating role in the relationship between maltreatment and aggression, anger and avoidance reactivity would magnify this relationship. Thus, the results suggest that maltreatment experiences and the intensity of emotional and behavioral reactivity should be taken into account in interventions in order to target those most at risk of resorting to aggression.
|
306 |
Identifikace emočního výrazu se zaměřením na porovnávání slyšících, neslyšících a nedoslýchavých / Identification of emotional expression with a focus on the comparison of deaf, hearing and hearing-impaired
Doubková, Alžběta January 2014 (has links)
This thesis is focused on the identification of emotional expressions in the human face. The theoretical part includes an introduction to the topic of emotions, the history of research on the identification of emotional expressions, and a description of the expressions of the basic emotions and their recognition, followed by the characteristics of the hearing-impaired group. The empirical part describes research on the identification of emotions in the face from portraits. The aim of this thesis is to compare the accuracy of identification of emotional expressions among groups of hearing, deaf and hard-of-hearing people, and its development across age categories. In my research I focus on seven basic emotions (fear, anger, sadness, surprise, happiness, disgust and contempt) and one social emotion (shame). The research did not confirm my assumptions: no statistically significant difference among the three groups in the overall identification of emotional expressions was found. The only difference was in the recognition of disgust, where the hearing performed better. In the overall comparison across ages within the hearing-impaired (the deaf and the hard of hearing together), no significant differences were discovered either. Nevertheless, within each age...
|
307 |
Vision based facial emotion detection using deep convolutional neural networks
Julin, Fredrik January 2019 (has links)
Emotion detection, also known as facial expression recognition, is the task of mapping an emotion to some form of input data taken from a human. It is a powerful tool for extracting valuable information from individuals, with uses ranging from assessing medical conditions such as depression to gathering customer feedback. Solving the facial expression recognition problem requires several smaller subtasks, which together form the complete system. Breaking down the larger task, these subtasks can be thought of as a pipeline implementing the steps needed to classify an input and output an emotion. With the recent rise of computer vision, images are often used as the input to such systems and have shown great promise for facial expression recognition, as the human face conveys the subject's emotional state and contains more information than other inputs such as text or audio. Many current state-of-the-art systems combine computer vision with another rising field, AI, or more specifically deep learning. These deep learning methods typically use a special form of neural network, the convolutional neural network, which specializes in extracting information from images; classification is then performed with the SoftMax function, which acts as the final stage of the facial expression pipeline before the output. This thesis explores these methods of using convolutional neural networks to extract information from images, and builds upon them by exploring a set of machine learning algorithms that replace the more commonly used SoftMax function as the classifier, in an attempt both to increase accuracy and to optimize the use of computational resources. The work also compares two techniques for the face detection subtask in the pipeline. One, the Viola-Jones algorithm, is more frequently used in the state of the art and is said to be more viable for real-time applications; the other is a deep learning approach using a state-of-the-art convolutional neural network to perform the detection, often speculated to be too computationally intensive to run in real time. Applying a newly developed convolutional neural network inspired by the state of the art together with the SoftMax classifier, the final performance did not reach state-of-the-art accuracy. However, the machine learning classifiers show promise and outperform the SoftMax function in several cases when trained on far fewer samples. Furthermore, the results from implementing and testing a pure deep learning approach, using deep learning algorithms for both the detection and classification stages of the pipeline, show that deep learning might outperform the classic Viola-Jones algorithm in terms of both detection rate and frames per second.
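A minimal sketch of the core idea, training an SVM on CNN penultimate-layer features in place of the SoftMax stage; the tiny architecture, the 48x48 input size and the scikit-learn SVM are assumptions for illustration rather than the thesis's actual setup:

```python
# Sketch of replacing the SoftMax classifier with an SVM trained on CNN
# features. The CNN and SVM settings below are illustrative only.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 12 * 12, 128), nn.ReLU())
        self.softmax_head = nn.Linear(128, n_classes)  # the stage an SVM would replace

    def forward(self, x):                  # x: (N, 1, 48, 48) face crops
        return self.softmax_head(self.features(x))

model = EmotionCNN()
with torch.no_grad():                      # after training the CNN normally...
    feats = model.features(torch.randn(64, 1, 48, 48)).numpy()  # dummy batch
labels = torch.randint(0, 7, (64,)).numpy()                     # dummy labels
svm = SVC(kernel="rbf").fit(feats, labels)  # SVM replaces the SoftMax stage
```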
|
308 |
Contributions à l'analyse de visages en 3D : approche régions, approche holistique et étude de dégradations / Contributions to 3D face analysis: region-based approach, holistic approach and study of degradations
Lemaire, Pierre 29 March 2013 (has links)
Historiquement et socialement, le visage est chez l'humain une modalité de prédilection pour déterminer l'identité et l'état émotionnel d'une personne. Il est naturellement exploité en vision par ordinateur pour les problèmes de reconnaissance de personnes et d'émotions. Les algorithmes d'analyse faciale automatique doivent relever de nombreux défis : ils doivent être robustes aux conditions d'acquisition ainsi qu'aux expressions du visage, à l'identité, au vieillissement ou aux occultations selon le scénario. La modalité 3D a ainsi été récemment investiguée. Elle a l'avantage de permettre aux algorithmes d'être, en principe, robustes aux conditions d'éclairage ainsi qu'à la pose. Cette thèse est consacrée à l'analyse de visages en 3D, et plus précisément la reconnaissance faciale ainsi que la reconnaissance d'expressions faciales en 3D sans texture. Nous avons dans un premier temps axé notre travail sur l'apport que pouvait constituer une approche régions aux problèmes d'analyse faciale en 3D. L'idée générale est que le visage, pour réaliser les expressions faciales, est déformé localement par l'activation de muscles ou de groupes musculaires. Il est alors concevable de décomposer le visage en régions mimiques et statiques, et d'en tirer ainsi profit en analyse faciale. Nous avons proposé une paramétrisation spécifique, basée sur les distances géodésiques, pour rendre la localisation des régions mimiques et statiques le plus robustes possible aux expressions. Nous avons également proposé une approche régions pour la reconnaissance d'expressions du visage, qui permet de compenser les erreurs liées à la localisation automatique de points d'intérêt. Les deux approches proposées dans ce chapitre ont été évaluées sur des bases standards de l'état de l'art. Nous avons également souhaité aborder le problème de l'analyse faciale en 3D sous un autre angle, en adoptant un système de cartes de représentation de la surface 3D. Nous avons ainsi proposé de projeter sur le plan 2D des informations liées à la topologie de la surface 3D, à l'aide d'un descripteur géométrique inspiré d'une mesure de courbure moyenne. Les problèmes de reconnaissance faciale et de reconnaissance d'expressions 3D sont alors ramenés à ceux de l'analyse faciale en 2D. Nous avons par exemple utilisé SIFT pour l'extraction puis l'appariement de points d'intérêt en reconnaissance faciale. En reconnaissance d'expressions, nous avons utilisé une méthode de description des visages basée sur les histogrammes de gradients orientés, puis classé les expressions à l'aide de SVM multi-classes. Dans les deux cas, une méthode de fusion simple permet l'agrégation des résultats obtenus à différentes échelles. Ces deux propositions ont été évaluées sur la base BU-3DFE, montrant de bonnes performances tout en étant complètement automatiques. Enfin, nous nous sommes intéressés à l'impact des dégradations des modèles 3D sur les performances des algorithmes d'analyse faciale. Ces dégradations peuvent avoir plusieurs origines, de la capture physique du visage humain au traitement des données en vue de leur interprétation par l'algorithme. Après une étude des origines et une théorisation des types de dégradations potentielles, nous avons défini une méthodologie permettant de chiffrer leur impact sur des algorithmes d'analyse faciale en 3D. Le principe est d'exploiter une base de données considérée sans défauts, puis de lui appliquer des dégradations canoniques et quantifiables. 
Les algorithmes d'analyse sont alors testés en comparaison sur les bases dégradées et originales. Nous avons ainsi comparé le comportement de 4 algorithmes de reconnaissance faciale en 3D, ainsi que leur fusion, en présence de dégradations, validant par la diversité des résultats obtenus la pertinence de ce type d'évaluation. / Historically and socially, the human face is one of the most natural modalities for determining the identity and the emotional state of a person. It has been exploited by computer vision scientists within the automatic facial analysis domain. Still, proposed algorithms classically encounter a number of shortcomings. They must be robust to varied acquisition conditions. Depending on the scenario, they must take into account intra-class variations such as expression, identity (for facial expression recognition), aging, and occlusions. Thus, the 3D modality has been suggested as a counterpoint for a number of those issues. In principle, 3D views of an object are insensitive to lighting conditions. They are, theoretically, pose-independent as well. The present thesis work is dedicated to 3D face analysis, more precisely non-textured 3D face recognition and 3D facial expression recognition. In the first instance, we have studied the benefits of a region-based approach to 3D face analysis problems. The general concept is that a face, when performing facial expressions, is deformed locally by the activation of muscles or groups of muscles. We then assumed that it was possible to decompose the face into several regions of interest, assumed to be either mimic or static. We have proposed a specific facial surface parametrization, based upon geodesic distance, designed to make region localization as robust as possible with regard to expression variations. We have also used a region-based approach for 3D facial expression recognition, which allows us to compensate for errors in automatic landmark localization. We also wanted to experiment with a representation map system. Here, the main idea is to project 3D surface topology data onto the 2D plane. This translation to the 2D domain allows us to benefit from the large amount of related work in the literature. We first represent the face as a set of maps at different scales, with the help of a geometric operator inspired by the mean curvature measure. For face recognition, we perform SIFT keypoint extraction, then match the extracted keypoints between corresponding maps. For facial expression recognition, we normalize and describe every map using the Histograms of Oriented Gradients algorithm, then classify expressions using multi-class SVM. In both cases, a simple fusion step allows us to aggregate the results obtained on every single map. Finally, we have studied the impact of 3D model degradations on the performance of 3D facial analysis algorithms. A 3D facial scan may be an altered representation of its real-life model for several reasons, ranging from the physical capture of the human model to data processing. We propose a methodology that allows us to quantify the impact of each type of degradation on the performance of 3D face analysis algorithms. The principle is to build a database regarded as free of defects, then to apply measurable degradations to it. Algorithms are further tested on clean and degraded datasets, which allows us to quantify the performance loss caused by degradations.
As an experimental proof of concept, we have tested four different algorithms, as well as their fusion, following the aforementioned protocol. The diversity of behaviours observed across the various types of degradations considered shows the relevance of our approach.
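A minimal sketch of such a degradation protocol, assuming Gaussian vertex noise as the canonical degradation and a placeholder recognition_accuracy function standing in for any 3D face recognition algorithm under test:

```python
# Sketch of the degradation protocol: apply a quantifiable degradation
# to clean 3D scans, re-run the same algorithm on both versions, and
# report the accuracy loss. `recognition_accuracy` is a placeholder.
import numpy as np

def add_vertex_noise(vertices, sigma):
    """Gaussian noise along each axis; sigma in the mesh's length units (assumed mm)."""
    return vertices + np.random.normal(0.0, sigma, vertices.shape)

def degradation_impact(clean_scans, labels, recognition_accuracy,
                       sigmas=(0.2, 0.5, 1.0)):
    baseline = recognition_accuracy(clean_scans, labels)
    impact = {}
    for sigma in sigmas:  # one measurable degradation level per run
        degraded = [add_vertex_noise(v, sigma) for v in clean_scans]
        impact[sigma] = baseline - recognition_accuracy(degraded, labels)
    return impact  # accuracy loss attributable to each noise level
```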
|
309 |
3D face analysis : landmarking, expression recognition and beyond / Reconnaissance de l'expression du visage
Zhao, Xi 13 September 2010 (has links)
Cette thèse de doctorat est dédiée à l’analyse automatique de visages 3D, incluant la détection de points d’intérêt et la reconnaissance de l’expression faciale. En effet, l’expression faciale joue un rôle important dans la communication verbale et non verbale, ainsi que pour exprimer des émotions. Ainsi, la reconnaissance automatique de l’expression faciale offre de nombreuses opportunités et applications, et est en particulier au coeur d’interfaces homme-machine "intelligentes" centrées sur l’être humain. Par ailleurs, la détection automatique de points d’intérêt du visage (coins de la bouche et des yeux, ...) permet la localisation d’éléments du visage qui est essentielle pour de nombreuses méthodes d’analyse faciale telle que la segmentation du visage et l’extraction de descripteurs utilisée par exemple pour la reconnaissance de l’expression. L’objectif de cette thèse est donc d’élaborer des approches de détection de points d’intérêt sur les visages 3D et de reconnaissance de l’expression faciale pour finalement proposer une solution entièrement automatique de reconnaissance de l’activité faciale incluant l’expression et les unités d’action (ou Action Units). Dans ce travail, nous avons proposé un réseau de croyance bayésien (Bayesian Belief Network ou BBN) pour la reconnaissance d’expressions faciales ainsi que d’unités d’action. Un modèle statistique de caractéristiques faciales (Statistical Facial feAture Model ou SFAM) a également été élaboré pour permettre la localisation des points d’intérêt sur laquelle s’appuie notre BBN afin de permettre la mise en place d’un système entièrement automatique de reconnaissance de l’expression faciale. Nos principales contributions sont les suivantes. Tout d’abord, nous avons proposé un modèle de visage partiel déformable, nommé SFAM, basé sur le principe de l’analyse en composantes principales. Ce modèle permet d’apprendre à la fois les variations globales de la position relative des points d’intérêt du visage (configuration du visage) et les variations locales en terme de texture et de forme autour de chaque point d’intérêt. Différentes instances de visages partiels peuvent ainsi être produites en faisant varier les valeurs des paramètres du modèle. Deuxièmement, nous avons développé un algorithme de localisation des points d’intérêt du visage basé sur la minimisation d’une fonction objectif décrivant la corrélation entre les instances du modèle SFAM et les visages requête. Troisièmement, nous avons élaboré un réseau de croyance bayésien (BBN) dont la structure décrit les relations de dépendance entre les sujets, les expressions et les descripteurs faciaux. Les expressions faciales et les unités d’action sont alors modélisées comme les états du noeud correspondant à la variable expression et sont reconnues en identifiant le maximum de croyance pour tous les états. Nous avons également proposé une nouvelle approche pour l’inférence des paramètres du BBN utilisant un modèle de caractéristiques faciales pouvant être considéré comme une extension de SFAM. Finalement, afin d’enrichir l’information utilisée pour l’analyse de visages 3D, et particulièrement pour la reconnaissance de l’expression faciale, nous avons également élaboré un descripteur de visages 3D, nommé SGAND, pour caractériser les propriétés géométriques d’un point par rapport à son voisinage dans le nuage de points représentant un visage 3D. 
L’efficacité de ces méthodes a été évaluée sur les bases FRGC, BU3DFE et Bosphorus pour la localisation des points d’intérêt ainsi que sur les bases BU3DFE et Bosphorus pour la reconnaissance des expressions faciales et des unités d’action. / This Ph.D. thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and, in particular, is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides prior knowledge of the location of face landmarks, which is required by many face analysis methods such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, and finally to propose an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model learns both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying the model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network whose structure describes the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum belief over all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric properties of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
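A minimal sketch of the PCA machinery behind a statistical landmark model of this kind, covering only the global landmark-configuration modes; the local texture and geometry components of the actual SFAM are omitted, and the landmark and mode counts are illustrative:

```python
# Sketch of a PCA-based landmark model: learn the mean configuration and
# principal deformation modes, then synthesize instances by varying the
# model parameters. Illustrative only; not the thesis's actual SFAM.
import numpy as np

class LandmarkPCAModel:
    def fit(self, X, n_modes=10):
        """X: (n_faces, 3 * n_landmarks) flattened landmark coordinates."""
        self.mean = X.mean(axis=0)
        _, s, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.modes = vt[:n_modes]                        # principal deformation modes
        self.stddev = s[:n_modes] / np.sqrt(len(X) - 1)  # per-mode standard deviation
        return self

    def instance(self, b):
        """Generate a landmark configuration from model parameters b."""
        return self.mean + (b * self.stddev) @ self.modes

# Example: sample a plausible configuration within +/-3 std dev per mode
model = LandmarkPCAModel().fit(np.random.rand(200, 3 * 15))  # dummy training set
landmarks = model.instance(np.random.uniform(-3, 3, 10)).reshape(15, 3)
```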
|
310 |
Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis
Husseini Orabi, Ahmed January 2017 (has links)
We present a set of easy-to-use methods and tools to analyze human attention, behaviour, and physiological responses. A potential application of our work is evaluating user interfaces being used in a natural manner. Our approach is designed to be scalable and to work remotely on regular personal computers using inexpensive and noninvasive equipment.
The data sources our tool processes are nonintrusive and captured from video, i.e., eye tracking and facial expressions. For video data retrieval, we use a basic webcam. We investigate combinations of observation modalities to detect and extract affective and mental states.
Our tool provides a pipeline-based approach that 1) collects observational data, 2) incorporates and synchronizes the signal modalities mentioned above, 3) detects users' affective and mental states, 4) records user interaction with applications and pinpoints the parts of the screen users are looking at, and 5) analyzes and visualizes the results.
We describe the design, implementation, and validation of a novel multimodal signal fusion engine, the Deep Temporal Credence Network (DTCN). The engine uses Deep Neural Networks 1) to provide a generative and probabilistic inference model, and 2) to handle multimodal data such that performance does not degrade when some modalities are absent. We report the recognition accuracy of basic emotions for each modality. Then, we evaluate our engine in terms of its effectiveness at recognizing the six basic emotions and six mental states: agreeing, concentrating, disagreeing, interested, thinking, and unsure.
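One way a fusion engine can tolerate an absent modality is to encode each modality separately and fuse only the encoders whose inputs are present; the sketch below illustrates that masking idea only and is not the DTCN architecture (all layer sizes are assumptions):

```python
# Sketch of modality masking in a fusion model: missing modalities are
# simply excluded before fusion, so inference still runs without them.
import torch
import torch.nn as nn

class MaskedFusion(nn.Module):
    def __init__(self, dims=None, hidden=64, n_states=6):
        super().__init__()
        dims = dims or {"eye_tracking": 16, "facial_expression": 32}  # assumed sizes
        self.encoders = nn.ModuleDict(
            {name: nn.Linear(d, hidden) for name, d in dims.items()})
        self.head = nn.Linear(hidden, n_states)  # e.g. six mental states

    def forward(self, inputs):
        """inputs: dict of modality name -> tensor; missing ones simply absent."""
        encoded = [torch.relu(enc(inputs[name]))
                   for name, enc in self.encoders.items() if name in inputs]
        return self.head(torch.stack(encoded).mean(dim=0))  # average present modalities

model = MaskedFusion()
out = model({"facial_expression": torch.randn(8, 32)})  # eye tracking absent; still works
```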
Our principal contributions include 1) the implementation of a multimodal signal fusion engine, 2) real-time recognition of affective and primary mental states from nonintrusive and inexpensive modalities, and 3) novel mental state-based visualization techniques (3D heatmaps, 3D scanpaths, and widget heatmaps) that find parts of the user interface where users are perhaps unsure, annoyed, frustrated, or satisfied.
|