1

Spatio-temporal representation and analysis of facial expressions with varying intensities

Sariyanidi, Evangelos January 2017 (has links)
Facial expressions convey a wealth of information about our feelings, personality and mental state. In this thesis we seek efficient ways of representing and analysing facial expressions of varying intensities. Firstly, we analyse state-of-the-art systems by decomposing them into their fundamental components, in an effort to identify the practices common to successful systems. Secondly, we address the problem of sequence registration, which emerged as an open issue in our analysis. The encoding of the (non-rigid) motions generated by facial expressions is facilitated when the rigid motions caused by irrelevant factors, such as camera movement, are eliminated. We propose a sequence registration framework that is based on pre-trained regressors of Gabor motion energy. Comprehensive experiments show that the proposed method achieves very high registration accuracy even under difficult illumination variations. Finally, we propose an unsupervised representation learning framework for encoding the spatio-temporal evolution of facial expressions. The proposed framework is inspired by the Facial Action Coding System (FACS), which predates computer-based analysis. FACS encodes an expression in terms of localised facial movements and assigns an intensity score for each movement. The framework we propose mimics those two properties of FACS. Specifically, we propose to learn from data a linear transformation that approximates the facial expression variation in a sequence as a weighted sum of localised basis functions, where the weight of each basis function relates to movement intensity. We show that the proposed framework provides a plausible description of facial expressions, and leads to state-of-the-art performance in recognising expressions across intensities, from full-blown expressions to micro-expressions.
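To make the final contribution concrete, the learned representation described above can be written as a linear decomposition of the appearance change in a sequence. This is an illustrative formalisation only; the symbols are assumed, not taken from the thesis.

```latex
% Illustrative sketch: the appearance variation of a sequence is approximated
% as a weighted sum of K localised basis functions.
\[
  \Delta I(\mathbf{x}, t) \;\approx\; \sum_{k=1}^{K} w_k(t)\,\phi_k(\mathbf{x}),
  \qquad w_k(t) \geq 0,
\]
% where each \phi_k is a spatially localised basis function learned from data
% and the weight w_k(t) acts as an intensity score for the corresponding
% localised movement, mirroring the two FACS properties the framework mimics.
```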
2

POKERFACE: EMOTION BASED GAME-PLAY TECHNIQUES FOR COMPUTER POKER PLAYERS

Cockerham, Lucas 01 January 2004 (has links)
Numerous algorithms/methods exist for creating computer poker players. This thesis compares and contrasts them. A set of poker agents for the system PokerFace is then introduced. A survey of the problem of facial expression recognition is included in the hope that it may be used to build a better computer poker player.
3

Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition

Vadapalli, Hima Bindu January 2011 (has links)
Philosophiae Doctor - PhD / This research investigated the application of recurrent neural networks (RNNs) for recognition of facial expressions based on the facial action coding system (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of action units (AUs) that are defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences was 82.75% and 7.61%, respectively, while the classification using single static images yielded an RR and FAR of 79.47% and 9.22%, respectively. The better performance by the use of image sequences can be attributed to RNNs' ability, as stated above, to extract knowledge from time-series data. Subsequent research then investigated benchmarking dimensionality reduction, feature selection and network optimization techniques, in order to improve the performance provided by the use of image sequences. Results showed that an optimized network, using weight decay, gave the best RR and FAR of 85.38% and 6.24%, respectively. The next study was of the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple-output RNN. Results indicated that high inter-AU correlations do in fact aid classification models to gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time. This suggests the need for availability of a larger database of AUs, which could provide both individual AUs and AU combinations for further investigation. The final part of this research investigated the use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant when compared to the use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that the use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
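For readers unfamiliar with the two figures quoted throughout this abstract, the sketch below shows one common way to compute a recognition rate and false alarm rate for a single binary AU classifier. It is illustrative only and not taken from the thesis, whose exact definitions may differ.

```python
import numpy as np

def rr_far(y_true, y_pred):
    """Recognition rate (RR) and false alarm rate (FAR) for one binary AU classifier.

    y_true, y_pred: 1-D arrays of 0/1 labels (AU absent / AU present).
    RR  is the fraction of samples classified correctly.
    FAR is the fraction of truly negative samples wrongly flagged as positive.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    rr = float(np.mean(y_true == y_pred))
    negatives = ~y_true
    far = float(np.mean(y_pred[negatives])) if negatives.any() else 0.0
    return rr, far

# Per-AU figures would then be averaged over the 11 classifiers, in the way the
# abstract quotes e.g. 82.75% RR and 7.61% FAR for image sequences.
example_rr, example_far = rr_far([0, 0, 1, 1, 0], [0, 1, 1, 1, 0])
print(example_rr, example_far)   # 0.8 0.333...
```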
4

Method of modelling facial action units using partial differential equations

Ugail, Hassan, Ismail, N.B. January 2016 (has links)
In this paper we discuss a novel method of mathematically modelling facial action units for accurate representation of human facial expressions in 3 dimensions. Our method utilizes the approach of the Facial Action Coding System (FACS). It is based on a boundary-value approach, which utilizes a solution to a fourth order elliptic Partial Differential Equation (PDE) subject to a suitable set of boundary conditions. Here the PDE surface generation method for human facial expressions is utilized in order to generate a wide variety of facial expressions in an efficient and realistic way. For this purpose, we identify a set of boundary curves corresponding to the key features of the face which in turn define a given facial expression in 3 dimensions. The action units (AUs) relating to the FACS are then efficiently represented in terms of Fourier coefficients relating to the boundary curves, which enables us to store both the face and the facial expressions in an efficient way.
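As a hedged illustration of the boundary-value formulation described above, the surface patch X(u, v) can be taken as a solution of a fourth order elliptic equation driven entirely by boundary curves. The specific operator, parameterisation, and Fourier form below are assumptions consistent with the usual PDE surface generation method, not quoted from the paper.

```latex
% Illustrative only: a fourth-order elliptic PDE of the kind commonly used for
% PDE surface generation, with (u, v) the surface parameters and a > 0 a
% smoothing parameter weighting the two parametric directions.
\[
  \left( \frac{\partial^{2}}{\partial u^{2}}
       + a^{2}\,\frac{\partial^{2}}{\partial v^{2}} \right)^{\!2}
  \mathbf{X}(u,v) = 0,
\]
% subject to boundary conditions given by the extracted facial boundary curves.
% Representing each boundary curve by a truncated Fourier series, e.g.
% \mathbf{P}(v) = \mathbf{a}_0 + \sum_{n=1}^{N}\bigl(\mathbf{a}_n \cos nv
% + \mathbf{b}_n \sin nv\bigr), lets a face and its expressions be stored as a
% small set of Fourier coefficients, as the abstract describes.
```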
5

Facial Analysis for Real-Time Application: A Review in Visual Cues Detection Techniques

Yap, Moi Hoon, Ugail, Hassan, Zwiggelaar, R. 30 August 2012 (has links)
Emerging applications in surveillance, the entertainment industry and other human-computer interaction applications have motivated the development of real-time facial analysis research covering detection, tracking and recognition. In this paper, the authors present a review of recent facial analysis for real-time applications, providing an up-to-date review of research efforts in human computing techniques in the visible domain. The main goal is to provide a comprehensive reference source for researchers involved in real-time facial analysis, regardless of specific research areas. First, the authors undertake a thorough survey and comparison of face detection techniques. In this survey, they discuss some prominent face detection methods presented in the literature. The performance of the techniques is evaluated by using benchmark databases. Subsequently, the authors provide an overview of the state of the art in facial expression analysis and the importance of psychology inherent in facial expression analysis. Over recent decades, facial expression analysis has slowly evolved into automatic facial expression analysis due to the popularity of digital media and the maturity of computer vision. Hence, the authors review some existing automatic facial expression analysis techniques. Finally, the authors provide an exemplar for the development of a real-time facial analysis application and propose a model for facial analysis. This review shows that facial analysis for real-time applications involves multi-disciplinary aspects, and it is important to take all domains into account when building a reliable system.
6

Application of Automated Facial Expression Analysis and Facial Action Coding System to Assess Affective Response to Consumer Products

Clark, Elizabeth A. 17 March 2020 (has links)
Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through hedonic testing but often does not capture affectual response as it pertains to product-generated (PG) and product-associated (PA) emotions. This research sought to assess the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing literature was performed that focused on the Facial Action Coding System (FACS) and its use to investigate consumer affect and characterize human emotional response to product-based stimuli, which revealed inconsistencies in how FACS is carried out as well as how emotional response is inferred from Action Unit (AU) activation. Automatic Facial Expression Analysis (AFEA), which automates FACS and translates the facial muscular positioning into the basic universal emotions, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). The ST-IAT did not yield significant PA emotions for packaged milk (p>0.05), but correspondence analysis of CATA data produced PA emotion insights including term selection based on arousal and underlying approach/withdrawal motivation related to packaging pigmentation. Time series statistical analysis of AFEA data provided increased insights on significant emotion expression, but the lack of difference (p>0.05) between certain expressed emotions that share no related AUs, such as happy and disgust, indicates that AFEA software may not be identifying AUs and determining emotion-based inferences in agreement with FACS. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine if the sensory-engaging nature of control stimuli could impact time series statistical analysis of AFEA data. When compared against the limited sensory-engaging (blank screen) control, contempt, happy, and angry were expressed more intensely (p<0.025) and with greater incidence for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (albeit fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken while comparing the control and experimental stimuli in time series analysis, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer product-related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or facial actions related to sensory perception for consumer products such as foods and beverages.
/ Doctor of Philosophy / Sensory and consumer sciences seek to comprehend the influences of sensory perception on consumer behaviors such as product liking and purchase. The food industry assesses product liking through consumer testing but often does not capture consumer response as it pertains to emotions such as those experienced while directly interacting with a product (i.e., product-generated emotions, PG) or those attributed to the product based on external information such as branding, marketing, nutrition, social environment, physical environment, memories, etc. (product-associated emotions, PA). This research investigated the application of PA and PG emotion methodology to better understand consumer experiences. A systematic review of the existing scientific literature was performed that focused on the Facial Action Coding System (FACS), a process used to determine facially expressed emotion from facial muscular positioning, and its use to investigate consumer behavior and characterize human emotional response to product-based stimuli; the review revealed inconsistencies in how FACS is carried out as well as how emotional response is determined from facial muscular activation. Automatic Facial Expression Analysis (AFEA), which automates FACS, was then used in a two-part study. In the first study (n=50 participants), AFEA, a Check-All-That-Apply (CATA) emotions questionnaire, and a Single-Target Implicit Association Test (ST-IAT) were used to characterize the relationship between PA as well as PG emotions and consumer behavior (acceptability, purchase intent) towards milk in various types of packaging (k=6). While the ST-IAT did not yield significant results (p>0.05), CATA data illustrated term selection based on motivation to approach and/or withdraw from milk based on packaging color. Additionally, the lack of difference (p>0.05) between emotions that do not produce similar facial muscle activations, such as happy and disgust, indicates that AFEA software may not be determining emotions as outlined in the established FACS procedures. In the second study, AFEA data from the sensory evaluation (n=48 participants) of light-exposed milk stimuli (k=4) stored in packaging with various light-blocking properties underwent time series statistical analysis to determine if the nature of the control stimulus itself could impact the analysis of AFEA data. When compared against the limited sensory-engaging control (a blank screen), contempt, happy, and angry were expressed more intensely (p<0.025) and consistently for the light-exposed milk stimuli; neutral was expressed exclusively in the same manner for the blank screen. Comparatively, intense neutral expression (p<0.025) was brief, fragmented, and often accompanied by intense (although fleeting) expressions of happy, sad, or contempt for the sensory-engaging control (water); emotions such as surprised, scared, and sad were expressed similarly for the light-exposed milk stimuli. As such, it was determined that care should be taken, as facial activation of muscles/AUs related to sensory perception (e.g., chewing, smelling) can impact the resulting interpretation. Collectively, the use of PA and PG emotion methodology provided additional insights into consumer product-related behaviors. However, it is hard to conclude whether AFEA is yielding emotional interpretations based on true facial expression of emotion or facial actions related to sensory perception for sensory-engaging consumer products such as foods and beverages.
7

FACIAL EXPRESSION DISCRIMINATES BETWEEN PAIN AND ABSENCE OF PAIN IN THE NON-COMMUNICATIVE, CRITICALLY ILL ADULT PATIENT

Arif-Rahu, Mamoona 03 December 2010 (has links)
BACKGROUND: Pain assessment is a significant challenge in critically ill adults, especially those unable to communicate their pain level. At present there is no universally accepted pain scale for use in the non-communicative (cognitively impaired, sedated, paralyzed or mechanically ventilated) patient. Facial expressions are considered among the most reflexive and automatic nonverbal indices of pain. The facial expression components of pain assessment tools include a variety of facial descriptors (wincing, frowning, grimacing, smile/relaxed) with inconsistent pain intensity ratings or checklists of behaviors. The lack of consistent facial expression description and quantification of pain intensity makes standardization of pain evaluation difficult. Although use of facial expression is an important behavioral measure of pain intensity, precise and accurate methods for interpreting the specific facial actions of pain in critically ill adults have not been identified. OBJECTIVE: The three specific aims of this prospective study were: 1) to describe facial actions during pain in non-communicative critically ill patients; 2) to determine facial actions that characterize the pain response; 3) to describe the effect of patient factors on facial actions during the pain response. DESIGN: Descriptive, correlational, comparative. SETTING: Two adult critical care units (Surgical Trauma ICU-STICU and Medical Respiratory ICU-MRICU) at an urban university medical center. SUBJECTS: A convenience sample of 50 non-communicative critically ill intubated, mechanically ventilated adult patients. Fifty-two percent were male, 48% Euro-American, with mean age 52.5 years (±17.2). METHODS: Subjects were video-recorded while in an intensive care unit at rest (baseline phase) and during endotracheal suctioning (procedure phase). Observer-based pain ratings were gathered using the Behavioral Pain Scale. Facial actions were coded from video using the Facial Action Coding System (FACS) over a 30 second time period for each phase. Pain scores were calculated from FACS action units (AUs) following the Prkachin and Solomon metric. RESULTS: Fourteen facial action units were associated with pain response and found to occur more frequently during the noxious procedure than during baseline. These included brow raiser, brow lower, orbit tightening, eye closure, head movements, mouth opening, nose wrinkling, nasal dilatation, and chin raise. The sum of intensity of the 14 AUs was correlated with BPS (r=0.70, P<0.0001) and with the facial expression component of BPS (r=0.58, P<0.0001) during the procedure. A stepwise multivariate analysis predicted 5 pain-relevant facial AUs [brow raiser (AU 1), brow lower (AU 4), nose wrinkling (AU 9), head turned right (AU 52), and head turned up (AU 53)] that accounted for 71% of the variance (Adjusted R2=0.682) in pain response (F=21.99, df=49, P<0.0001). The FACS pain intensity score based on the 5 pain-relevant facial AUs was associated with BPS (r=0.77, P<0.0001) and with the facial expression component of BPS (r=0.63, P<0.0001) during the procedure. Patient factors (e.g., age, gender, race, diagnosis, duration of endotracheal intubation, ICU length of stay, analgesic and sedative drug usage, and severity of illness) were not associated with the FACS pain intensity score. CONCLUSIONS: Overall, the FACS pain intensity score composed of inner brow raiser, brow lower, nose wrinkle, and head movements reflected a general pain action in our study.
Upper facial expression provides an important behavioral measure of pain which may be used in the clinical evaluation of pain in non-communicative critically ill patients. These results provide preliminary evidence that the Facial Action Coding System can discriminate a patient's acute pain experience.
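A minimal sketch of the kind of AU-based pain score described above follows. It is illustrative only: the AU set is the five-AU subset reported by the stepwise model, and the function and variable names are invented for the example.

```python
# Illustrative sketch, not the study's code: a FACS-based pain intensity score
# computed as the sum of the coded intensities of pain-relevant AUs, in the
# spirit of the Prkachin and Solomon approach referenced above.
PAIN_AUS = ("AU1", "AU4", "AU9", "AU52", "AU53")  # inner brow raiser, brow lower,
                                                  # nose wrinkler, head right, head up

def pain_intensity(coded_aus):
    """coded_aus maps AU codes to FACS intensity ratings, 0 (absent) to 5 (maximum)."""
    return sum(coded_aus.get(au, 0) for au in PAIN_AUS)

# Example: one coded 30-second observation window during endotracheal suctioning.
window = {"AU1": 3, "AU4": 4, "AU9": 2, "AU52": 1}
print(pain_intensity(window))  # -> 10
```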
8

Morphable 3d Facial Animation Based On Thin Plate Splines

Erdogdu, Aysu 01 May 2010 (has links) (PDF)
The aim of this study is to present a novel three dimensional (3D) facial animation method for morphing emotions and facial expressions from one face model to another. For this purpose, smooth and realistic face models were animated with thin plate splines (TPS). Neutral face models were animated and compared with the actual expressive face models. Neutral and expressive face models were obtained from subjects via a 3D face scanner. The face models were preprocessed for pose and size normalization. Then muscle and wrinkle control points were located on the source face with neutral expression according to human anatomy. The Facial Action Coding System (FACS) was used to determine the control points and the face regions in the underlying model. The final positions of the control points after a facial expression were obtained from the expressive scan data of the source face. Afterwards, the control points were transferred to the target face using the facial landmarks and TPS as the morphing function. Finally, the neutral target face was animated with the control points by TPS. In order to visualize the method, face scans with expressions composed of a selected subset of action units found in the Bosphorus Database were used. Five lower-face and three upper-face action units were simulated in this study. For the experimental results, the facial expressions were created on the 3D neutral face scan data of a human subject and the synthetic faces were compared to the subject's actual 3D scan data with the same facial expressions taken from the dataset.
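Thin plate splines are the morphing function named in this abstract. The sketch below is a generic TPS interpolation fit in NumPy, not the author's code, and the choice of the classic r^2 log r kernel is an assumption (some 3D formulations use other radial kernels).

```python
import numpy as np

def tps_kernel(r):
    # Classic thin-plate kernel U(r) = r^2 log r; treat this choice as an assumption.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)

def fit_tps(src, dst):
    """Fit a TPS warp mapping control points src (n x d) to dst (n x d)."""
    n, d = src.shape
    K = tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x]
    A = np.zeros((n + d + 1, n + d + 1))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    B = np.zeros((n + d + 1, d))
    B[:n] = dst
    return np.linalg.solve(A, B)                   # radial weights, then affine terms

def apply_tps(params, src, pts):
    """Warp arbitrary points pts (m x d) with a warp fitted on src."""
    n, _ = src.shape
    w, a = params[:n], params[n:]
    K = tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ w + P @ a
```

In a pipeline like the one described, the fit would be computed from corresponding facial landmarks on the source and target faces, and the resulting warp applied to the muscle and wrinkle control points before animating the neutral target face.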
9

Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis

Husseini Orabi, Ahmed January 2017 (has links)
We present a set of easy-to-use methods and tools to analyze human attention, behaviour, and physiological responses. A potential application of our work is evaluating user interfaces being used in a natural manner. Our approach is designed to be scalable and to work remotely on regular personal computers using inexpensive and noninvasive equipment. The data sources our tool processes are nonintrusive and captured from video, i.e. eye tracking and facial expressions. For video data retrieval, we use a basic webcam. We investigate combinations of observation modalities to detect and extract affective and mental states. Our tool provides a pipeline-based approach that 1) collects observational data, 2) incorporates and synchronizes the signal modalities mentioned above, 3) detects users' affective and mental state, 4) records user interaction with applications and pinpoints the parts of the screen users are looking at, and 5) analyzes and visualizes results. We describe the design, implementation, and validation of a novel multimodal signal fusion engine, Deep Temporal Credence Network (DTCN). The engine uses Deep Neural Networks to 1) provide a generative and probabilistic inference model, and 2) handle multimodal data such that its performance does not degrade due to the absence of some modalities. We report on the recognition accuracy of basic emotions for each modality. Then, we evaluate our engine in terms of effectiveness at recognizing the six basic emotions and six mental states, which are agreeing, concentrating, disagreeing, interested, thinking, and unsure. Our principal contributions include 1) the implementation of a multimodal signal fusion engine, 2) real-time recognition of affective and primary mental states from nonintrusive and inexpensive modalities, and 3) novel mental state-based visualization techniques: 3D heatmaps, 3D scanpaths, and widget heatmaps that find parts of the user interface where users are perhaps unsure, annoyed, frustrated, or satisfied.
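The DTCN itself is a deep generative model and is not reproduced here; the sketch below only illustrates, at the decision level, the property the abstract emphasises, namely that fusion should degrade gracefully when some modalities are absent. All names and the simple averaging rule are assumptions made for the example.

```python
import numpy as np

# Schematic decision-level fusion (not the thesis's DTCN): each available
# modality contributes a probability vector over the target mental states;
# missing modalities are skipped, so the fused estimate degrades gracefully.
STATES = ["agreeing", "concentrating", "disagreeing", "interested", "thinking", "unsure"]

def fuse(modality_probs):
    """modality_probs: e.g. {"eye_tracking": p1, "facial_expr": p2}; values are
    length-6 probability vectors, or None when that modality is unavailable."""
    available = [np.asarray(p, dtype=float) for p in modality_probs.values() if p is not None]
    if not available:
        return np.full(len(STATES), 1.0 / len(STATES))   # uninformative prior
    fused = np.mean(available, axis=0)
    return fused / fused.sum()

# Example: facial expression stream dropped out, eye tracking still available.
print(fuse({"eye_tracking": [0.1, 0.5, 0.05, 0.2, 0.1, 0.05], "facial_expr": None}))
```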
10

Etude biomécanique de la mimique faciale / Biomechanical study of facial mimics movements

Dakpé, Stéphanie 19 May 2015 (has links)
This thesis, part of the larger SIMOVI project (SImulation des MOuvements du VIsage), studies facial mimic movements by correlating the visible displacements of the skin with the underlying muscle movements, through the development of several methodologies. Since the full range of facial mimicry cannot be studied, given the multitude of possible expressions, the movements most relevant to this work were first identified. These movements were characterised in 23 young subjects through a qualitative, clinical descriptive analysis based on video recordings and on a coding scheme derived from FACS (Facial Action Coding System), establishing a reference cohort. After validating this methodology for the external characterisation of facial mimics, MRI analysis of the mimic muscles was carried out on 10 hemifaces from the healthy subjects of the cohort. Starting from in vivo anatomy, this characterisation relied on modelling selected mimic muscles (the zygomaticus major in particular) in order to extract morphological parameters, to perform a finer three-dimensional analysis of muscle morphology, and to better understand the kinematic behaviour of the muscle in different positions. The work is embedded in a broader set of questions: how can facial mimics be characterised objectively? Which qualitative and quantitative indicators of mimics can be collected, and how? How can these technological developments be used in clinical applications? It constitutes a preliminary step towards further studies and can provide reference data for modelling and simulating facial mimics, and for developing measurement tools for the follow-up and assessment of facial mimic disorders. / The aim of this research is to study facial mimic movements and to correlate external soft tissue (i.e., cutaneous) movement during facial mimics with internal (i.e., facial mimic muscle) movement. The entire facial mimicry could not be studied, which is why relevant movements were selected. Those movements were characterised by a clinically qualitative analysis in 23 young healthy volunteers. The analysis was performed with video recordings, including scaling derived from the FACS (Facial Action Coding System). After the validation of external characterisation by this method, internal characterisation of the mimic facial muscles was carried out in 10 volunteers. A model of a selected facial mimic muscle, the Zygomaticus Major, was built. With this work, morphological parameters could be extracted and 3D morphometric data were analysed to provide a better understanding of the kinematic behaviour of the muscle in different positions. This research is included in the SIMOVI project, which aims to determine to what extent a facial mimic can be evaluated objectively, to select the qualitative and quantitative indicators for the evaluation of mimic facial disorders, and to transfer our technological developments to the clinical field. This research is a first step and provides data for simulation or for the development of measurement tools for the evaluation and follow-up of mimic facial disorders.
