11

Emotion Recognition from Eye Region Signals using Local Binary Patterns

Jain, Gaurav 08 December 2011 (has links)
Automated facial expression analysis for Emotion Recognition (ER) is an active research area towards creating socially intelligent systems. The eye region, often considered integral for ER by psychologists and neuroscientists, has received very little attention in engineering and computer science. Using the eye region as an input signal presents several benefits for low-cost, non-intrusive ER applications. This work proposes two frameworks for ER from eye region images. The first framework uses Local Binary Patterns (LBP) as the feature extractor on grayscale eye region images. The results validate the eye region as a significant contributor to communicating emotion in the face, achieving high person-dependent accuracy. The system also generalizes well across different environmental conditions. In the second proposed framework, a color-based approach to ER from the eye region is explored using Local Color Vector Binary Patterns (LCVBP). LCVBP extend traditional LBP by incorporating color information, extracting a rich and highly discriminative feature set and thereby providing promising results.
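A minimal sketch of the first framework's feature step, assuming scikit-image's local_binary_pattern; the grid layout, LBP radius and uniform-pattern histogramming are illustrative choices, not the thesis's exact parameters:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(eye_region, n_points=8, radius=1, grid=(2, 4)):
    """Divide a grayscale eye region into a grid and concatenate per-cell
    uniform-LBP histograms into one descriptor."""
    lbp = local_binary_pattern(eye_region, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform codes 0..P, plus one bin for non-uniform
    feats = []
    for band in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# A random 32x96 "eye strip" stands in for a real cropped eye region.
eye = np.random.randint(0, 256, (32, 96)).astype(np.uint8)
print(lbp_histogram(eye).shape)  # (2 * 4 * 10,) = (80,)
```

Each cell histogram captures local micro-texture, while the grid of cells preserves the rough geometry of the eye region.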
12

A scalable metric learning based voting method for expression recognition

Wan, Shaohua 09 October 2013 (has links)
In this research work, we propose a facial expression classification method using metric learning-based k-nearest neighbor voting. To achieve accurate classification of a facial expression from frontal face images, we first learn a distance metric from training data that characterizes the feature space pattern, then use this metric to retrieve the nearest neighbors from the training dataset, and finally output the classification decision accordingly. An expression is represented as a fusion of face shape and texture. This representation is based on registering a face image with a landmarking shape model and extracting Gabor features from local patches around landmarks. This type of representation achieves robustness and effectiveness by using an ensemble of local patch feature detectors at a global shape level. A naive implementation of metric learning-based k-nearest neighbor voting would incur a time complexity proportional to the size of the training dataset, which precludes the method from being used with very large datasets. To scale to potentially larger databases, an approach similar to that in [24] is used to achieve approximate yet efficient ML-based kNN voting based on Locality Sensitive Hashing (LSH). A query example is hashed directly to a bucket of a pre-computed hash table where candidate nearest neighbors can be found, so there is no need to search the entire database for nearest neighbors. Experimental results on the Cohn-Kanade database and the Moving Faces and People database show that both ML-based kNN voting and its LSH approximation outperform the state of the art, demonstrating the superiority and scalability of our method.
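The following numpy-only sketch illustrates the two ideas, with a whitening transform standing in for the learned Mahalanobis metric and random-hyperplane hashing standing in for the LSH scheme of [24]; all names, sizes and parameters are assumptions for illustration:

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))    # training descriptors (shape + Gabor fusion)
y = rng.integers(0, 7, size=1000)  # expression labels

# (1) Learned metric: here a whitening matrix W, so d(a, b) = ||W a - W b||.
cov = np.cov(X.T) + 1e-3 * np.eye(64)
W = np.linalg.cholesky(np.linalg.inv(cov)).T
Xw = X @ W.T

# (2) LSH: the sign pattern under random hyperplanes is the bucket key.
H = rng.normal(size=(16, 64))  # 16 hash bits
def bucket(v):
    return tuple((H @ v > 0).astype(int))

table = defaultdict(list)
for i, v in enumerate(Xw):
    table[bucket(v)].append(i)

def classify(query, k=5):
    qw = W @ query
    cand = table.get(bucket(qw), range(len(Xw)))  # fall back to a full scan
    nearest = sorted(cand, key=lambda i: np.linalg.norm(Xw[i] - qw))[:k]
    return Counter(y[i] for i in nearest).most_common(1)[0][0]  # majority vote

print(classify(rng.normal(size=64)))
```

The point of the hashing step is that a query probes one pre-computed bucket instead of scanning all training examples, which is what makes the voting scheme scalable.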
13

Autonomous facial expression recognition using the facial action coding system

de la Cruz, Nathan January 2016 (has links)
Magister Scientiae - MSc / The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness and surprise), plus the neutral expression; or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
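As a hedged illustration of the hybrid idea, the sketch below maps a detected Action Unit set to a whole expression using commonly cited EMFACS-style prototype combinations; the thesis's actual rules and classifier may differ:

```python
# Commonly cited EMFACS-style prototypes (AU numbers); illustrative only.
PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},        # nose wrinkler + lip corner depressor
}

def expression_from_aus(detected_aus):
    """Score each prototype by the fraction of its AUs that were detected."""
    scores = {name: len(aus & detected_aus) / len(aus)
              for name, aus in PROTOTYPES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0.5 else "neutral"

print(expression_from_aus({6, 12, 25}))  # -> happiness
```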
14

An experimental investigation of social cognitive mechanisms in Asperger Syndrome and an exploration of potential links with paranoia

Jänsch, Claire January 2011 (has links)
Background: Social cognitive deficits are considered to be central to the interpersonal problems experienced by individuals with a diagnosis of Asperger syndrome, but existing research evidence regarding mentalising ability and emotion recognition ability is difficult to interpret and inconclusive. Higher levels of mental health problems are experienced in Asperger syndrome than in the general population, including depression, general anxiety and anxiety-related disorders. Clinical accounts have described symptoms of psychosis in individuals with autism spectrum disorders, including Asperger syndrome, and a number of research studies have reported elevated levels of delusional beliefs in this population. Investigations of social cognition in psychosis have highlighted a number of impairments in abilities such as mentalising and emotion recognition, as well as data-gathering and attribution biases that may be related to delusional beliefs. Similarly, a number of factors, including theory of mind difficulties, self-consciousness and anxiety, have been associated with delusional beliefs in individuals with Asperger syndrome, but there is a lack of agreement in the existing research. A preliminary model of delusional beliefs in Asperger syndrome has previously been proposed, which needs to be tested further and potentially refined. The current study aimed to further investigate social cognitive mechanisms in individuals with Asperger syndrome and to explore potential links with the development of paranoia. Method: Participants with a diagnosis of Asperger syndrome were recruited through a number of voluntary organisations and completed screening measures, the Autism Spectrum Quotient and the Wechsler Abbreviated Scale of Intelligence, to ensure their suitability for the study. Participants in the control group were recruited through the university and local community resources and were matched group-wise with the Asperger syndrome group for age, sex and IQ scores. The study compared the Asperger syndrome group (N=30) with the control group (N=30) with regard to their performance on four experimental tasks and their responses on a number of self-report questionnaires that were delivered as an online survey. The experimental tasks included two theory of mind measures, one designed to assess mental state decoding ability (the Reading the Mind in the Eyes Test) and one designed to assess mental state reasoning ability (the Hinting Task). The recognition of emotions was evaluated through the Facial Expression Recognition Task. The Beads Task was administered to assess data-gathering style and specifically to test for a Jumping to Conclusions bias. The self-report questionnaires were employed to measure levels of depression, general anxiety, social anxiety, self-consciousness and paranoid thoughts. Results: The Asperger syndrome group performed less well than the control group on tasks measuring mental state decoding ability, mental state reasoning ability and the recognition of emotion in facial expressions. Additionally, those with Asperger syndrome tended to make decisions on the basis of less evidence, and half of the group demonstrated a Jumping to Conclusions bias. Higher levels of depression, general anxiety, social anxiety and paranoid thoughts were reported in the Asperger syndrome group, and levels of depression and general anxiety were found to be associated with levels of paranoid thoughts.
Discussion: The results are considered in relation to previous research and revisions are proposed for the existing model of delusional beliefs in Asperger syndrome. A critical analysis of the current study is presented, implications for clinical practice are discussed and suggestions are made for future research.
15

Recognition of facial action units from video streams with recurrent neural networks: a new paradigm for facial expression recognition

Vadapalli, Hima Bindu January 2011 (has links)
Philosophiae Doctor - PhD / This research investigated the application of recurrent neural networks (RNNs) for recognition of facial expressions based on the Facial Action Coding System (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of the action units (AUs) defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data, while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple-output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences were 82.75% and 7.61%, respectively, while classification using single static images yielded an RR and FAR of 79.47% and 9.22%, respectively. The better performance with image sequences can be attributed to the ability of RNNs, as stated above, to extract knowledge from time-series data. Subsequent research then benchmarked dimensionality reduction, feature selection and network optimization techniques, in order to improve on the performance obtained with image sequences. Results showed that an optimized network, using weight decay, gave the best RR and FAR of 85.38% and 6.24%, respectively. The next study examined the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple-output RNN. Results indicated that high inter-AU correlations do in fact help classification models gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time, which suggests the need for a larger database of AUs providing both individual AUs and AU combinations for further investigation. The final part of this research investigated the use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with an RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant compared with the use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that the use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
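A minimal sketch of the difference-image idea from the final study, assuming the first frame of a clip is (approximately) neutral; the Gabor filtering and RNN stages are omitted, and array shapes are illustrative:

```python
import numpy as np

def difference_sequence(frames):
    """frames: (T, H, W) grayscale clip; returns (T-1, H, W) differences
    against the first frame, near zero wherever nothing moved."""
    ref = frames[0].astype(np.float32)
    return np.stack([f.astype(np.float32) - ref for f in frames[1:]])

clip = np.random.randint(0, 256, (10, 64, 64), dtype=np.uint8)
print(difference_sequence(clip).shape)  # (9, 64, 64)
```

Subtracting a reference frame suppresses static appearance (identity, lighting) and keeps only the moving pixels, which is the noise and feature reduction the abstract describes.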
16

Reconnaissance d'états émotionnels par analyse visuelle du visage et apprentissage machine / Recognition of emotional states by visual facial analysis and machine learning

Lekdioui, Khadija 29 December 2018 (has links)
In face-to-face settings, an act of communication includes verbal and emotional expressions. From observation, diagnosis and identification of an individual's emotional state, the interlocutor can undertake actions that influence the quality of the communication. In this regard, we aim to improve the way individuals perceive their exchanges by enriching textual computer-mediated communication with the emotions felt by the collaborators. To do this, we propose to integrate a real-time emotion recognition system (for joy, fear, surprise, anger, disgust, sadness and the neutral state) into the learning platform "Moodle", based on the analysis of the distant learner's facial expressions during collaborative activities. Facial expression recognition proceeds in three steps. First, the face and its components (eyebrows, nose, mouth, eyes) are detected from the configuration of facial landmarks. Second, a combination of heterogeneous descriptors is used to extract facial features. Finally, a classifier assigns these features to one of the six predefined emotions or the neutral state. The performance of the proposed system is assessed on public databases of posed and spontaneous facial expressions such as Cohn-Kanade (CK), Karolinska Directed Emotional Faces (KDEF) and Facial Expressions and Emotion Database (FEED).
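A sketch of that three-step pipeline with illustrative stand-ins: landmarks are assumed given, an LBP histogram stands in for the thesis's heterogeneous descriptor combination, and an SVM stands in for the classifier; all names and sizes are assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def patch_descriptor(patch, P=8, R=1):
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def face_descriptor(gray, landmarks, half=8):
    """Concatenate descriptors of square patches centred on each landmark."""
    return np.concatenate([
        patch_descriptor(gray[y - half:y + half, x - half:x + half])
        for (x, y) in landmarks
    ])

rng = np.random.default_rng(1)
landmarks = [(20, 20), (44, 20), (32, 34), (32, 48)]  # stand-ins for real points
X = [face_descriptor(rng.integers(0, 256, (64, 64)).astype(np.uint8), landmarks)
     for _ in range(40)]
y = rng.integers(0, 7, size=40)  # six emotions + the neutral state
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([X[0]]))
```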
17

Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics

Ali, Afiya. January 2007 (has links)
Thesis (M.Soc.Sc. Psychology)--University of Waikato, 2007. / Title from PDF cover (viewed April 8, 2008). / Includes bibliographical references (p. 70-76).
18

A framework for investigating the use of face features to identify spontaneous emotions

Bezerra, Giuliana Silva 12 December 2014 (has links)
Emotion-based analysis has raised a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, facial analysis (either automatic or human-based) is the most commonly investigated subject, since this type of data can easily be collected and is well accepted in the literature as a metric for inferring emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair and so on), automatically obtaining accurate affective information from the face is very challenging. This work presents a framework which aims to analyse emotional experiences through naturally generated facial expressions. Our main contribution is a new 4-dimensional model to describe emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants of the experiment was different from that described after exposure to the eliciting stimulus, showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results point out that spontaneous facial reactions to emotions are very different from those in prototypic expressions, due to the lack of expressiveness in the latter.
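As a hedged illustration only, the 4-dimensional model might be represented as a record with one field per dimension; the field types and values below are assumptions, not the dissertation's formalization:

```python
from dataclasses import dataclass

@dataclass
class EmotionalExperience:
    appraisal: dict             # e.g. {"novelty": 0.8, "pleasantness": 0.2}
    facial_expression: dict     # e.g. detected AUs with intensities, {12: 0.7}
    mood: str                   # background mood reported before the stimulus
    subjective_experience: str  # self-reported label after the stimulus

sample = EmotionalExperience(
    appraisal={"novelty": 0.8, "pleasantness": 0.2},
    facial_expression={4: 0.6, 7: 0.4},
    mood="neutral",
    subjective_experience="fear",
)
print(sample)
```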
19

Methods for facial expression recognition with applications in challenging situations

Huang, X. (Xiaohua) 01 December 2014 (has links)
Abstract In recent years, facial expression recognition has become a useful way for computers to affectively understand the emotional state of human beings. Facial representation and facial expression recognition under unconstrained environments have been two critical issues for facial expression recognition systems. This thesis contributes to the research and development of facial expression recognition systems in two respects: first, feature extraction for facial expression recognition, and second, applications to challenging conditions. Spatial and temporal feature extraction methods are introduced to provide effective and discriminative features for facial expression recognition. The thesis begins with a spatial feature extraction method. This descriptor exploits magnitude information and improves the local quantized pattern using improved vector quantization; it also makes the statistical patterns domain-adaptive and compact. The thesis then discusses two spatiotemporal feature extraction methods. The first uses monogenic signal analysis as a preprocessing stage and extracts spatiotemporal features using the local binary pattern. The second extracts sparse spatiotemporal features using sparse cuboids and a spatiotemporal local binary pattern. Both methods increase the discriminative capability of the local binary pattern in the temporal domain. Building on these feature extraction methods, three practical conditions, namely illumination variations, facial occlusion and pose changes, are studied for applications of facial expression recognition. First, with a near-infrared imaging technique, a discriminative component-based single feature descriptor is proposed to achieve a high degree of robustness and stability to illumination variations. Second, occlusion detection is proposed to dynamically detect occluded face regions, and a novel system is further designed for effectively handling facial occlusion. Lastly, multi-view discriminative neighbor preserving embedding is developed to deal with pose changes, formulating multi-view facial expression recognition as a generalized eigenvalue problem. Experimental results on publicly available databases show the effectiveness of the proposed approaches.
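A simplified sketch of spatiotemporal LBP in the spirit of LBP-TOP (LBP on Three Orthogonal Planes), which the spatiotemporal local binary pattern mentioned above builds on; sampling only the three central planes is a simplification for brevity, since full LBP-TOP aggregates codes over all pixels:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(cube, P=8, R=1):
    """cube: (T, H, W) grayscale block -> concatenated histograms from the
    central XY, XT and YT planes."""
    T, H, W = cube.shape
    planes = [cube[T // 2], cube[:, H // 2, :], cube[:, :, W // 2]]
    hists = []
    for plane in planes:
        lbp = local_binary_pattern(plane.astype(np.uint8), P, R, method="uniform")
        h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)

clip = np.random.randint(0, 256, (16, 32, 32), dtype=np.uint8)
print(lbp_top(clip).shape)  # (3 * 10,) = (30,)
```

The XT and YT planes slice the video cube along time, which is what lets an otherwise purely spatial texture operator capture temporal dynamics.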
20

Robust facial expression recognition in the presence of rotation and partial occlusion

Mushfieldt, Diego January 2014 (has links)
Magister Scientiae - MSc / This research proposes an approach to recognizing facial expressions in the presence of rotations and partial occlusions of the face. The research is in the context of automatic machine translation of South African Sign Language (SASL) to English. The proposed method accurately recognizes frontal facial images at an average accuracy of 75%. It also achieves a high recognition accuracy of 70% for faces rotated up to 60°. It was also shown that the method is able to continue to recognize facial expressions even in the presence of full occlusions of the eyes, mouth and left/right sides of the face, with accuracy as high as 70% for occlusion of some areas. An additional finding was that both the left and the right sides of the face are required for recognition. In addition, the foundation was laid for a fully automatic facial expression recognition system that can accurately segment frontal or rotated faces in a video sequence.
