21 |
Facial Expression Recognition by Using Class Mean Gabor Responses with Kernel Principal Component Analysis / Chung, Koon Yin C. 16 April 2010 (has links)
No description available.
|
22 |
Reconhecimento automático de expressões faciais por dispositivos móveis (Automatic facial expression recognition on mobile devices) / Domingues, Daniel Chinen January 2014 (has links)
Advisor: Prof. Dr. Guiou Kobayashi / Master's dissertation - Universidade Federal do ABC, Graduate Program in Information Engineering, 2014. / Modern computing increasingly demands advanced forms of interaction with computers. The interface between humans and their mobile devices lacks more advanced methods, and automatic facial expression recognition would be one way to reach higher levels on this evolutionary scale. How human emotions are recognized, and what facial expressions convey in face-to-face communication, has served as a reference in the development of such computer systems; from this, three major challenges can be identified in implementing an expression analysis algorithm: locating the face in the image, extracting the relevant facial features, and classifying the emotional states. The best method for solving each of these strongly related sub-challenges determines the feasibility, efficiency, and relevance of a new expression analysis system embedded in portable devices. This study evaluates the feasibility of deploying an automatic, image-based facial expression recognition system on a mobile device, using Apple's iOS platform integrated with OpenCV, an open-source library widely used in the computer vision community. The Local Binary Pattern algorithm, as implemented by OpenCV, was chosen as the face tracking logic. The AdaBoost and Eigenface algorithms were adopted for feature extraction and emotion classification, respectively; both are also supported by the library. The Eigenface classification module required additional training in an environment with greater processing capacity, external to the mobile platform; afterwards, only the training file was exported and consumed by the prototype application. The study concluded that Local Binary Pattern is very robust to lighting variations and very efficient at face tracking, while AdaBoost and Eigenface achieved approximately 65% accuracy in emotion classification when only peak-expression images were used to train the module, a condition necessary to keep the training file at a size compatible with the storage available on devices of this class.
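To make the described pipeline concrete, here is a minimal sketch using OpenCV's Python bindings (the thesis itself targets iOS); the cascade path, image files, label scheme, and 100x100 crop size are illustrative assumptions, and the Eigenface recognizer requires an opencv-contrib build:

```python
import cv2
import numpy as np

# Face tracking with an LBP cascade; the XML path is illustrative
# (OpenCV ships lbpcascade_frontalface.xml among its data files).
detector = cv2.CascadeClassifier("lbpcascade_frontalface.xml")

# Eigenface classifier from the opencv-contrib "face" module.
recognizer = cv2.face.EigenFaceRecognizer_create()

# --- Offline training (run on a desktop, as the thesis does) ---
# Hypothetical peak-expression images, assumed pre-cropped to 100x100;
# labels index emotions (0 = happiness, 1 = sadness, ...).
train_faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ("happy.png", "sad.png")]
train_labels = np.array([0, 1])
recognizer.train(train_faces, train_labels)
recognizer.write("emotion_model.yml")  # only this training file ships with the app

# --- On-device classification ---
recognizer.read("emotion_model.yml")
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
    label, distance = recognizer.predict(face)
    print("emotion id:", label, "distance:", distance)
```

Training off-device and shipping only the exported model file mirrors the thesis's strategy for keeping on-device storage small.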
|
23 |
Robust recognition of facial expressions on noise degraded facial images / Sheikh, Munaf January 2011 (has links)
We investigate the use of noise-degraded facial images in the application of facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expressions in images with various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt and pepper noise, and speckle noise to noiseless facial images. Classifiers were trained with images without noise and then tested on the images with noise. Next, the classifiers were trained using images with noise, and then tested on both images that had noise and images that were noiseless. Finally, classifiers were tested on images while increasing the levels of salt and pepper noise in the test set. Our results reflected distinct degradation of recognition accuracy. We also discovered that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates to levels greater than would be achieved with normal, noiseless images. We attribute this effect to the Gaussian envelope component of Gabor filters being sympathetic to Gaussian-like noise, which is similar in variance to that of the Gabor filters. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to suggest how recognition rates would degrade further should more noise be added to the images.
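A minimal sketch of the noise-degradation step with NumPy; the noise levels are illustrative defaults, not the thesis's exact settings:

```python
import numpy as np

def add_noise(img, kind, amount=0.05, sigma=10.0):
    """Degrade a grayscale uint8 image with one of the noise types studied."""
    img = img.astype(np.float64)
    if kind == "gaussian":          # additive zero-mean Gaussian noise
        out = img + np.random.normal(0.0, sigma, img.shape)
    elif kind == "poisson":         # signal-dependent Poisson noise
        out = np.random.poisson(img)
    elif kind == "salt_pepper":     # impulse noise at density `amount`
        out = img.copy()
        mask = np.random.rand(*img.shape)
        out[mask < amount / 2] = 0
        out[mask > 1 - amount / 2] = 255
    elif kind == "speckle":         # multiplicative speckle noise
        out = img * (1.0 + np.random.normal(0.0, 0.1, img.shape))
    else:
        raise ValueError(kind)
    return np.clip(out, 0, 255).astype(np.uint8)
```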
|
24 |
Recognition Of Human Face Expressions / Ener, Emrah 01 September 2006 (links) (PDF)
In this study a fully automatic and scale-invariant feature extractor which does not require manual initialization or special equipment is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size, after which upper and lower facial templates are used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral expression image are used for expression classification. The performances of different classifiers are evaluated. The performance of the proposed feature extractor is also tested on sample video sequences: facial features are extracted in the first frame, and a KLT tracker is used for tracking the extracted features. Lost features are detected using face geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, a holistic method which analyses the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations, and the filtered images are combined to form Gabor jets. The dimensionality of the Gabor jets is decreased using Principal Component Analysis, and the performances of different classifiers on the low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
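The holistic branch can be sketched as follows; the filter-bank parameters (kernel size, wavelengths, eight orientations) and file names are assumptions, not the thesis's settings:

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def gabor_jet(gray, wavelengths=(4, 8, 16), orientations=8):
    """Concatenate responses of a Gabor filter bank into one feature vector."""
    responses = []
    for lambd in wavelengths:              # scales (wavelengths)
        for k in range(orientations):      # evenly spaced orientations
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel((31, 31), 4.0, theta, lambd, 0.5)
            responses.append(cv2.filter2D(gray, cv2.CV_64F, kernel).ravel())
    return np.concatenate(responses)

# Hypothetical pre-cropped face images; stack the jets and reduce with PCA.
faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ("img0.png", "img1.png")]
jets = np.stack([gabor_jet(f) for f in faces])
low_dim = PCA(n_components=min(2, len(jets))).fit_transform(jets)
```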
|
25 |
Decisional-Emotional Support System for a Synthetic Agent : Influence of Emotions in Decision-Making Toward the Participation of Automata in Society / Guerrero Razuri, Javier Francisco January 2015 (has links)
Emotion influences our actions, and this means that emotion has subjective decision value. The emotions of those affected by decisions, properly interpreted and understood, provide feedback on actions and, as such, serve as a basis for decisions. Accordingly, "affective computing" represents a wide range of technological opportunities for implementing emotions to improve human-computer interaction; it also includes insights, across a range of contexts in the computational sciences, into how we can design computer systems that communicate with humans and recognize their emotional states. Today, emotional systems such as software-only agents and embodied robots seem to improve every day at managing large volumes of information, yet they remain emotionally incapable of reading our feelings and reacting to them. From a computational viewpoint, technology has made significant steps in determining how an emotional behavior model could be built; such a model is intended to be used for the purpose of intelligent assistance and support to humans. Human emotions are engines that allow people to generate useful responses to the current situation, taking into account the emotional states of others. Recovering the emotional cues emanating from the natural behavior of humans, such as facial expressions and bodily kinetics, could help to develop systems that recognize, interpret, process, and simulate human emotions, and base decisions on them. Currently, there is a need to create emotional systems able to develop an emotional bond with users, reacting emotionally to encountered situations and assisting users to make their daily lives easier. Handling emotions and their influence on decisions can improve human-machine communication in a broader sense. The present thesis strives to provide an emotional architecture applicable to an agent, based on a group of decision-making models influenced by external emotional information provided by humans and acquired through a group of classification techniques from machine learning. The system can form positive bonds with the people it encounters when proceeding according to their emotional behavior. The agent embodied in the emotional architecture will interact with a user, facilitating its adoption in application areas such as caregiving, to provide emotional support to the elderly. The agent's architecture uses an adversarial structure based on an Adversarial Risk Analysis framework with a decision-analytic flavor, including models that forecast a human's behavior and its impact on the surrounding environment. The agent perceives its environment and the actions performed by an individual, which constitute the resources needed to execute the agent's decision during the interaction. The agent's decision, carried out within the adversarial structure, is also affected by the emotional-state information provided by a classifiers-ensemble system, giving rise to a "decision with emotional connotation" belonging to the group of affective decisions. The performance of different well-known classifiers was compared in order to select the best results and build the ensemble system, based on feature selection methods introduced to predict the emotion. These methods draw on facial expressions, bodily gestures, and speech, and achieved satisfactory accuracy well before the final system was assembled.
/ At the time of the doctoral defense, the following paper was unpublished and had the following status: Paper 8: Accepted.
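As a rough illustration of the classifiers-ensemble described above, a soft-voting ensemble over well-known classifiers might look like this; the features, labels, and choice of base classifiers are placeholders, not the thesis's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy placeholders for fused face/gesture/speech feature vectors and labels.
X = np.random.rand(60, 24)
y = np.random.randint(0, 6, 60)      # e.g. six basic emotions

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",                   # average the class probabilities
)
ensemble.fit(X, y)
emotion = ensemble.predict(X[:1])    # emotional state fed to the decision model
```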
|
27 |
3D face analysis : landmarking, expression recognition and beyond / Zhao, Xi 13 September 2010 (has links) (PDF)
This Ph.D thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and in particular is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides a prior knowledge of the location of face landmarks, which is required by many face analysis methods, such as the face segmentation and feature extraction used for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of the beliefs of all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric properties of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
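A heavily simplified stand-in for the "maximum belief" decision in such a network, with purely illustrative conditional probability tables and only two discretized facial features (the thesis's BBN is richer, modelling subjects, expressions, and features jointly):

```python
import numpy as np

# belief(e | f1..fn) is proportional to P(e) * prod_i P(fi | e).
expressions = ["happiness", "sadness", "surprise"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])

# P(feature_state | expression): rows = expressions, columns = feature states.
# These tables are invented for illustration only.
cpt_mouth = np.array([[0.7, 0.2, 0.1],   # happiness: raised corners likely
                      [0.1, 0.2, 0.7],
                      [0.2, 0.6, 0.2]])
cpt_brows = np.array([[0.5, 0.4, 0.1],
                      [0.2, 0.3, 0.5],
                      [0.1, 0.2, 0.7]])

def recognize(mouth_state, brow_state):
    """Return the expression whose posterior belief is maximal."""
    belief = prior * cpt_mouth[:, mouth_state] * cpt_brows[:, brow_state]
    belief /= belief.sum()
    return expressions[int(np.argmax(belief))], belief

print(recognize(mouth_state=0, brow_state=0))
```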
|
28 |
A Comparison of Machine Learning Techniques for Facial Expression Recognition / Deaney, Mogammat Waleed January 2018 (links)
Magister Scientiae - MSc (Computer Science) / A machine translation system that can convert South African Sign Language (SASL) video to audio or text, and vice versa, would be beneficial to people who use SASL to communicate. Five fundamental parameters are associated with sign language gestures: hand location, hand orientation, hand shape, hand movement, and facial expressions. The aim of this research is to recognise facial expressions and to compare both feature descriptors and machine learning techniques. This research used the Design Science Research (DSR) methodology, and a DSR artefact was built which consisted of two phases. The first phase compared local binary patterns (LBP), compound local binary patterns (CLBP) and histograms of oriented gradients (HOG) using support vector machines (SVM). The second phase compared the SVM to artificial neural networks (ANN) and random forests (RF) using the most promising feature descriptor from the first phase, HOG. Performance was evaluated in terms of accuracy, robustness to classes, robustness to subjects, and ability to generalise, on both the Binghamton University 3D facial expression (BU-3DFE) and Cohn-Kanade (CK) datasets. The first evaluation phase showed HOG to be the best feature descriptor, followed by CLBP and LBP. The second showed ANN to be the best choice of machine learning technique, closely followed by the SVM and RF.
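The second phase can be sketched with scikit-image's HOG descriptor and the three scikit-learn classifiers; the toy data stands in for the BU-3DFE and CK faces, and all hyperparameters here are assumptions:

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Toy stand-in for cropped face images: 36 samples, 6 per emotion class.
images = np.random.rand(36, 64, 64)
labels = np.repeat(np.arange(6), 6)

# HOG feature vectors for every image.
X = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for im in images])

# Compare SVM, ANN, and RF by cross-validated accuracy.
for name, clf in [("SVM", SVC()),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
                  ("RF", RandomForestClassifier(n_estimators=100))]:
    scores = cross_val_score(clf, X, labels, cv=3)
    print(f"{name}: {scores.mean():.3f} accuracy")
```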
|
29 |
Multi-modal expression recognition / Chandrapati, Srivardhan January 1900 (has links)
Master of Science / Department of Mechanical and Nuclear Engineering / Akira T. Tokuhiro / Robots will eventually become common everyday items. Before this becomes a reality, however, robots will need to learn to be socially interactive. Since humans communicate much more information through expression than through actual spoken words, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that make sure the operator is alert at all times, or for psycho-analysis and cognitive studies. Emotional expressions are not always deliberate and can also occur without the person being aware of them. Recognizing these involuntary expressions provides insight into the person's thoughts and state of mind, and could serve as an indicator of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in face and voice. This is achieved by extracting features from each of the modalities using signal processing techniques, and then classifying these features with the help of artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose; this is done using image processing techniques such as the seeded region growing algorithm, particle swarm optimization, and general properties of the feature being extracted. In contrast, the features of interest in speech are pitch, formant frequencies, and the mel spectrum, along with statistical properties such as the mean and median and the rate of change of these properties. These features are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that can read an audio and/or video file and perform emotion recognition on the face in the video and the speech in the audio channel. The features extracted from the face and voice are independently classified into emotions using two separate feed-forward artificial neural networks. The toolbox then presents the output of the artificial neural networks from one or both modalities on a synchronized time scale. One interesting result of this research is the consistent misclassification of facial expressions between the two databases, suggesting a cultural basis for this confusion. The addition of the voice component was shown to partially improve classification.
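A sketch of the speech branch using librosa as a stand-in for the signal-processing steps named above (pitch, mel spectrum and its rate of change, LPC-based formants); the file name, frame choice, and model orders are assumptions, not the thesis's implementation:

```python
import librosa
import numpy as np

# Load a speech clip (path hypothetical).
y, sr = librosa.load("utterance.wav", sr=16000)

# Pitch track via the YIN estimator.
f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)

# Mel spectrum plus its frame-to-frame rate of change.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=26)
mel_delta = librosa.feature.delta(mel)

# Rough formant estimates from LPC roots (one frame, order-12 model).
a = librosa.lpc(y[:1024], order=12)
roots = [r for r in np.roots(a) if np.imag(r) > 0]
formants = sorted(np.angle(roots) * sr / (2 * np.pi))[:3]

# Summary statistics of the kind fed to a feed-forward neural network.
features = np.concatenate([[np.mean(f0), np.median(f0)],
                           mel.mean(axis=1), mel_delta.mean(axis=1),
                           formants])
```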
|