51 |
Spatio-temporal representation and analysis of facial expressions with varying intensities. Sariyanidi, Evangelos. January 2017.
Facial expressions convey a wealth of information about our feelings, personality and mental state. In this thesis we seek efficient ways of representing and analysing facial expressions of varying intensities. Firstly, we analyse state-of-the-art systems by decomposing them into their fundamental components, in an effort to understand which practices are common to successful systems. Secondly, we address the problem of sequence registration, which emerged as an open issue in our analysis. The encoding of the (non-rigid) motions generated by facial expressions is facilitated when the rigid motions caused by irrelevant factors, such as camera movement, are eliminated. We propose a sequence registration framework based on pre-trained regressors of Gabor motion energy. Comprehensive experiments show that the proposed method achieves very high registration accuracy even under difficult illumination variations. Finally, we propose an unsupervised representation learning framework for encoding the spatio-temporal evolution of facial expressions. The proposed framework is inspired by the Facial Action Coding System (FACS), which predates computer-based analysis. FACS encodes an expression in terms of localised facial movements and assigns an intensity score to each movement. The framework we propose mimics those two properties of FACS. Specifically, we learn from data a linear transformation that approximates the facial expression variation in a sequence as a weighted sum of localised basis functions, where the weight of each basis function relates to movement intensity. We show that the proposed framework provides a plausible description of facial expressions and leads to state-of-the-art performance in recognising expressions across intensities, from full-blown expressions to micro-expressions.
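To make the localised-basis idea above concrete, here is a minimal sketch, not taken from the thesis: sparse non-negative matrix factorisation stands in for the learned linear transformation, and all data and parameter values are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder data: each row is a vectorised difference image
# (expression frame minus a neutral reference), clipped to be non-negative.
rng = np.random.default_rng(0)
frames = rng.random((200, 64 * 64))      # 200 frames of a 64x64 face region
neutral = frames.mean(axis=0)
variation = np.clip(frames - neutral, 0, None)

# Learn a linear transformation: variation ~= W @ H, where rows of H act as
# basis functions (the L1 penalty encourages spatially localised components)
# and W holds per-frame weights that play the role of movement intensities.
model = NMF(n_components=16, init="nndsvda", l1_ratio=0.8,
            alpha_W=0.01, alpha_H=0.01, max_iter=500, random_state=0)
W = model.fit_transform(variation)       # (200, 16) intensity-like weights
H = model.components_                    # (16, 4096) localised bases

# Reconstruct one frame's variation as a weighted sum of the bases.
recon = W[0] @ H
print("reconstruction error:", np.linalg.norm(variation[0] - recon))
```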
|
52 |
Assessment of fun from the analysis of facial images / Avaliação de diversão a partir da análise de imagens faciais. Vieira, Luiz Carlos. 16 May 2017.
This work investigates the feasibility of assessing fun from the computational analysis of facial images captured with low-cost webcams alone. The study and development were based on a set of videos recording the faces of voluntary participants as they played three different popular independent games (horror, action/platform and puzzle). The participants also self-reported their levels of frustration, immersion and fun in the discrete range [0, 4], and answered the well-known Game Experience Questionnaire (GEQ). Faces were located in the collected videos by a face-tracking system built from existing implementations of the Viola-Jones algorithm for face detection and a variation of the Active Appearance Model (AAM) algorithm for tracking facial landmarks. Fun was represented in terms of the prototypic emotions and the levels of frustration and immersion. The prototypic emotions were detected with a Support Vector Machine (SVM) trained on existing datasets; the frustration, immersion and fun levels were detected with a Structured Perceptron trained on the collected data and the self-reported levels of each affect, as well as on estimates of the gradient of the face-to-camera distance and the blink rate measured in blinks per minute. The evaluation was supported by a comparison of the self-reported levels of each affect with the answers to the GEQ, and performed with precision and recall measurements obtained in cross-validation tests. The frustration classifier could not achieve precision above chance, mainly because the collected data did not have enough variability in the reported levels of this affect. The immersion classifier achieved better precision, particularly when trained with the estimated blink rate, with a median value of 0.42 and an interquartile range (IQR) from 0.12 to 0.73. The fun classifier, trained with the detected prototypic emotions and the reported levels of frustration and immersion, achieved the best precision scores, with a median of 0.58 and an IQR from 0.28 to 0.84. All classifiers suffered from low recall, which was caused by difficulties in tracking the landmarks and by the emotion classifier being unbalanced, since the existing datasets contained more samples of neutral and happy expressions. Nonetheless, a strong indication of the feasibility of assessing fun from recorded videos lies in the pattern of variation of the predicted levels: the immersion and fun classifiers (though not the frustration classifier) were able to predict increases and decreases in the respective affect levels with an average error margin close to 1.
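As an illustration of the detection-plus-classification pipeline described above, here is a minimal sketch, not the author's actual code: it uses OpenCV's stock Haar cascade for Viola-Jones detection and a generic scikit-learn SVM, and the file names, feature choice and labels are placeholders.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Viola-Jones face detection with OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_patch(frame, size=(48, 48)):
    """Return the first detected face as a flattened grayscale patch, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], size).ravel()

# Placeholder training data standing in for patches and prototypic-emotion
# labels (e.g. 0=neutral, 1=happiness, ...) from an existing labelled dataset.
rng = np.random.default_rng(0)
X_train = rng.random((100, 48 * 48))
y_train = rng.integers(0, 6, 100)
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

frame = cv2.imread("webcam_frame.png")   # placeholder input frame
patch = face_patch(frame)
if patch is not None:
    print("predicted emotion id:", clf.predict(patch[None, :])[0])
```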
|
53 |
Reconhecimento de emoções faciais como candidato a marcador endofenótipo no transtorno bipolar / Recognition of facial emotion as a candidate endophenotype marker in bipolar disorder. Fernandes, Francy de Brito Ferreira. 07 April 2014.
Bipolar disorder (BD) is a severe, chronic and recurrent disorder with a high degree of social and occupational impairment. Patients with BD present deficits in cognitive functions such as attention, verbal working memory and executive functioning. Recent studies have suggested that some of these cognitive functions may be candidate endophenotypes for BD. Patients with BD also present deficits in the recognition of facial emotions, but the role of this cognitive function as a candidate endophenotype for BD has been little studied. The aim of this study was to evaluate the existence of deficits in emotion recognition in patients with BD and in their first-degree relatives compared to a group of healthy controls. Twenty-three patients with BD type I, 22 first-degree relatives of these patients, and 27 healthy controls were studied. The instruments used in the neuropsychological evaluations were the Pennsylvania Emotion Recognition Battery (PENNCNP) and the Vocabulary and Matrix Reasoning subtests of the Wechsler Abbreviated Scale of Intelligence (WASI). Analysis of variance (ANOVA) was performed for variables that followed a normal distribution, and the Kruskal-Wallis test for the others. The results showed a statistically significant difference among the three groups in the number of correct answers for the recognition of fear (p = 0.01). Patients with BD gave fewer correct responses for fear than their relatives and healthy controls. There was no difference in the recognition of sadness, happiness, anger or neutral expressions. There was also a statistically significant difference among the three groups in the average response time for happiness (p = 0.00). It follows that disturbances in the recognition of emotions in faces may not be candidate endophenotypes for BD type I.
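The analysis strategy described above (ANOVA when normality holds, Kruskal-Wallis otherwise) can be sketched with SciPy; the group scores below are made up, since the abstract reports only p-values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical correct-answer counts for fear recognition in the three
# groups; the values are placeholders, not the study's data.
patients  = rng.normal(28, 4, 23)   # 23 patients with BD type I
relatives = rng.normal(31, 4, 22)   # 22 first-degree relatives
controls  = rng.normal(32, 4, 27)   # 27 healthy controls

# Test each group for normality, then choose ANOVA or Kruskal-Wallis.
normal = all(stats.shapiro(g).pvalue > 0.05
             for g in (patients, relatives, controls))
if normal:
    stat, p = stats.f_oneway(patients, relatives, controls)
    test = "one-way ANOVA"
else:
    stat, p = stats.kruskal(patients, relatives, controls)
    test = "Kruskal-Wallis"
print(f"{test}: statistic={stat:.2f}, p={p:.3f}")
```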
|
54 |
Relational Satisfaction and Perceptions of Nonverbal Communication during Conflict. Wheeler, Savannah V. 01 May 2014.
The goal of this research was to examine the relationship between relational satisfaction and the interpretation of nonverbal cues during conflict. Specifically, we hypothesized that participants who reported being dissatisfied with their closest relationship would be more likely to make negative interpretations of facial expressions during a conflict episode. Participants completed a survey that measured their relationship status, level of satisfaction, and interpretations of descriptions of facial expressions made during a series of conflict scenarios. Developing a better understanding of the role of nonverbal behaviors may help encourage healthier conflict management.
|
55 |
Social Soul. AlShammari, Norah. 01 January 2018.
Twitter has over 313 million users, with 500 million tweets produced each day. Society’s growing dependence on the internet for self-expression shows no sign of abating. However, recent research warns that social media perpetuates loneliness, caused by reduced face-to-face interaction. My thesis analyzes and demonstrates the important role facial expressions play in a conversation’s progress, impacting how people process and relate to what is being said. My work critically assesses communication problems associated with Twitter. By isolating and documenting expressive facial reactions to a curated selection of tweets, the exhibition creates a commentary on our contemporary digital existence, specifically articulating how use of social media limits basic social interaction.
|
56 |
Support Vector Machines for Classification applied to Facial Expression Analysis and Remote Sensing. Jottrand, Matthieu. January 2005.
The subject of this thesis is the application of Support Vector Machines to two totally different applications: facial expression recognition and remote sensing.

The basic idea of kernel algorithms is to map input data into a higher-dimensional space, the feature space, in which linear operations on the data are easier to perform. These operations in the feature space can be expressed in terms of the input data thanks to kernel functions. A Support Vector Machine is a classifier that uses this kernel method: on the basis of examples of the different classes, it computes hyperplanes in the feature space that separate the classes. The hyperplanes in the feature space correspond to non-linear surfaces in the input space.

Concerning facial expressions, the aim is to train and test a classifier able to recognise, from pictures of faces, which of six emotions (anger, disgust, fear, joy, sadness, and surprise) is expressed by the person in the picture. In this application, each picture has to be seen as a point in an N-dimensional space, where N is the number of pixels in the image.

The second application is the detection of camouflage nets hidden in vegetation using a hyperspectral image taken from an aircraft. In this case the classification is computed for each pixel, represented by a vector whose elements are the values of the different frequency bands at that pixel.
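As a hedged illustration of the approach this abstract sketches, the following minimal example (placeholder data, not the thesis experiments) treats each image as a point in pixel space and trains an RBF-kernel SVM, whose hyperplanes in the implicit feature space correspond to non-linear decision surfaces over pixels:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Placeholder stand-in for the face data: each image is a point in an
# N-dimensional space with N = number of pixels (here 32 x 32 = 1024).
rng = np.random.default_rng(2)
X = rng.random((300, 32 * 32))
y = rng.integers(0, 6, 300)       # six emotion classes (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# The RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2) implicitly maps inputs
# into a high-dimensional feature space; linear separation there corresponds
# to non-linear surfaces in the input (pixel) space.
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```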
|
57 |
The Social World Through Infants’ Eyes: How Infants Look at Different Social Figures. Schmitow, Clara A. January 2012.
This thesis studies how infants actively look at different social figures: parents and strangers. To study infants’ looking behavior in “live” situations, new methods of recording looking behavior were tested. Study 1 developed such a method: a head-mounted camera. The camera was calibrated for a number of angles and then used to measure how infants look at faces and objects in two “live” situations, a conversation and a joint action. It showed high reliability in horizontal positions and proved usable in a number of “live” situations with infants from 6 to 14 months of age. In Study 2, the head-mounted camera and a static camera were used in a “live” ambiguous situation to study infants’ preferences for referring to, and using information from, parents and strangers. The results of Experiment 1 of Study 2 showed that if no information is provided in ambiguous situations in the lab, infants at 10 months of age look more at the experimenter than at the parent. Further, Experiment 2 of Study 2 showed that infants also used more of the emotional information provided by the experimenter than by the parent to regulate their behavior. In Study 3, looking behavior was analyzed in detail as infants looked at pictures of their parents’ and strangers’ emotional facial expressions, with corneal eye tracking used to record looking. In this study, the influence of identity, gender, emotional expression and parental leave on looking behavior was analyzed. The results indicated that identity and experience of looking at others influence how infants discriminate emotions in pictures of facial expressions. Fourteen-month-old infants whose parents had both taken parental leave discriminated more emotional expressions in strangers than infants with only one parent on leave. Further, they reacted with larger pupil dilation toward the parent who was currently on parental leave than toward the parent who was not. Finally, fearful expressions were more broadly scanned than neutral or happy facial expressions. The results of these studies indicate that infants discriminate between mothers’, fathers’ and strangers’ emotional facial expressions and use other people’s expressions to regulate their behavior. In addition, a new method, the head-mounted camera, was shown to capture infants’ looking behavior in “live” situations.
|
58 |
Ethnic and Racial Differences in Emotion Perception. Cheng, Linda. 10 October 2007.
This study analyzed racial differences in the way African Americans and Caucasians perceive emotion from facial expressions and tone of voice. Participants were African American (n = 25) and Caucasian (n = 26) college students. The study utilized 56 images of African American and Caucasian faces, balanced for race and sex, from the NimStim stimulus set (Tottenham, 2006), as well as visual and auditory stimuli from the DANVA2. Participants were asked to judge the emotion conveyed by each stimulus. The BFRT, the WASI, and the Seashore Rhythm Test were used as exclusionary criteria. In general, the study found few differences in the way African Americans and Caucasians perceived emotion, though racial differences emerged in interaction with other factors. The results supported the theory of universality of emotion perception and expression, though social influences affecting emotion perception remain a possibility. Areas for future research are discussed.
|
59 |
Interpreting Faces with Neurally Inspired Generative Models. Susskind, Joshua Matthew. 31 August 2011.
Becoming a face expert takes years of learning and development. Many research programs are devoted to studying face perception, particularly given its prerequisite role in social interaction, yet its fundamental neural operations are poorly understood. One reason is that there are many possible explanations for a change in facial appearance, such as lighting, expression, or identity. Despite general agreement that the brain extracts multiple layers of feature detectors arranged into hierarchies to interpret causes of sensory information, very little work has been done to develop computational models of these processes, especially for complex stimuli like faces. The studies presented in this thesis used nonlinear generative models developed within machine learning to solve several face perception problems. Applying a deep hierarchical neural network, we showed that it is possible to learn representations capable of perceiving facial actions, expressions, and identities better than similar non-hierarchical architectures. We then demonstrated that a generative architecture can be used to interpret high-level neural activity by synthesizing images in a top-down pass. Using this approach we showed that deep layers of a network can be activated to generate faces corresponding to particular categories. To facilitate training models to learn rich and varied facial features, we introduced a new expression database with the largest number of labeled faces collected to date. We found that a model trained on these images learned to recognize expressions comparably to human observers. Next we considered models trained on pairs of images, making it possible to learn how faces change appearance to take on different expressions. Modeling higher-order associations between images allowed us to efficiently match images of the same type according to a learned pairwise similarity measure. These models performed well on several tasks, including matching expressions and identities, and demonstrated performance superior to competing models. In sum, these studies showed that neural networks that extract highly nonlinear features from images using architectures inspired by the brain can solve difficult face perception tasks with minimal guidance by human experts.
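The top-down synthesis idea in this abstract can be hinted at with a single-layer restricted Boltzmann machine; the sketch below is an assumption made for brevity (the thesis stacks such layers into deep hierarchies) and uses scikit-learn's BernoulliRBM on placeholder data.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(3)
# Placeholder binarised face images, one flattened 16x16 image per row.
X = (rng.random((500, 16 * 16)) > 0.5).astype(float)

# Train one generative layer; a deep model would stack several of these.
rbm = BernoulliRBM(n_components=64, learning_rate=0.05,
                   n_iter=20, random_state=0).fit(X)

# Top-down synthesis: start from noise and run Gibbs sampling, alternating
# between hidden and visible units, to draw a sample from the model.
v = (rng.random((1, 16 * 16)) > 0.5).astype(float)
for _ in range(1000):
    v = rbm.gibbs(v)              # one full Gibbs step: v -> h -> v
print("generated sample mean activation:", v.mean())
```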
|
60 |
An ERP Study of Responses to Emotional Facial Expressions: Morphing Effects on Early-Latency Valence Processing. Ravich, Zoe. 01 April 2012.
Early-latency theories of emotional processing state that at least coarse monitoring of the emotional valence (a pleasure-displeasure continuum) of facial expressions should be both rapid and highly automated (LeDoux, 1995; Russell, 1980). Research has largely substantiated early-latency differential processing of emotional versus non-emotional facial expressions; however, the effect of valence on early-latency processing of emotional facial expression remains unclear. In an effort to delineate the effects of valence on early-latency emotional facial expression processing, the current investigation compared ERP responses to positive (happy and surprise), neutral, and negative (afraid and sad) basic facial expression photographs as well as to positive (happy-surprise), neutral (afraid-surprise, happy-afraid, happy-sad, sad-surprise), and negative (sad-afraid) morph facial expression photographs during a valence-rating task. Morphing manipulations have been shown to decrease the familiarity of facial patterns and thus preclude any overlearned responses to specific facial codes. Accordingly, it was proposed that morph stimuli would disrupt more detailed emotional identification to reveal a valence response independent of a specific identifiable emotion (Balconi & Lucchiari, 2005; Schweinberger, Burton & Kelly, 1999). ERP results revealed early-latency differentiation between positive, neutral, and negative morph facial expressions approximately 108 milliseconds post-stimulus (P1) within the right electrode cluster; negative morph facial expressions continued to elicit significantly smaller ERP amplitudes than other valence categories approximately 164 milliseconds post-stimulus (N170). Consistent with previous imaging research on emotional facial expression processing, source localization revealed substantial dipole activation within regions of the mesolimbic dopamine system. Thus, these findings confirm rapid valence processing of facial expressions and suggest that negative valence processing may continue to modulate subsequent structural facial processing.
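The morph stimuli described above are typically built by blending photographs of two basic expressions; here is a minimal pixel-blend sketch (real morphing software also warps facial-landmark geometry before blending, which this omits, and the file names are placeholders):

```python
import numpy as np
from PIL import Image

def blend_morph(path_a, path_b, alpha=0.5):
    """Blend two aligned face photographs into a morph-like composite.

    This is a pure pixel cross-fade; true expression morphs also warp
    landmark geometry before blending, which is omitted here for brevity.
    """
    a = np.asarray(Image.open(path_a).convert("L"), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=float)
    out = alpha * a + (1.0 - alpha) * b
    return Image.fromarray(out.astype(np.uint8))

# Placeholder file names for two expression photographs of the same model.
morph = blend_morph("happy.png", "surprise.png", alpha=0.5)
morph.save("happy_surprise_morph.png")
```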
|