About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Interpreting Faces with Neurally Inspired Generative Models

Susskind, Joshua Matthew 31 August 2011
Becoming a face expert takes years of learning and development. Many research programs are devoted to studying face perception, particularly given its prerequisite role in social interaction, yet its fundamental neural operations are poorly understood. One reason is that there are many possible explanations for a change in facial appearance, such as lighting, expression, or identity. Despite general agreement that the brain extracts multiple layers of feature detectors arranged into hierarchies to interpret causes of sensory information, very little work has been done to develop computational models of these processes, especially for complex stimuli like faces. The studies presented in this thesis used nonlinear generative models developed within machine learning to solve several face perception problems. Applying a deep hierarchical neural network, we showed that it is possible to learn representations capable of perceiving facial actions, expressions, and identities, better than similar non-hierarchical architectures. We then demonstrated that a generative architecture can be used to interpret high-level neural activity by synthesizing images in a top-down pass. Using this approach we showed that deep layers of a network can be activated to generate faces corresponding to particular categories. To facilitate training models to learn rich and varied facial features, we introduced a new expression database with the largest number of labeled faces collected to date. We found that a model trained on these images learned to recognize expressions comparably to human observers. Next we considered models trained on pairs of images, making it possible to learn how faces change appearance to take on different expressions. Modeling higher-order associations between images allowed us to efficiently match images of the same type according to a learned pairwise similarity measure. These models performed well on several tasks, including matching expressions and identities, and demonstrated performance superior to competing models. In sum, these studies showed that neural networks that extract highly nonlinear features from images using architectures inspired by the brain can solve difficult face perception tasks with minimal guidance by human experts.
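As a rough illustration of the top-down synthesis described above, the sketch below pushes a one-hot, high-level activity vector back through a stack of stand-in weight matrices; the layer sizes, weights, and sigmoid activations are illustrative assumptions, not the thesis's actual model.

```python
# A rough sketch of a top-down generative pass through a deep network,
# assuming already-trained weight matrices (random stand-ins here); the
# architecture is an illustrative assumption, not the thesis's model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(1024, 512))  # image layer <- hidden 1
W2 = rng.normal(scale=0.1, size=(512, 256))   # hidden 1    <- hidden 2
W3 = rng.normal(scale=0.1, size=(256, 10))    # hidden 2    <- top (category) layer

top = np.zeros(10)
top[3] = 1.0                  # activate one high-level "category" unit
h2 = sigmoid(W3 @ top)        # top-down pass, layer by layer
h1 = sigmoid(W2 @ h2)
image = sigmoid(W1 @ h1)      # synthesized image as a flat 32x32 vector
print(image.reshape(32, 32).shape)
```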
62

Långsammare igenkänning av emotioner i ansiktsuttryck hos individer med utmattningssyndrom : En pilotstudie [Slower recognition of emotions in facial expressions in individuals with exhaustion disorder: A pilot study]

Löfdahl, Tomas, Wretman, Mattias January 2012
The aim of this pilot study was to generate hypotheses about whether and how exhaustion disorder (utmattningssyndrom) affects the ability to recognize emotions in facial expressions. A group of patients with exhaustion disorder was compared with a matched healthy control group (N = 14). The groups were assessed with a computer-based test consisting of colour images of authentic facial expressions that changed gradually, in steps of 10%, from a neutral expression into one of the five basic emotions: anger, disgust, fear, happiness, and sadness. Performance was measured in terms of recognition accuracy and response speed. The results showed that the patient group responded significantly more slowly than the control group across all emotions in the test. No emotion-specific differences, and no differences in recognition accuracy, could be demonstrated between the groups. The causes of the discrepancy in response speed were discussed in terms of four possible explanations: face-perception function, visual attention, self-focused attention, and carefulness/worry. Recommendations were made for future research to explore these areas further.
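At its simplest, a graded stimulus sequence like the one described can be approximated by linearly cross-fading between an aligned neutral and emotional image; real morph stimuli additionally warp facial geometry, so the sketch below only illustrates the 10%-step blending.

```python
# Minimal sketch of graded expression stimuli in 10% steps, assuming a
# simple linear cross-fade between two aligned images (real morphs also
# warp facial geometry). The arrays are stand-ins for actual photographs.
import numpy as np

def morph_sequence(neutral, emotional, steps=10):
    """Yield blends from 0% to 100% emotional in equal steps."""
    for alpha in np.linspace(0.0, 1.0, steps + 1):
        yield (1.0 - alpha) * neutral + alpha * emotional

neutral = np.zeros((256, 256, 3))    # stand-in for a neutral face image
emotional = np.ones((256, 256, 3))   # stand-in for a full-intensity expression
frames = list(morph_sequence(neutral, emotional))
print(len(frames))  # 11 blends: 0%, 10%, ..., 100%
```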
63

Support Vector Machines for Classification applied to Facial Expression Analysis and Remote Sensing

Jottrand, Matthieu January 2005
The subject of this thesis is the application of Support Vector Machines to two entirely different problems: facial expression recognition and remote sensing. The basic idea of kernel algorithms is to map input data into a higher-dimensional space, the feature space, in which linear operations on the data can be performed more easily. These operations in the feature space can be expressed in terms of the input data thanks to kernel functions. The Support Vector Machine is a classifier that uses this kernel method by computing, in the feature space and on the basis of examples of the different classes, hyperplanes that separate the classes. The hyperplanes in the feature space correspond to nonlinear surfaces in the input space. For facial expressions, the aim is to train and test a classifier able to recognise, from pictures of faces, which of six emotions (anger, disgust, fear, joy, sadness, and surprise) is expressed by the person in the picture. In this application, each picture is treated as a point in an N-dimensional space, where N is the number of pixels in the image. The second application is the detection of camouflage nets hidden in vegetation using a hyperspectral image taken from an aircraft. In this case the classification is computed for each pixel, represented by a vector whose elements are the different frequency bands of that pixel.
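A minimal sketch of this pixel-vector classification setup, using scikit-learn's SVC with an RBF kernel; the random arrays below stand in for a real face dataset, and the six integer labels stand for the six emotions.

```python
# Minimal sketch of SVM expression classification on flattened pixel
# vectors, assuming images are already cropped and aligned; the data is
# a random stand-in, not a real face set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((120, 48 * 48))    # 120 images, each flattened to N pixels
y = rng.integers(0, 6, size=120)  # one label per image for the six emotions

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel maps pixels into feature space
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # near chance on this random stand-in data
```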
64

Detection of facial expressions based on time-dependent morphological features

Bozed, Kenz Amhmed January 2011
Facial expression detection by a machine is a valuable topic for Human Computer Interaction and has been a study issue in behavioural science for some time. Recently, significant progress has been achieved in the machine analysis of facial expressions, but there is still interest in studying the area in order to extend its applications. This work investigates the theoretical concepts behind facial expressions and leads to the proposal of new algorithms for face detection and facial feature localisation, together with the design and construction of a prototype system to test these algorithms. The overall goal and motivation of this work is to introduce vision-based techniques able to detect and recognise facial expressions. In this context, a facial expression prototype system is developed that accomplishes facial segmentation (i.e. face detection and facial feature localisation), facial feature extraction, and feature classification. To detect a face, a new simplified algorithm is developed to detect and locate its presence against the background by exploiting skin colour properties, which are used to distinguish between face and non-face regions. This allows facial parts to be extracted from a face using elliptical and box regions whose geometrical relationships are then utilised to determine the positions of the eyes and mouth through morphological operations. The means and standard deviations of the segmented facial parts are then computed and used as features for the face. For images belonging to the same expression class, these features are fed to the K-means algorithm to compute the centroid of each class. The Euclidean distance is then computed between each feature point and its cluster centre in the same expression class. This determines how close a facial expression is to a particular class, and the distances can be used as observation vectors for a Hidden Markov Model (HMM) classifier. Thus, an HMM is built to evaluate the expression of a subject as belonging to one of six expression classes (joy, anger, surprise, sadness, fear, and disgust) using the distance features. To evaluate the proposed classifier, experiments are conducted on new subjects using 100 video clips that contain a mixture of expressions. An average successful detection rate of 95.6% is measured over a total of 9142 frames contained in the video clips. The proposed prototype system processes facial feature parts and yields improved facial expression detection results compared with using whole-face features as proposed by previous authors. This work has resulted in four contributions: the Ellipse Box Face Detection Algorithm (EBFDA), the Facial Features Distance Algorithm (FFDA), the facial feature extraction process, and the facial feature classification. These were tested and verified using the prototype system.
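A minimal sketch of the distance-feature step described above, assuming each frame has already been reduced to the means and standard deviations of its segmented facial parts; the array shapes and the per-class mean used as the centroid are illustrative assumptions.

```python
# Minimal sketch of distance features for the HMM classifier, assuming
# each frame is a vector of per-part means and standard deviations;
# shapes and names are illustrative assumptions.
import numpy as np

def class_centroid(features):
    """Centroid of all feature vectors belonging to one expression class."""
    return np.mean(features, axis=0)

def distance_features(features, centroid):
    """Euclidean distance of each frame to its class centroid; these
    distances serve as observation vectors for the HMM classifier."""
    return np.linalg.norm(features - centroid, axis=1)

rng = np.random.default_rng(0)
joy_frames = rng.normal(size=(50, 8))  # 50 frames x (mean, std of 4 parts)
centroid = class_centroid(joy_frames)
observations = distance_features(joy_frames, centroid)
print(observations.shape)  # (50,) -- one distance per frame
```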
65

Children's Self-reported Emotions and Emotional Facial Expressions Following Moral Transgressions

Dys, Sebastian P. 22 November 2013
This study examined self-reported emotions and emotional facial expressions following moral transgressions using an ethnically diverse sample of 242 4-, 8-, and 12-year-old children. Self-reported emotions were examined in response to three transgression contexts: an intentional harm, an instance of social exclusion, and an omission of a prosocial duty. Children’s emotional expressions of sadness, happiness, anger, fear and disgust were analyzed immediately after being asked how they would feel if they had committed one of the described transgressions. Emotional expressions were scored using automated emotion recognition software. Four-year-olds reported significantly more happiness as compared to 8- and 12-year-olds. In addition, self-reports of sadness decreased between 8- and 12-year-olds, while self-reported guilt increased between these age groups. Furthermore, 4- and 8-year-olds demonstrated higher levels of facially expressed happiness than 12-year-olds. These findings highlight the role of automatic affective and controlled cognitive processes in the development of children’s emotions following moral transgressions.
66

The Effect of Training on Haptic Classification of Facial Expressions of Emotion in 2D Displays by Sighted and Blind Observers

Abramowicz, Aneta 23 October 2009
The current study evaluated the effects of training on the haptic classification of culturally universal facial expressions of emotion as depicted in simple 2D raised-line drawings. Blindfolded sighted (N = 60) and blind (N = 4) participants took part in Experiments 1 and 3, respectively. A small vision control study (N = 12) was also conducted (Experiment 2) to compare haptic versus visual learning patterns. A hybrid learning paradigm consisting of pre/post- and old/new-training procedures was used to address the nature of the underlying learning process in terms of token-specific learning and/or generalization. During the Pre-Training phase, participants were tested on their ability to classify facial expressions of emotion using the set with which they would subsequently be trained. During the Post-Training phase, they were tested with the training set (Old) intermixed with a completely novel set (New). For sighted observers, visual classification was more accurate than haptic classification; in addition, two of the three adventitiously blind individuals tended to be at least as accurate as the sighted haptic group. All three groups showed similar learning patterns across the learning stages of the experiment: accuracy improved substantially with training; however, while classification accuracy for the Old set remained high during the Post-Training test stage, learning effects for novel (New) drawings were reduced, if present at all. These results imply that learning by the sighted was largely token-specific for both haptic and visual classification. Additional results from a limited number of blind subjects tentatively suggest that the accuracy with which facial expressions of emotion are classified is not impaired when visual loss occurs later in life. / Thesis (Master, Neuroscience Studies), Queen's University, 2009.
68

The relationship between depressive symptoms, rumination and sensitivity to emotion specified in facial expressions

Lang, Charlene Jasmin January 2011
In social interactions it is important for perceivers to be able to differentiate between facial expressions of emotion associated with a congruent emotional experience (genuine expressions) and those that are not (posed expressions). This research investigated the sensitivity of participants with a range of depressive symptom severity and varying levels of rumination to the differences between genuine and posed facial expressions. The suggested mechanisms underlying impairments in emotion recognition were also investigated: the effect of cognitive load (as a distraction from deliberate processing of stimuli) and of attention, and the relationships between these mechanisms and sensitivity across a range of depressive symptoms and levels of rumination. Participants completed an emotion categorisation task in which they were asked whether targets were showing either happiness or sadness, and then whether targets were feeling those emotions. Participants also completed the same task under cognitive load. In addition, a recognition task was used to measure attention. Results showed that, when making judgements about whether targets were feeling sad, lower sensitivity was related to higher levels of depressive symptoms but, contrary to predictions, only under cognitive load. Depressive symptoms and rumination were not related to higher levels of bias towards sad expressions. Recognition did not show a relationship with sensitivity, rumination, or depression scores. Cognitive load did not show the expected effect of improving sensitivity; instead, it produced lower sensitivity scores in some conditions compared with conditions without load. Implications of the results are discussed, as well as directions for future research.
69

Emotion recognition in context

Stanley, Jennifer Tehan 12 June 2008
In spite of evidence for increased maintenance and/or improvement of emotional experience in older adulthood, past work suggests that young adults are better able than older adults to identify emotions in others. Typical emotion recognition tasks employ a single-closed-response methodology. Because older adults are more complex in their emotional experience than young adults, they may approach such response-limited emotion recognition tasks in a qualitatively different manner than young adults. The first study of the present research investigated whether older adults were more likely than young adults to interpret emotional expressions (facial task) and emotional situations (lexical task) as representing a mix of different discrete emotions. In the lexical task, older adults benefited more than young adults from the opportunity to provide more than one response. In the facial task, however, there was a cross-over interaction such that older adults benefited more than young adults for anger recognition, whereas young adults benefited more than older adults for disgust recognition. A second study investigated whether older adults benefit more than young adults from contextual cues. The addition of contextual information improved the performance of older adults more than that of young adults. Age differences in anger recognition, however, persisted across all conditions. Overall, these findings are consistent with an age-related increase in the perception of mixed emotions in lexical information. Moreover, they suggest that contextual information can help disambiguate emotional information.
70

Modelling facial action units using partial differential equations

Ismail, Nur Baini Binti January 2015
This thesis discusses a novel method for modelling facial action units. It presents a facial action unit model based on boundary value problems for the accurate representation of human facial expressions in three dimensions. In particular, a solution to a fourth-order elliptic Partial Differential Equation (PDE) subject to suitable boundary conditions is utilised, where the chosen boundary curves are based on muscle movements defined by the Facial Action Coding System (FACS). The study involved three stages: modelling faces, manipulating faces, and application to simple facial animation. In the first stage, the PDE method is used to model and generate a smooth 3D face. The PDE formulation, which uses only a small set of parameters, contributes to the efficiency of the face representation. In the manipulation stage, a generic PDE face with a neutral expression is manipulated into a face with an expression using PDE descriptors, each of which uniquely represents an action unit. Combining these PDE descriptors produces a generic PDE face bearing an expression; four basic expressions were successfully modelled this way: happiness, sadness, fear, and disgust. An example application is given using a simple animation technique called blendshapes, which uses the generic PDE face to animate the basic expressions.
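The thesis solves a fourth-order elliptic PDE, which also admits derivative (tangency) boundary conditions; as a simpler stand-in, the sketch below fills a surface patch from its boundary curves by relaxing the second-order Laplace equation, which already shows how a smooth surface is generated from boundary data alone.

```python
# Sketch of generating a smooth surface patch from boundary curves by
# relaxing a PDE on a grid. The thesis uses a fourth-order elliptic PDE;
# this stand-in solves the second-order Laplace equation by Jacobi
# iteration, and the boundary "profile curves" are hypothetical.
import numpy as np

n = 64
z = np.zeros((n, n))                 # surface heights on an n x n grid
x = np.linspace(0.0, 1.0, n)
z[0, :] = np.sin(np.pi * x)          # hypothetical boundary profile curves
z[-1, :] = 0.5 * np.sin(np.pi * x)
z[:, 0] = 0.0
z[:, -1] = 0.0

for _ in range(2000):                # Jacobi relaxation toward the harmonic surface
    z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] +
                            z[1:-1, :-2] + z[1:-1, 2:])

print(z[n // 2, n // 2])             # interior height determined by the boundary
```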
