About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

A facial animation model for expressive audio-visual speech

Somasundaram, Arunachalam. January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 131-139).
102

DSP-Based Facial Expression Recognition System

Hsu, Chen-wei 04 July 2005 (has links)
This thesis develops a facial expression recognition system based on a DSP. Most facial expression recognition systems assume that the human face has already been found, that the background colors are simple, or that the facial feature points are extracted manually; only a few recognition systems are automatic and complete. This thesis presents a complete facial expression system: images are captured by a CCD camera, and the DSP locates the human face, extracts the facial feature points, and recognizes the facial expression automatically. The recognition system is divided into four sub-systems: an image capture system, a genetic-algorithm face location system, a facial feature point extraction system, and a fuzzy-logic facial expression recognition system. The image capture system uses the CCD camera to capture the facial expression image to be recognized against any background, and transmits the image data to SRAM on the DSP through the DSP's PPI interface. The face location system uses a genetic algorithm to find the position of the human face in the image from facial skin color and ellipse information, regardless of the size of the face or the complexity of the background. The feature point extraction system finds 16 facial feature points in the located face using a variety of image processing techniques. The facial expression recognition system computes facial action units from the 16 feature points, fuzzifies them, and judges among four facial expressions (happiness, anger, surprise, and neutral) using fuzzy rule bases. Experimental results show that the system achieves good recognition rates and recognition speed.
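The abstract names the pipeline but not its rule base. Below is a minimal sketch of the fuzzy rule-based recognition stage; the action-unit names, membership breakpoints, and rules are illustrative assumptions, since the thesis derives its own action units from the 16 tracked feature points.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(mouth_open, mouth_corner_raise, brow_raise):
    """Map normalized action-unit measurements to fuzzy membership degrees."""
    return {
        "mouth_wide_open": tri(mouth_open, 0.4, 0.8, 1.2),
        "corners_up":      tri(mouth_corner_raise, 0.3, 0.7, 1.1),
        "corners_down":    tri(mouth_corner_raise, -1.1, -0.7, -0.3),
        "brows_raised":    tri(brow_raise, 0.3, 0.7, 1.1),
        "brows_lowered":   tri(brow_raise, -1.1, -0.7, -0.3),
    }

def classify(degrees):
    """Apply min-style fuzzy rules and return the strongest expression."""
    scores = {
        "surprise":  min(degrees["mouth_wide_open"], degrees["brows_raised"]),
        "happiness": degrees["corners_up"],
        "anger":     min(degrees["corners_down"], degrees["brows_lowered"]),
    }
    # Neutral wins when no other rule fires strongly.
    scores["neutral"] = 1.0 - max(scores.values())
    return max(scores, key=scores.get)

print(classify(fuzzify(mouth_open=0.9, mouth_corner_raise=0.1, brow_raise=0.8)))
# -> surprise
```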
103

Recognition of emotion in facial expressions by children with language impairment

Stott, Dorthy A., January 2008 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Communication Disorders, 2008. / Includes bibliographical references (p. 37-44).
104

Attention and neural response to gaze and emotion cues in the development of autism and autism spectrum disorders

Davies, Mari Sian, January 2009 (has links)
Thesis (Ph. D.)--UCLA, 2009. / Vita. Description based on print version record. Includes bibliographical references (leaves 84-98).
105

Quantitative analysis of facial reconstructive surgery : facial morphology and expression

Lee, Ju Hun 04 September 2015 (has links)
The face is an integral part of one’s self-concept and unquestionably the most important attribute used to distinguish one's identity. A growing body of literature demonstrates that any condition that results in facial disfigurement can have a profound adverse impact on one's psychological and social functioning. In this respect, patients with facial disfigurements are at higher risk of experiencing psychosocial difficulties than others. Owing to injuries or illnesses such as cancer, patients undergo reconstructive surgeries both to recover their facial function and to reduce the adverse impact of facial disfigurements on their psychosocial functioning. However, since surgical planning and evaluation of reconstructive outcomes still rely heavily on surgeons' qualitative assessments, it is challenging to measure surgery outcomes and, therefore, difficult to improve surgical practice. Thus, this dissertation research aims to help patients suffering from facial disfigurement by developing quantitative measures that are 1) related to human perception of faces and 2) account for a patient's internal status (i.e., psychosocial functioning). Such measures can be used to improve surgical practice and assist patients with disfigurement in adjusting psychosocially. Specifically, this dissertation proposes quantitative measures of facial morphology and expression that are closely related to overall facial attractiveness and a patient's psychosocial functioning. Such measures will allow surgeons to quantitatively plan and evaluate reconstructive surgeries. In addition, this dissertation introduces a modeling technique to simulate disfigurement on novel faces with control over the type, location, and severity of disfigurement. This modeling technique is important since it can help patients with facial disfigurement gain a more accurate understanding of how they are viewed in society, which has a strong potential to facilitate their psychosocial adjustment. This dissertation provides a new perspective on how to help patients with facial disfigurement address challenging problems in facial reconstruction, aesthetic understanding, and psychosocial actualization. It is hoped that this work has shown that multiple benefits could be realized from future studies utilizing the modeling technique to understand human perception of facial disfigurement and thereby to develop quantitative measures that are closely associated with human perception.
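The abstract does not reproduce the proposed measures themselves. As a hedged illustration of the kind of landmark-based morphology measure involved, here is a minimal bilateral-asymmetry score; the landmark pairs and coordinates are invented for the example and are not the dissertation's measures.

```python
import numpy as np

def asymmetry_score(landmarks, pairs, midline_x):
    """Mean distance between each left landmark and its mirrored right mate.

    landmarks : (N, 2) array of (x, y) points
    pairs     : list of (left_idx, right_idx) bilateral landmark pairs
    midline_x : x-coordinate of the facial midline
    """
    total = 0.0
    for li, ri in pairs:
        mirrored = landmarks[ri].copy()
        mirrored[0] = 2 * midline_x - mirrored[0]  # reflect across the midline
        total += np.linalg.norm(landmarks[li] - mirrored)
    return total / len(pairs)  # 0 = perfectly symmetric

pts = np.array([[30.0, 40.0], [72.0, 41.0],   # eye corners (left, right)
                [40.0, 80.0], [61.0, 80.0]])  # mouth corners (left, right)
print(asymmetry_score(pts, pairs=[(0, 1), (2, 3)], midline_x=50.0))
```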
106

Increasing ecological validity in studies of facial attractiveness : effects of motion and expression on attractiveness judgements

Chang, Helen Yai-Jane January 2005 (has links)
While our understanding of what makes a face attractive has been greatly furthered in recent decades, the stimuli used in much of the foregoing research (static images with neutral expressions) bear little resemblance to the faces with which we normally interact. In our social interactions, we frequently evaluate faces that move and are expressive, and thus, it is important to evaluate whether motion and expression influence ratings of attractiveness; this was the central aim of the experiments in this dissertation. Using static and dynamic stimuli with neutral or positive expression, the effects of motion and expression were also tested in combination with other factors known to be relevant to attractiveness judgements: personality attributions, sex-typicality and cultural influence. In general, the results from this set of experiments show that judgements of moving, expressive stimuli do differ, sometimes radically, from judgements made of more traditional types of stimuli. Motion and positive expression were both found to increase ratings of attractiveness reliably in most experiments, as well as across cultures, and in some instances, showed strong sex-specific effects. Intriguing sex differences were also found in personality trait ratings of the stimuli, particularly for male faces; while criteria for female faces remained relatively constant across all conditions, trait ratings associated with attractiveness for male faces were dependent on particular combinations of motion and expression. Finally, in line with previous research, cross-cultural experiments showed general agreement between Japanese and Caucasian raters, but also suggested slight, culture-specific differences in preferences for expression and motion. This set of experiments has integrated the factors of motion, expression, sex-typicality, personality and cultural influence together in order to bring a greater degree of ecological validity into attractiveness studies. These findings offer major implications for researchers studying attractiveness, particularly that of males, and suggest that motion and expression are important dimensions that should be considered in future research while simultaneously placing a caution on the interpretation of findings made with static stimuli. Suggestions are also made for further research in light of the present findings.
107

The developmental course of children’s free-labeling responses to facial expressions

Widen, Sherrilea E. 11 1900 (has links)
The current study investigated the developmental course of how young children label various facial expressions of emotion. 160 children (2 to 5 years) freely produced labels for six prototypical facial expressions of emotion and six animals. Even 2-year-olds were able to correctly label 5 of 6 animals, but the proportion of correct specific emotion category responses for this age group was < .30 for each of the six facial expressions. The 5-year-olds' proportion of correct specific emotion category labels was at ceiling for the happy and angry faces, but significantly lower for each of the other four facial expressions, and at floor level for the disgust face. The type of errors in labeling facial expressions changed with age: when incorrect, the youngest children produced any emotion label; older children produced labels of the correct valence; and the majority of the 5-year-olds' responses were of the correct specific emotion category. These results indicate that the free-labeling task per se is not too difficult even for 2-year-olds, but that children's use of emotion terms is not initially linked to facial expressions. Thus, the children's production of emotion terms far exceeded their proportion of correct specific emotion category labels. With age, children's implicit definition of emotion terms develops to include the associated facial expression, though this process is not complete for all expressions before the age of 6 years.
108

Reconstruction of Complete Head Models with Consistent Parameterization

Aghayan, Niloofar 16 April 2014 (has links)
This thesis introduces an efficient and robust approach for 3D reconstruction of complete head models with consistent parameterization and personalized shapes from several possible inputs. The system input consists of Cyberware laser-scanned data, for which we performed the scanning ourselves, as well as publicly available face data in which (i) a facial expression may or may not be present and (ii) only partial head information may exist, for instance only the front of the face without the back of the head. Our method starts with a surface reconstruction step that either converts point clouds into a mesh structure or fills missing points on a triangular mesh. It is followed by a registration process that unifies the representation of all meshes. Afterward, a photo-cloning method is used to extract an adequate set of features semi-automatically from snapshots taken of the front and left views of the provided range data. We modify Radial Basis Function (RBF) deformation so that it is based not only on distance but also on regional information. Using the feature point sets and the modified RBF deformation, a generic mesh can be manipulated in a way that properly handles closed eyes and mouth movements such as separating the upper lip from the lower lip. In other words, this mesh modification method makes the construction of various facial expressions possible. Moreover, new functions are added whereby a generic model can be manipulated based on feature point sets to recover missing parts of the input face, such as the ears, the back of the head, and the neck. After the feature-based deformation using modified radial basis functions, a fine mesh modification method based on model points extracts the fine details from the available range data. Then, post-refinement procedures employing RBF deformation and the averaging of neighboring points make the surface of the reconstructed 3D head smoother and more uniform. Because flaws and defects such as flipped triangles, self-intersections, or degenerate faces can exist on the mesh surface, an automatic repair approach is leveraged to clean up the entire surface of the mesh. Experiments performed on various models show that our method is both accurate, in terms of full head reconstruction from the input data, and efficient, in terms of execution time. Our method also aims to require as little user interaction as possible.
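A minimal sketch of feature-driven RBF mesh deformation, the mechanism at the core of the approach described above; the thesis augments RBFs with regional information, so this plain distance-based version with a biharmonic kernel is a simplified assumption.

```python
import numpy as np

def rbf_deform(vertices, src_feats, dst_feats, eps=1e-8):
    """Warp mesh vertices so src feature points land on dst feature points."""
    def phi(r):
        return r  # biharmonic kernel in 3D, phi(r) = r; the thesis kernel may differ

    n = len(src_feats)
    # Pairwise kernel matrix between source feature points.
    A = phi(np.linalg.norm(src_feats[:, None] - src_feats[None, :], axis=-1))
    A += eps * np.eye(n)  # small regularization for numerical stability
    # Solve for weights that reproduce the feature displacements exactly.
    weights = np.linalg.solve(A, dst_feats - src_feats)  # shape (n, 3)
    # Interpolate displacements at every mesh vertex.
    K = phi(np.linalg.norm(vertices[:, None] - src_feats[None, :], axis=-1))
    return vertices + K @ weights

verts = np.random.rand(1000, 3)          # stand-in for a generic head mesh
src = verts[:5].copy()                   # 5 tracked feature points
dst = src + np.array([0.0, 0.05, 0.0])   # e.g. raise them slightly
deformed = rbf_deform(verts, src, dst)
```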
109

Decoding Faces: The Contribution of Self-Expressiveness Level and Mimicry Processes to Emotional Understanding

Maas, Casey 01 January 2014 (has links)
Facial expressions provide valuable information in making judgments about internal emotional states. Evaluation of facial expressions can occur through mimicry processes via the mirror neuron system (MNS) pathway, where a decoder mimics a target’s facial expression and proprioceptive perception prompts emotion recognition. Female participants rated emotional facial expressions when mimicry was inhibited by immobilization of facial muscles and when mimicry was uncontrolled, and were evaluated for self-expressiveness level. A mixed ANOVA was conducted to determine how self-expressiveness level and manipulation of facial muscles impacted recognition accuracy for facial expressions. Main effects of self-expressiveness level and facial muscle manipulation were not found to be significant (p > .05), nor did these variables appear to interact (p > .05). The results of this study suggest that an individual’s self-expressiveness level and use of mimicry processes may not play a central role in emotion recognition.
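For readers who want to reproduce this kind of design, a minimal sketch of a 2 (self-expressiveness: between-subjects) x 2 (mimicry condition: within-subjects) mixed ANOVA follows. The pingouin library and the accuracy values are assumptions for illustration; neither comes from the thesis.

```python
import pandas as pd
import pingouin as pg  # assumed analysis library; not named in the thesis

# Illustrative data: one accuracy score per subject per mimicry condition.
df = pd.DataFrame({
    "subject":        [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "expressiveness": ["high"] * 6 + ["low"] * 6,
    "mimicry":        ["free", "inhibited"] * 6,
    "accuracy":       [.82, .79, .85, .80, .78, .74, .77, .76, .74, .75, .80, .78],
})

# Mixed ANOVA: mimicry is within-subjects, expressiveness is between-subjects.
aov = pg.mixed_anova(data=df, dv="accuracy", within="mimicry",
                     subject="subject", between="expressiveness")
print(aov[["Source", "F", "p-unc"]])
```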
110

Cognitive aspects of emotional expression processing

Le Gal, Patricia Margaret January 1999 (has links)
This thesis investigates the hypothesis that emotions play an influential role in cognition. Interference between facial emotional expression processing and selected tasks is measured using a variety of experimental methods. Prior to the main experimental chapters, the collection and assessment (Chapter 2, Exp. 1) of stimulus materials is described. Experiments 2-11 then concentrate on the likelihood of interference with other types of information from the face. Findings using a Garner design suggest that, although identity processing may be independent of expression variation, expression processing may be influenced by variation in identity (Exps. 2-4). Continued use of this design with sex (Exps. 6-7) and gaze direction (Exps. 9-10) information appears to support the (mutual) independence of these facial dimensions from expression. This is, however, in contrast to studies that indicate the modification of masculinity judgements by expression (Exp. 5), and the interaction of gaze direction and expression when participants rate how interesting they find a face (Exp. 8). Further to this, a search task (Exp. 11) shows that slower responses to an angry (cf. happy) face looking at us may be due to the presence of an aversive mouth. Experiments 12-15 test for interference in the field of time perception: complex interactions between expression and encoder and decoder sex are indicated. Finally, Experiments 16-17 find that exposure to a sequence in which the majority of faces are angry depresses probability learning, and that prior exposure to varying quantities of angry and happy faces affects our later memory for them. Overall, there is evidence that exposure to emotional expressions may affect other (selected) cognitive processes depending upon which expressions are used and which experimental methods are chosen. It is suggested that future investigations would benefit from techniques that describe the temporal profile of an emotional response.
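As a pointer for readers unfamiliar with the paradigm, Garner interference is usually quantified as the slowdown from a baseline block (irrelevant dimension held constant) to an orthogonal block (irrelevant dimension varies). A minimal sketch follows; the reaction times are invented for illustration and are not data from the thesis.

```python
import numpy as np

# Expression is judged in both blocks; identity is the irrelevant dimension.
baseline_rt   = np.array([512, 498, 530, 505])  # ms, identity held constant
orthogonal_rt = np.array([561, 549, 572, 558])  # ms, identity varies trial to trial

interference = orthogonal_rt.mean() - baseline_rt.mean()
print(f"Garner interference: {interference:.1f} ms")  # > 0 -> dimensions interact
```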
