About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
61

Gender and the role of hormones in the perception of threatening facial expressions

Goos, Lisa Marie. January 1998
Thesis (M.A.)--York University, 1998. Graduate Programme in Psychology. Typescript. Includes bibliographical references (leaves 49-52). Also available online: http://wwwlib.umi.com/cr/yorku/fullcit?pMQ39194
62

Are paranoid schizophrenia patients really more accurate than other people at recognizing spontaneous expressions of negative emotion? : a study of the putative association between emotion recognition and thinking errors in paranoia

St-Hilaire, Annie. January 2008
Thesis (Ph.D.)--Kent State University, 2008. Title from PDF title page (viewed Nov. 10, 2009). Advisor: Nancy Docherty. Keywords: schizophrenia, paranoia, emotion recognition, posed expressions, spontaneous expressions, cognition. Includes bibliographical references (p. 122-144).
63

Maternal predictors of children's facial emotions in mother-child interactions

Lusk, Kathryn Renee Preis. 28 August 2008
This study examined maternal predictors of children's facial expressions of emotion in mother-child interactions. Ninety-four mothers and their 14- to 27-month-old toddlers were observed during a 20-minute interaction. Results demonstrated that two components of maternal sensitivity, supportive behavior and child-oriented motivation, predicted more facial expressions of joy and sadness and less flat affect in children. Maternal autonomy granting, a third component of maternal sensitivity, predicted more facial expressions of anger in children. This study also examined relations between macrosocial variables (i.e., maternal well-being and demographic factors) and children's facial expressions of emotion, and how maternal sensitivity mediated such relations. High maternal education was directly related to fewer facial expressions of sadness and anger, high SES was related to more facial expressions of joy, and both greater marital satisfaction and social support were related to more facial expressions of anger. Supportive behavior mediated the associations between maternal depressive symptoms and both low joy and high flat affect, between marital satisfaction and low flat affect, between maternal education and high joy, and between family income and high joy. Child-oriented motivation mediated the associations between maternal depressive symptoms and both high flat affect and low sadness. Findings suggest that it is important to consider multiple measures of maternal sensitivity, as well as the broader macrosocial context in which the parent-child relationship is embedded, when examining children's facial expressions of emotion in mother-child interactions.
64

Facial expression recognition with temporal modeling of shapes

Jain, Suyog Dutt. 20 September 2011
Conditional Random Fields (CRFs) are a discriminative, supervised approach to simultaneous sequence segmentation and frame labeling. Latent-Dynamic Conditional Random Fields (LDCRFs) incorporate hidden state variables within CRFs, modeling sub-structure motion patterns and the dynamics between labels. Motivated by the success of LDCRFs in gesture recognition, we propose a framework for automatic facial expression recognition from continuous video sequences by modeling temporal variations within shapes using LDCRFs. We show that the proposed approach outperforms CRFs at recognizing facial expressions. Using Principal Component Analysis (PCA), we study the separability of the expression classes in low-dimensional projected spaces. By comparing the performance of CRFs and LDCRFs against that of Support Vector Machines (SVMs) and a template-based approach, we demonstrate that temporal variations within shapes are crucial for classifying expressions, especially those with small facial motion such as anger and sadness. We also show empirically that using only changes in facial appearance over time, without the shape variations, fails to achieve high performance for facial expression recognition. This reflects the importance of geometric deformations of the face in recognizing expressions.
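The PCA separability analysis this abstract describes can be sketched in a few lines. The sketch below uses synthetic stand-in data (random "landmark" vectors for two hypothetical expression classes with different means); the class labels, landmark count, and spread are illustrative assumptions, not the thesis's actual shape features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "shape" vectors: flattened (x, y) facial-landmark coordinates
# for two hypothetical expression classes (e.g., joy vs. anger).
n_per_class, n_landmarks = 50, 34
joy = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 2 * n_landmarks))
anger = rng.normal(loc=3.0, scale=1.0, size=(n_per_class, 2 * n_landmarks))
X = np.vstack([joy, anger])

# PCA via SVD on the mean-centered data matrix; the rows of Vt are the
# principal directions, ordered by explained variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
projected = Xc @ Vt[:2].T  # project onto the first two principal components

# Separability check: distance between class means along PC1, relative to
# the within-class spread, indicates how well the classes separate in the
# low-dimensional projected space.
m_joy = projected[:n_per_class, 0].mean()
m_anger = projected[n_per_class:, 0].mean()
spread = projected[:, 0].std()
print(abs(m_joy - m_anger), spread)
```

With well-separated class means, the first principal component tends to align with the between-class direction, which is why a 2-D PCA projection can already reveal which expression classes are separable.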
65

Facial emotion recognition ability of children in Hong Kong

Chan, Pui-shan, Vivien. January 2002
Published or final version; abstract and table of contents available. Clinical Psychology. Master of Social Sciences.
66

Facial emotion recognition after subcortical cerebrovascular diseases

張晶凝, Cheung, Ching-ying, Crystal. January 2000
Published or final version. Psychology. Master of Philosophy.
67

Experimenter audience effects on young adults' facial expressions during pain.

Badali, Melanie. 05 1900
Facial expression has been used as a measure of pain in clinical and experimental studies. The Sociocommunications Model of Pain (T. Hadjistavropoulos, K. Craig, & S. Fuchs-Lacelle, 2004) characterizes facial movements during pain as both expressions of inner experience and communications to other people that must be considered in the social contexts in which they occur. While research demonstrates that specific facial movements may be outward manifestations of pain states, less attention has been paid to the extent to which contextual factors influence facial movements during pain. Experimenters are an inevitable feature of research studies on facial expression during pain and study of their social impact is merited. The purpose of the present study was to investigate the effect of experimenter presence on participants’ facial expressions during pain. Healthy young adults (60 males, 60 females) underwent painful stimulation induced by a cold pressor in three social contexts: alone; alone with knowledge of an experimenter watching through a one-way mirror; and face-to-face with an experimenter. Participants provided verbal self-report ratings of pain. Facial behaviours during pain were coded with the Facial Action Coding System (P. Ekman, W. Friesen, & J. Hager, 2002) and rated by naïve judges. Participants’ facial expressions of pain varied with the context of the pain experience condition but not with verbally self-reported levels of pain. Participants who were alone were more likely to display facial actions typically associated with pain than participants who were being observed by an experimenter who was in another room or sitting across from them. Naïve judges appeared to be influenced by these facial expressions as, on average, they rated the participants who were alone as experiencing more pain than those who were observed. Facial expressions shown by people experiencing pain can communicate the fact that they are feeling pain. 
However, facial expressions can be influenced by factors in the social context such as the presence of an experimenter. The results suggest that facial expressions during pain made by adults should be viewed at least in part as communications, subject to intrapersonal and interpersonal influences, rather than direct read-outs of experience.
68

A comparison of two computer-based programs designed to improve facial expression understanding in children with autism

Sung, Andrew Nock-Kwan Unknown Date
No description available.
69

Connectionist models of the perception of facial expressions of emotion

Mignault, Alain, 1962-. January 1999
Two connectionist models are developed that predict humans' categorization of facial expressions of emotion and their judgements of the similarity between two facial expressions. For each stimulus, the models predict the subjects' judgement, the entropy of the response, and the mean response time (RT). Both models comprise a connectionist component, which predicts the response probabilities, and a response generator, which predicts the mean RT. The input to the categorization model is a preprocessed picture of a facial expression, while the hidden-unit representations generated by the first model for two facial expressions constitute the input to the similarity model. The data collected from 45 subjects in a single-session experiment involving a categorization task and a similarity task provided the target outputs to train both models. Two response generators are tested. The first, called the threshold model, is a linear integrator with threshold inspired by Lacouture and Marley's (1991) model. The second, called the channel model, is a new approach that assumes a linear relationship between the entropy of the response and the mean RT; it is inspired by Lachman's (1973) interpretation of Shannon's (1948) entropy equation. The categorization model explains 50% of the variance in mean RT for the training set. It yields an almost perfect categorization of the pure emotional stimuli in the training set and is about 70% correct on the generalization set. A two-dimensional representation of emotions in the hidden-unit space reproduces most of the properties of emotional spaces found by multidimensional scaling in this study as well as in other studies (e.g., Alvarado, 1996). The similarity model explains 53% of the variance in mean similarity judgements, provides a good account of subjects' mean RT, and even predicts an interesting bow effect found in the subjects' data.
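The "channel model" idea above, that mean RT is linear in the Shannon entropy of the response distribution, can be illustrated with a minimal sketch. The intercept and slope values below are invented for illustration, not the thesis's fitted coefficients:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a response-probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def predicted_rt(probs, a=450.0, b=120.0):
    """Channel-model sketch: mean RT assumed linear in response entropy.

    a (base RT in ms) and b (ms per bit) are hypothetical coefficients;
    in the thesis they would be estimated from subjects' data.
    """
    return a + b * shannon_entropy(probs)

# A confident categorization (low entropy) yields a faster predicted RT
# than an ambiguous one (high entropy).
confident = [0.9, 0.05, 0.05]
ambiguous = [0.4, 0.3, 0.3]
print(predicted_rt(confident), predicted_rt(ambiguous))
```

The appeal of this formulation is that a single pair of coefficients links the connectionist component's output probabilities directly to a behavioral prediction: harder (more ambiguous) stimuli produce flatter response distributions, higher entropy, and therefore longer predicted RTs.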
70

Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities / Emotion and learning disabilities

Bloom, Elana. January 2005
Students with learning disabilities (LD) have been found to exhibit social difficulties compared to those without LD (Wong, 2004). Recognition, expression, and understanding of facial expressions of emotion have been shown to be important for social functioning (Custrini & Feldman, 1989; Philippot & Feldman, 1990). LD subtypes have been studied (Rourke, 1999), and children with nonverbal learning disabilities (NVLD) have been observed to be worse at recognizing facial expressions than children with verbal learning disabilities (VLD), children with no learning disability (NLD; Dimitrovsky, Spector, Levy-Shiff, & Vakil, 1998; Dimitrovsky, Spector, & Levy-Shiff, 2000), and children with psychiatric difficulties but without LD (Petti, Voelker, Shore, & Hyman-Abello, 2003). However, little has been done in this area with adolescents with NVLD. Recognition, expression, and understanding of facial expressions of emotion, as well as general social functioning, have yet to be studied simultaneously among adolescents with NVLD, NLD, and general learning disabilities (GLD). The purpose of this study was to examine the abilities of adolescents with NVLD, GLD, and without LD to recognize, express, and understand facial expressions of emotion, in addition to their general social functioning.

Adolescents aged 12 to 15 were screened for LD and NLD using the Wechsler Intelligence Scale for Children, Third Edition (WISC-III; Wechsler, 1991) and the Wide Range Achievement Test, Third Edition (WRAT3; Wilkinson, 1993), and subtyped into NVLD and GLD groups based on the WRAT3. The NVLD (n = 23), matched NLD (n = 23), and comparable GLD (n = 23) groups completed attention, mood, and neuropsychological measures. The adolescents' ability to recognize (Pictures of Facial Affect; Ekman & Friesen, 1976), express, and understand facial expressions of emotion was assessed, along with their general social functioning.

Results indicated that the GLD group was significantly less accurate at recognizing and understanding facial expressions of emotion than the NVLD and NLD groups, which did not differ from each other. No differences emerged between the NVLD, NLD, and GLD groups on the expression or social functioning tasks. The neuropsychological measures did not account for a significant portion of the variance on the emotion tasks. Implications regarding the severity of LD are discussed.
