About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
171

The Effect Of Emotional Facial Expressions Of A Virtual Character On People

Karadoganer, Alper 01 September 2010 (has links) (PDF)
This thesis investigates the effect of emotional facial expressions of a virtual character on people's performance in interactive digital tasks. The basic and universal emotions are used in the study. Facial expressions of these emotions are created according to the Facial Action Coding System (FACS), a system that describes the movements of the face. The patterns of co-occurrence of Action Units (the descriptions of facial movements defined in FACS) for the basic emotions are also implemented in the emotional facial expressions, following findings reported in the literature. A study was conducted to validate the recognition of the emotion-specific facial expressions, which were built with the Poser software. To investigate the effect of emotional facial expressions on people's performance in digital interactive tasks in a virtual environment, a digital interactive application created with the Unity software was used in the final study of the thesis.
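As an illustration of the FACS-driven approach this abstract describes, here is a minimal Python sketch mapping basic emotions to commonly cited Action Unit prototypes. The AU sets are illustrative only (they vary across the FACS literature), and `rig.set_morph` is a hypothetical stand-in for a character tool's morph-target API, not the actual Poser interface.

```python
# Commonly cited AU prototypes for the basic emotions (illustrative only;
# exact sets and intensities vary across the FACS literature).
EMOTION_AUS = {
    "happiness": [6, 12],                  # cheek raiser, lip corner puller
    "sadness":   [1, 4, 15],               # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  [1, 2, 5, 26],            # brow raisers, upper lid raiser, jaw drop
    "fear":      [1, 2, 4, 5, 7, 20, 26],
    "anger":     [4, 5, 7, 23],
    "disgust":   [9, 15, 16],              # nose wrinkler, lip corner depressor, lower lip depressor
}

def apply_expression(rig, emotion: str, intensity: float = 1.0) -> None:
    """Drive a face rig's AU morph targets for one emotion (hypothetical rig API)."""
    for au in EMOTION_AUS[emotion]:
        rig.set_morph(f"AU{au}", intensity)
```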
172

Recognizing facial expression of virtual agents, synthetic faces, and human faces: the effects of age and character type on emotion recognition

Beer, Jenay Michelle 08 April 2010 (has links)
An agent's facial expression may communicate emotive state to users both young and old. The ability to recognize emotions has been shown to differ with age, with older adults more commonly misidentifying the facial emotions of anger, fear, and sadness. This research study examined whether emotion recognition of facial expressions differed between different types of on-screen agents and between age groups. Three on-screen characters were compared: a human, a synthetic human, and a virtual agent. In this study, 42 younger (age 18-28) and 42 older (age 65-85) adults completed an emotion recognition task with static pictures of the characters demonstrating four basic emotions (anger, fear, happiness, and sadness) and neutral. The human face resulted in the highest proportion match, followed by the synthetic human, and then the virtual agent with the lowest proportion match. Both the human and synthetic human faces showed age-related differences for the emotions anger, fear, sadness, and neutral, with younger adults achieving a higher proportion match. The virtual agent showed age-related differences for the emotions anger, fear, happiness, and neutral, with younger adults again achieving a higher proportion match. The data analysis and interpretation of the present study differed from previous work in two ways. First, the misattributions participants made when identifying emotion were investigated. Second, a similarity index of the feature placement between any two virtual agent emotions was calculated, suggesting that emotions were commonly misattributed as other emotions similar in appearance. Overall, these results suggest that age-related differences extend beyond human faces to other types of on-screen characters, and that differences between older and younger adults in emotion recognition may be further explained by perceptual discrimination between two emotions of similar feature appearance.
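The abstract does not specify how the similarity index is computed; the following Python sketch shows one plausible realization, under the assumption that each expression is represented by (x, y) facial landmark coordinates and compared by normalized Euclidean distance. Both the representation and the formula are assumptions for illustration, not the study's actual method.

```python
import numpy as np

def similarity_index(landmarks_a: np.ndarray, landmarks_b: np.ndarray) -> float:
    """Similarity of feature placement between two expressions.

    Each argument is an (n_points, 2) array of facial landmark coordinates;
    returns a value in (0, 1], where 1.0 means identical placement.
    """
    a = landmarks_a - landmarks_a.mean(axis=0)  # remove translation
    b = landmarks_b - landmarks_b.mean(axis=0)
    a = a / np.linalg.norm(a)                   # remove overall scale
    b = b / np.linalg.norm(b)
    return 1.0 / (1.0 + float(np.linalg.norm(a - b)))
```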
173

Human-IntoFace.net: May 6th, 2003

Bennett, Troy. January 2003 (has links)
Thesis (M.F.A.)--Rochester Institute of Technology, 2003. / Typescript. Includes bibliographical references (leaves 21-23).
174

Posed and genuine smiles: an evoked response potentials study: a thesis submitted in fulfilment of the requirements for the degree of Master of Science in Psychology at the University of Canterbury

Ottley, M. C. January 2009 (has links)
Thesis (M. Sc.)--University of Canterbury, 2009. / Typescript (photocopy). Includes bibliographical references (leaves 82-94). Also available via the World Wide Web.
175

Social responses to virtual humans: the effect of human-like characteristics

Park, Sung Jun. January 2009 (has links)
Thesis (Ph.D.)--Psychology, Georgia Institute of Technology, 2010. / Committee Chair: Richard Catrambone; Committee Member: Gregory Corso; Committee Member: Jack Feldman; Committee Member: John T. Stasko; Committee Member: Wendy A. Rogers. Part of the SMARTech Electronic Thesis and Dissertation Collection.
176

A scalable metric learning based voting method for expression recognition

Wan, Shaohua 09 October 2013 (has links)
In this research work, we propose a facial expression classification method using metric learning-based k-nearest neighbor (kNN) voting. To classify a facial expression from frontal face images accurately, we first learn a distance metric from training data that characterizes the structure of the feature space, then use this metric to retrieve the nearest neighbors from the training dataset, and finally output the classification decision accordingly. An expression is represented as a fusion of face shape and texture. This representation is obtained by registering a face image with a landmarking shape model and extracting Gabor features from local patches around the landmarks. It achieves robustness and effectiveness by combining an ensemble of local patch feature detectors at a global shape level. A naive implementation of metric learning-based kNN voting would incur a time complexity proportional to the size of the training dataset, which precludes its use with very large datasets. To scale to larger databases, an approach similar to that in [24] is used to achieve approximate yet efficient ML-based kNN voting based on Locality Sensitive Hashing (LSH). A query example is hashed directly to a bucket of a pre-computed hash table where candidate nearest neighbors can be found, so there is no need to search the entire database. Experimental results on the Cohn-Kanade database and the Moving Faces and People database show that both ML-based kNN voting and its LSH approximation outperform the state of the art, demonstrating the superiority and scalability of our method.
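To make the retrieval step concrete, here is a minimal Python sketch of LSH-approximated kNN voting. It assumes feature vectors have already been projected through the learned metric, and it uses the random-hyperplane hash family as an illustrative choice; the thesis follows the specific scheme of [24], which is not reproduced here.

```python
import numpy as np
from collections import Counter, defaultdict

class LSHKNNVoter:
    """Approximate kNN voting via random-hyperplane LSH (illustrative sketch)."""

    def __init__(self, dim: int, n_bits: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))  # one hyperplane per hash bit
        self.table = defaultdict(list)                    # bucket key -> training indices
        self.X = None
        self.y = None

    def _hash(self, x: np.ndarray) -> int:
        bits = self.planes @ x > 0                        # sign pattern forms the bucket key
        return int("".join("1" if b else "0" for b in bits), 2)

    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        self.X, self.y = X, y
        for i, x in enumerate(X):
            self.table[self._hash(x)].append(i)

    def predict(self, x: np.ndarray, k: int = 5):
        cand = self.table.get(self._hash(x), [])          # search only one bucket
        if not cand:                                      # empty bucket: fall back to a full scan
            cand = list(range(len(self.X)))
        d = np.linalg.norm(self.X[cand] - x, axis=1)
        nearest = np.asarray(cand)[np.argsort(d)[:k]]
        return Counter(self.y[nearest]).most_common(1)[0][0]  # majority vote
```

Increasing `n_bits` shrinks the buckets and speeds up queries at the cost of recall; practical LSH schemes therefore maintain several hash tables and merge their candidate sets.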
177

The influence of stigma of mental illnesses on decoding and encoding of verbal and nonverbal messages

Imai, Tatsuya 25 October 2013 (has links)
Stigmas associated with depression and schizophrenia have been found to negatively impact the communication that those with mental illness have with others in face-to-face interactions (e.g., Lysaker, Roe, & Yanos, 2007; Nicholson & Sacco, 1999). This study examined how stigma affects the cognitions, emotions, and behaviors of interactants without a mental illness toward those with a mental illness in online interactions. In this experimental study, 412 participants interacted with a hypothetical target on Facebook who was believed to have depression, schizophrenia, or a cavity (i.e., the control group). They were asked to read the target's Facebook profile, respond to a message from the target, and complete measures assessing perceived positive and negative face threats in the target's message, perceived facial expressions of the target, induced affect, predicted outcome value, and rejecting attitudes toward the target. Results revealed that the target labeled as schizophrenic was rejected more and perceived to have lower outcome value than the target without a mental illness or the target labeled as depressive. However, there were no significant differences in any outcome between the depression and control groups. The mixed results are discussed in relation to methodological limitations and possible modifications of previous theoretical arguments. Theoretical and practical contributions are considered, and suggestions for future research are offered.
178

Emotion Recognition and Psychosis-Proneness: Neural and Behavioral Perspectives

Germine, Laura Thi 14 September 2012 (has links)
Schizophrenia is associated with deficits in social cognition and emotion processing, but it is not known how these deficits relate to other domains of neurocognition or whether they might contribute to psychosis development. The current dissertation approaches this question by looking at the relationship between psychosis-proneness and face emotion recognition ability, a core domain of social-emotional processing. Psychosis-proneness was inferred from the presence of psychosis-like characteristics in otherwise healthy individuals, using self-report measures. Face emotion recognition ability was found to be associated with psychosis-proneness across four large web-based samples and one lab sample. These associations were relatively specific and could not be explained by differences in face processing or IQ. Using functional magnetic resonance imaging (fMRI), psychosis-proneness was linked with reduced neural activity in brain regions that underlie normal face emotion recognition, including regions implicated in self-representation. Additional experiments explored psychosis-proneness-related differences in self-representation and revealed a relationship between the cognitive-perceptual (positive) dimensions of psychosis-proneness and (1) flexibility in the body representation (as measured by the rubber hand illusion) and (2) self-referential source memory (but not self-referential recognition memory). Neither of these relationships, however, explained the association between psychosis-proneness and face emotion recognition ability. These findings indicate that psychosis vulnerability is related to neural and behavioral differences in face emotion processing, and that these differences are not a secondary characteristic of psychotic illness. Moreover, poorer emotion recognition ability in psychosis-prone individuals is not explained by generalized performance, IQ, or face processing deficits. Although some dimensions of psychosis-proneness were related to differences in measures of self-representation, no evidence was found that these abnormalities contribute to psychosis-proneness-related differences in emotion recognition ability. / Psychology
179

An investigation of young infants’ ability to match phonetic and gender information in dynamic faces and voice

Patterson, Michelle Louise 11 1900 (has links)
This dissertation explores the nature and ontogeny of infants' ability to match phonetic information, in comparison to non-speech information, in the face and voice. Previous research shows that infants' ability to match phonetic information in face and voice is robust at 4.5 months of age (e.g., Kuhl & Meltzoff, 1982, 1984, 1988; Patterson & Werker, 1999). These findings support claims that young infants can perceive structural correspondences between audio and visual aspects of phonetic input and that speech is represented amodally. It remains unclear, however, specifically what factors allow speech to be perceived amodally and whether the intermodal perception of other aspects of face and voice is like that of speech. Gender is another biologically significant cue that is available in both the face and voice. In this dissertation, nine experiments examine infants' ability to match phonetic and gender information with dynamic faces and voices. Infants were seated in front of two side-by-side video monitors that displayed filmed images of a female or male face, each articulating a vowel sound (/a/ or /i/) in synchrony. The sound was played through a central speaker and corresponded with one of the displays but was synchronous with both. In Experiment 1, 4.5-month-old infants did not look preferentially at the face that matched the gender of the heard voice when presented with the same stimuli that produced a robust phonetic matching effect. In Experiments 2 through 4, vowel and gender information were placed in conflict to determine the relative contribution of each to infants' ability to match bimodal information in the face and voice. The age at which infants do match gender information with my stimuli was determined in Experiments 5 and 6. To explore whether matching phonetic information in face and voice is based on featural or configural information, two experiments examined infants' ability to match phonetic information using inverted faces (Experiment 7) and upright faces with inverted mouths (Experiment 8). Finally, Experiment 9 extended the phonetic matching effect to 2-month-old infants. The experiments in this dissertation provide evidence that, at 4.5 months of age, infants are more likely to attend to phonetic information in the face and voice than to gender information. Phonetic information may have a special salience and/or unity that is not apparent in similar but non-phonetic events. The findings are discussed in relation to key theories of perceptual development.
180

The experience of meaning in the care of patients in the terminal stage of dementia of the Alzheimer type : interpretation of non-verbal communication and ethical demands

Asplund, Kenneth January 1991 (has links)
<p>S. 1-45: sammanfattning, s. 49-132, [2] s.: 7 uppsatser</p> / digitalisering@umu
