81.
Representing facial affect representations in the brain and in behavior / Fine, Eric Michael. January 2006
Thesis (Ph. D.)--University of California, San Diego and San Diego State University, 2006. / Vita. Includes bibliographical references (leaves 178-193).
82.
Differences in psychophysiological reactivity to static and dynamic displays of facial emotion / Springer, Utaka S. January 2005
Thesis (M.S.)--University of Florida, 2005. / Typescript. Title from title page of source document. Document formatted into pages; contains 69 pages. Includes Vita. Includes bibliographical references.
83.
Maternal predictors of children's facial emotions in mother-child interactions / Lusk, Kathryn Renee Preis. 2007
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
84.
Development of a Multisensorial System for Emotions Recognition / FLOR, H. R. 17 March 2017
Automated reading and analysis of human emotion has the potential to be a powerful tool for developing a wide variety of applications, such as human-computer interaction systems, but at the same time it is a very difficult problem because human communication is highly complex. Humans employ multiple sensory systems in emotion recognition; in the same way, an emotionally intelligent machine requires multiple sensors to create an affective interaction with users. Thus, this Master's thesis proposes the development of a multisensorial system for automatic emotion recognition.
The multisensorial system is composed of three sensors, each exploring different emotional aspects: the eye tracker, using the IR-PCR technique, supported studies of visual social attention; the Kinect, in conjunction with the FACS-AU technique, allowed the development of a tool for facial expression recognition; and the thermal camera, using the FT-RoI technique, was employed to detect facial thermal variation. The multisensorial integration of the system made possible a more complete and varied analysis of the emotional aspects, allowing the evaluation of focal attention, valence comprehension, valence expression, facial expression, valence recognition, and arousal recognition. Experiments were performed with sixteen healthy adult volunteers and 105 healthy child volunteers, and the result was a system able to detect eye gaze, recognize facial expressions, and estimate valence and arousal for emotion recognition. The system also has the potential to analyze people's emotions from facial features using contactless sensors in semi-structured environments, such as clinics, laboratories, or classrooms, and to become an embedded tool in robots, endowing these machines with emotional intelligence for a more natural interaction with humans.
Keywords: emotion recognition, eye tracking, facial expression, facial thermal variation, multisensorial integration
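A rough sketch of the fusion step this abstract describes, assuming a simple weighted late-fusion scheme over per-sensor valence/arousal estimates (sensor names, weights, and value ranges are illustrative assumptions, not the thesis's actual configuration):

```python
import numpy as np

# Hypothetical per-sensor (valence, arousal) estimates in [-1, 1]:
# an IR-PCR eye tracker, a Kinect-based FACS-AU classifier, and a
# thermal camera. Names, values, and weights are all assumptions.
estimates = {
    "eye_tracker": np.array([0.2, 0.1]),
    "kinect_facs_au": np.array([0.6, 0.4]),
    "thermal_ft_roi": np.array([0.4, 0.5]),
}
weights = {"eye_tracker": 0.2, "kinect_facs_au": 0.5, "thermal_ft_roi": 0.3}

# Weighted late fusion: combine sensor-level predictions into one estimate.
fused = sum(w * estimates[name] for name, w in weights.items())
valence, arousal = fused
print(f"valence={valence:+.2f}, arousal={arousal:+.2f}")
```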
85.
Experimenter audience effects on young adults' facial expressions during pain / Badali, Melanie. 05 1900
Facial expression has been used as a measure of pain in clinical and experimental studies. The Sociocommunications Model of Pain (T. Hadjistavropoulos, K. Craig, & S. Fuchs-Lacelle, 2004) characterizes facial movements during pain as both expressions of inner experience and communications to other people that must be considered in the social contexts in which they occur. While research demonstrates that specific facial movements may be outward manifestations of pain states, less attention has been paid to the extent to which contextual factors influence facial movements during pain. Experimenters are an inevitable feature of research studies on facial expression during pain and study of their social impact is merited. The purpose of the present study was to investigate the effect of experimenter presence on participants’ facial expressions during pain. Healthy young adults (60 males, 60 females) underwent painful stimulation induced by a cold pressor in three social contexts: alone; alone with knowledge of an experimenter watching through a one-way mirror; and face-to-face with an experimenter. Participants provided verbal self-report ratings of pain. Facial behaviours during pain were coded with the Facial Action Coding System (P. Ekman, W. Friesen, & J. Hager, 2002) and rated by naïve judges. Participants’ facial expressions of pain varied with the context of the pain experience condition but not with verbally self-reported levels of pain. Participants who were alone were more likely to display facial actions typically associated with pain than participants who were being observed by an experimenter who was in another room or sitting across from them. Naïve judges appeared to be influenced by these facial expressions as, on average, they rated the participants who were alone as experiencing more pain than those who were observed. Facial expressions shown by people experiencing pain can communicate the fact that they are feeling pain. However, facial expressions can be influenced by factors in the social context such as the presence of an experimenter. The results suggest that facial expressions during pain made by adults should be viewed at least in part as communications, subject to intrapersonal and interpersonal influences, rather than direct read-outs of experience. / Arts, Faculty of / Psychology, Department of / Graduate
86.
Recognition, expression, and understanding facial expressions of emotion in adolescents with nonverbal and general learning disabilities / Bloom, Elana. January 2005
No description available.
87.
Evaluating Consumer Emotional Response to Beverage Sweeteners through Facial Expression Analysis / Leitch, Kristen Allison. 23 June 2015
Emotional processing and characterization of internal and external stimuli is believed to play an integral role in consumer acceptance or rejection of food products. In this research, three experiments were completed with the ultimate goal of adding to the growing body of research pertaining to food, emotions, and acceptance using traditional affective sensory methods in combination with implicit (uncontrollable) and explicit (cognitive) emotional measures. Sweetness equivalence of several artificial (acesulfame potassium, saccharin, and sucralose) and natural (42% high fructose corn syrup and honey) sweeteners was established relative to a 5% sucrose solution. Differences in consumer acceptability and emotional response to sucrose (control) and four equi-sweet alternatives (acesulfame potassium, high fructose corn syrup, honey, and sucralose) in tea were evaluated using a 9-point hedonic scale, a check-all-that-apply (CATA) emotion term questionnaire (explicit), and automated facial expression analysis (AFEA) (implicit). Facial expression responses and emotion term categorization based on selection frequencies were able to adequately discern differences in emotional response as they related to hedonic liking between sweetener categories (artificial; natural). The potential influence of varying product information on consumer acceptance and emotional responses was then evaluated for three sweeteners (sucrose, ace-K, HFCS) in tea solutions. Observed differences in liking and emotional term characterizations based on the validity of product information for sweeteners were attributed to cognitive dissonance. False informational cues had an observed dampening effect on the implicit emotional response to alternative sweeteners. Significant moderate correlations between liking and several basic emotions supported the belief that implicit emotions are contextually specific. Limitations pertaining to AFEA data collection and emotional interpretation of sweeteners include high panelist variability (within and across panelists), calibration techniques, video quality, software sensitivity, and a general lack of consistency concerning methods of analysis. When used in conjunction with traditional affective methodology and cognitive emotional characterization, AFEA provides an additional layer of valued information about the consumer food experience. / Master of Science in Life Sciences
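The liking-emotion correlations this abstract reports reduce to standard bivariate statistics; a minimal sketch using Pearson correlation (the arrays below are invented for illustration, not the study's measurements):

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative data only: 9-point hedonic liking scores for one sweetener
# and AFEA-derived mean "happy" intensity per panelist (both invented).
liking = np.array([7, 8, 5, 6, 9, 4, 7, 6, 8, 5])
happy = np.array([0.62, 0.71, 0.35, 0.44, 0.80, 0.30, 0.58, 0.41, 0.69, 0.38])

r, p = pearsonr(liking, happy)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```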
88.
THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS / Lin, Alice J. 01 January 2011
Facial expression and animation are important aspects of 3D environments featuring human characters. These animations are frequently used in many kinds of applications, and there have been many efforts to increase their realism. Three aspects still stimulate active research: detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on these three aspects.
A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to directly manipulate it and see immediate results.
Two unique methods for generating real-time, vivid, animated tears have been developed and implemented. One method generates a teardrop that continually changes its shape as the tear drips down the face. The other generates a shedding tear, which seamlessly connects with the skin as it flows along the surface of the face but remains an individual object. Both methods broaden CG techniques and increase the realism of facial expressions.
A new method to automatically set the bones on facial/head models, speeding up the rigging process of a human face, is also developed. To accomplish this, vertices that describe the face/head, as well as relationships between each part of the face/head, are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face at multiple densities, the mean position of the vertices in each group is measured. The time saved with this method is significant.
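A toy sketch of the grouping-and-mean idea behind this bone placement (group names and coordinates are invented; the dissertation's multi-density scheme and distance measurements are more involved):

```python
import numpy as np

# Hypothetical vertex groups on a head mesh, each an (n, 3) array of
# positions. Group names and coordinates are invented for illustration.
rng = np.random.default_rng(0)
groups = {
    "jaw": rng.random((40, 3)),
    "left_brow": rng.random((12, 3)),
    "right_brow": rng.random((12, 3)),
}

# Place one bone per group at the mean position of the group's vertices.
bones = {name: verts.mean(axis=0) for name, verts in groups.items()}
```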
A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed. The approach is to transform the source model into the target model, which then has the same topology as the source model. Displacement vectors are calculated, each vertex in the source model is mapped to the target model, and the spatial relationships of each mapped vertex are constrained.
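The displacement-vector transfer can be sketched as follows, under the simplifying assumption that source and target already share topology and vertex order after the transform step (the actual method also constrains the spatial relationships of mapped vertices):

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """Toy transfer: add the source's per-vertex displacement vectors
    to the target; assumes identical topology and vertex ordering."""
    displacements = src_expr - src_neutral  # (n_vertices, 3)
    return tgt_neutral + displacements

# Invented (n, 3) vertex arrays standing in for real meshes.
rng = np.random.default_rng(1)
src_neutral = rng.random((1000, 3))
src_smile = src_neutral + 0.01 * rng.standard_normal((1000, 3))
tgt_neutral = rng.random((1000, 3))
tgt_smile = transfer_expression(src_neutral, src_smile, tgt_neutral)
```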
89.
Grassmannian Learning for Facial Expression Recognition from Video. January 2014
In this thesis we consider the problem of facial expression recognition (FER) from video sequences. Our method is based on subspace representations and Grassmann manifold based learning. We use Local Binary Patterns (LBP) at the frame level for representing the facial features. Next we develop a model to represent the video sequence in a lower-dimensional expression subspace and also as a linear dynamical system using an Autoregressive Moving Average (ARMA) model. As these subspaces lie on a Grassmann manifold, we use Grassmann manifold based learning techniques such as kernel Fisher Discriminant Analysis with Grassmann kernels for classification. We consider six expressions, namely Angry (AN), Disgust (Di), Fear (Fe), Happy (Ha), Sadness (Sa), and Surprise (Su), for classification. We perform experiments on the extended Cohn-Kanade (CK+) facial expression database to evaluate the expression recognition performance. Our method demonstrates good expression recognition performance, outperforming other state-of-the-art FER algorithms. We achieve an average recognition accuracy of 97.41% using a method based on the expression subspace, kernel-FDA, and a Support Vector Machine (SVM) classifier. Using a simpler classifier, 1-Nearest Neighbor (1-NN), along with kernel-FDA, we achieve a recognition accuracy of 97.09%. We find that, to process a group of 19 frames in a video sequence, LBP feature extraction requires the majority of the computation time (97%), about 1.662 seconds on an Intel Core i3 dual-core platform. However, when only 3 frames (onset, middle, and peak) of a video sequence are used, the computational cost is reduced by about 83.75%, to 260 milliseconds, at the expense of a drop in recognition accuracy to 92.88%. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2014
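The subspace-plus-Grassmann-kernel pipeline admits a compact generic sketch: an orthonormal basis from the SVD of per-frame features, compared with the standard projection kernel k(X, Y) = ||X^T Y||_F^2 (feature dimensions and data below are invented, and this is a generic construction rather than the thesis's exact configuration):

```python
import numpy as np

def expression_subspace(frame_features, dim=5):
    """Orthonormal basis (D x dim) spanning one video's frame features,
    e.g. one LBP histogram per frame in an (n_frames, D) array."""
    # Left singular vectors of the feature matrix give the subspace basis.
    u, _, _ = np.linalg.svd(frame_features.T, full_matrices=False)
    return u[:, :dim]

def projection_kernel(x, y):
    """Projection (Grassmann) kernel: squared Frobenius norm of x^T y."""
    return np.linalg.norm(x.T @ y, "fro") ** 2

# Invented stand-ins: two videos, 19 frames of 59-dim LBP histograms.
rng = np.random.default_rng(2)
video_a, video_b = rng.random((19, 59)), rng.random((19, 59))
k = projection_kernel(expression_subspace(video_a), expression_subspace(video_b))
```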
90.
Emotional avatars : choreographing emotional facial expression animation / Sloan, Robin J. S. January 2011
As a universal element of human nature, the experience, expression, and perception of emotions permeate our daily lives. Many emotions are thought to be basic and common to all humanity, irrespective of social or cultural background. Of these emotions, the corresponding facial expressions of a select few are known to be truly universal, in that they can be identified by most observers without the need for training. Facial expressions of emotion are subsequently used as a method of communication, whether through close face-to-face contact, or the use of emoticons online and in mobile texting. Facial expressions are fundamental to acting for stage and screen, and to animation for film and computer games. Expressions of emotion have been the subject of intense experimentation in psychology and computer science research, both in terms of their naturalistic appearance and the virtual replication of facial movements. From this work much is known about expression universality, anatomy, psychology, and synthesis. Beyond the realm of scientific research, animation practitioners have scrutinised facial expressions and developed an artistic understanding of movement and performance. However, despite the ubiquitous quality of facial expressions in life and research, our understanding of how to produce synthetic, dynamic imitations of emotional expressions which are perceptually valid remains somewhat limited. The research covered in this thesis sought to unite an artistic understanding of expression animation with scientific approaches to facial expression assessment. Acting as both an animation practitioner and as a scientific researcher, the author set out to investigate emotional facial expression dynamics, with the particular aim of identifying spatio-temporal configurations of animated expressions that not only satisfied artistic judgement, but which also stood up to empirical assessment. These configurations became known as emotional expression choreographies. The final work presented in this thesis covers the performative, practice-led research into emotional expression choreography, the results of empirical experimentation (where choreographed animations were assessed by observers), and the findings of qualitative studies (which painted a more detailed picture of the potential context of choreographed expressions). The holistic evaluation of expression animation from these three epistemological perspectives indicated that emotional expressions can indeed be choreographed in order to create refined performances which have empirically measurable effects on observers, and which may be contextualised by the phenomenological interpretations of both student animators and general audiences.