91

Automaticity and Hemispheric Specialization in Emotional Expression Recognition: Examined using a modified Stroop Task

Beall, Paula M. 08 1900 (has links)
The main focus of this investigation was to examine the automaticity of facial expression recognition through valence judgments in a modified photo-word Stroop paradigm. Positive and negative words were superimposed on male and female faces expressing positive (happy) and negative (angry, sad) emotions. Subjects categorized the valence of each stimulus. Gender biases in judgments of expressions (better recognition for male angry and female sad expressions) and the valence hypothesis of hemispheric advantages for emotions (left hemisphere: positive; right hemisphere: negative) were also examined. Four major findings emerged. First, the valence of expressions was processed automatically (robust interference effects). Second, male faces interfered with processing the valence of words. Third, no biases related to the posers' gender were found. Finally, the emotionality of facial expressions and words was processed similarly by both hemispheres.
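An interference effect of this kind is conventionally quantified as the reaction-time cost of valence-incongruent word-face pairings relative to congruent ones. The sketch below shows one plausible way to compute it per subject; the file name and column names are assumptions for illustration, not taken from the thesis.

```python
import pandas as pd

# Assumed layout: one row per trial, with a subject id, a congruency
# label ("congruent" = word valence matches face valence), and RT in ms.
trials = pd.read_csv("stroop_trials.csv")  # hypothetical file

# Mean RT per subject in each congruency condition.
rt = (trials
      .groupby(["subject", "congruency"])["rt_ms"]
      .mean()
      .unstack("congruency"))

# Interference = slowdown when word and face valence conflict.
rt["interference_ms"] = rt["incongruent"] - rt["congruent"]
print(rt["interference_ms"].describe())
```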
92

Effects of Gender and Self-Monitoring on Observer Accuracy in Decoding Affect Displays

Spencer, R. Keith (Raymond Keith) 12 1900 (has links)
This study examined gender and self-monitoring as separate and interacting variables predicting judgmental accuracy on the part of observers of facial expressions of emotional categories. The main and interaction effects failed to reach significance in the preliminary analysis. However, post hoc analyses demonstrated a significant effect of encoder sex: female encoders of emotion were judged more accurately by both sexes. Additionally, when the stimulus was limited to female enactments of emotional categories, the hypothesized main and interaction effects yielded significant F values. This study utilized 100 observers and 10 encoders of seven emotional categories. Methodological considerations and alternatives are examined at length.
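The design described — observer gender and self-monitoring as separate and interacting predictors of judgmental accuracy — maps onto a standard two-way ANOVA. A minimal sketch with statsmodels; the file and column names are invented for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed layout: one row per observer, with decoding accuracy,
# observer sex, and a high/low self-monitoring grouping.
df = pd.read_csv("decoding_accuracy.csv")  # hypothetical file

# Main effects of sex and self-monitoring, plus their interaction.
model = smf.ols("accuracy ~ C(sex) * C(self_monitoring)", data=df).fit()
print(anova_lm(model, typ=2))
```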
93

A study of the temporal relationship between eye actions and facial expressions

Rupenga, Moses January 2017 (has links)
A dissertation submitted in fulfillment of the requirements for the degree of Master of Science in the School of Computer Science and Applied Mathematics, Faculty of Science, August 15, 2017 / Facial expression is one of the most common means of communication used to complement the spoken word. However, people have grown to master ways of exhibiting deceptive expressions. Hence, it is imperative to understand differences in expressions, mostly for security purposes among others. Traditional methods employ machine learning techniques to differentiate real and fake expressions. However, this approach does not always work, as human subjects can easily mimic real expressions with a bit of practice. This study presents an approach that evaluates the time-related distance that exists between eye actions and an exhibited expression. The approach gives insight into some of the most fundamental characteristics of expressions. The study focuses on finding and understanding the temporal relationship that exists between eye blinks and smiles. It further looks at the relationship that exists between eye closure and pain expressions. The study incorporates active appearance models (AAM) for feature extraction and support vector machines (SVM) for classification. It also tests extreme learning machines (ELM) in both the smile and pain studies, which attain better results than predominant algorithms like the SVM. The study shows that eye blinks are highly correlated with the beginning of a smile in posed smiles, while eye blinks are highly correlated with the end of a smile in spontaneous smiles. A high correlation is observed between eye closure and pain in spontaneous pain expressions. Furthermore, this study brings about ideas that lead to potential applications such as lie detection systems, robust health care monitoring systems and enhanced animation design systems, among others. / MT 2018
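An extreme learning machine of the kind tested here is simple to state: a random, untrained hidden layer followed by output weights solved in closed form by least squares. A minimal numpy sketch, assuming feature vectors (e.g., AAM parameters) have already been extracted; sizes and labels are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=200):
    """Fit an ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Solve H @ beta ~ y in the least-squares sense (closed form).
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: 100 AAM-like feature vectors with binary labels.
X = rng.normal(size=(100, 30))
y = (X[:, 0] > 0).astype(float)
W, b, beta = elm_train(X, y)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```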
94

Out-of-plane action unit recognition using recurrent neural networks

Trewick, Christine 20 May 2015 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of requirements for the degree of Master of Science. Johannesburg, 2015. / The face is a fundamental tool to assist in interpersonal communication and interaction between people. Humans use facial expressions to consciously or subconsciously express their emotional states, such as anger or surprise. As humans, we are able to easily identify changes in facial expressions even in complicated scenarios, but the task of facial expression recognition and analysis is complex and challenging for a computer. The automatic analysis of facial expressions by computers has applications in several scientific subjects such as psychology, neurology, pain assessment, lie detection, intelligent environments, psychiatry, and emotion and paralinguistic communication. We look at methods of facial expression recognition, and in particular, the recognition of Facial Action Coding System (FACS) Action Units (AUs). FACS encodes movements of individual facial muscles as slight, instantaneous changes in facial appearance; contractions of specific facial muscles are related to a set of units called AUs. We make use of Speeded Up Robust Features (SURF) to extract keypoints from the face and use the SURF descriptors to create feature vectors. SURF provides smaller feature vectors than other commonly used feature extraction techniques, is comparable to or outperforms other methods with respect to distinctiveness, robustness, and repeatability, and is much faster than other feature detectors and descriptors. The SURF descriptor is scale and rotation invariant and is unaffected by small viewpoint or illumination changes. We use the SURF feature vectors to train a recurrent neural network (RNN) to recognize AUs from the Cohn-Kanade database. An RNN is able to handle temporal data received from image sequences in which an AU or combination of AUs is shown to develop from a neutral face. We recognize AUs because they provide a fine-grained means of measurement that is independent of age, ethnicity, gender and differences in expression appearance. In addition to recognizing FACS AUs from the Cohn-Kanade database, we use our trained RNNs to recognize the development of pain in human subjects. We make use of the UNBC-McMaster pain database, which contains image sequences of people experiencing pain. In some cases, the pain results in the face moving out of plane, or in some degree of in-plane movement. The temporal processing ability of RNNs can assist in classifying AUs where the face is occluded or not facing frontally for some part of the sequence. Results are promising when tested on the Cohn-Kanade database. We see higher overall recognition rates for upper face AUs than for lower face AUs. Since keypoints are globally extracted from the face in our system, local feature extraction could provide improved recognition results in future work. We also see satisfactory recognition results when tested on samples with out-of-plane head movement, showing the temporal processing ability of RNNs.
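The pipeline described — per-frame SURF descriptors feeding a recurrent network that accumulates evidence across an image sequence — can be sketched as below. The sketch assumes 64-dimensional SURF descriptors have already been extracted (OpenCV exposes SURF only in its contrib module, where patent restrictions allow) and mean-pooled per frame; the layer sizes and AU count are illustrative, not the dissertation's configuration.

```python
import torch
import torch.nn as nn

class AUSequenceNet(nn.Module):
    """Plain RNN over per-frame descriptors -> multi-label AU scores."""
    def __init__(self, feat_dim=64, hidden=128, n_aus=12):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_aus)

    def forward(self, x):                # x: (batch, frames, feat_dim)
        out, _ = self.rnn(x)             # hidden state at every frame
        return self.head(out[:, -1])     # AU logits at the final frame

model = AUSequenceNet()
# Toy batch: 4 sequences of 20 frames of pooled SURF descriptors.
frames = torch.randn(4, 20, 64)
logits = model(frames)

# AUs can co-occur, so training is multi-label (one sigmoid per AU).
targets = torch.randint(0, 2, (4, 12)).float()
loss = nn.BCEWithLogitsLoss()(logits, targets)
loss.backward()
print(logits.shape, float(loss))
```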
95

Assimetrias nos reconhecimentos de expressões faciais entre hemicampos visuais de homens e mulheres / Asymmetries in recognizing facial expressions between visual hemifields by men and women

Kusano, Maria Elisa 17 April 2015 (has links)
Recognizing different emotional facial expressions is valuable for interpersonal relationships, yet there is no consensus on how this recognition process occurs. Studies suggest differences in the processing of facial expressions related to the valence of the emotion, functional asymmetry between the cerebral hemispheres, observer characteristics (manual dexterity, sex and diseases) and the stimulus exposure time. Using the divided visual field method combined with two-interval forced choice, we investigated the recognition of sad and happy faces in the left and right visual hemifields of 24 participants (13 women, 11 men), all right-handed adults with normal or better visual acuity. All were submitted to experimental sessions in which pairs of faces were presented successively, for 100 ms, to one of the visual hemifields, right or left, one face neutral and the other emotive (happy or sad), in random order. Each pair showed only masculine or only feminine faces, photographs of a single person, and the emotive face's intensity was chosen randomly among 11 levels of emotional intensity obtained by a computer graphic morphing technique. The participant's task was to choose, for each pair, the visual hemifield containing the more emotive face. The hit rates at each level of emotional intensity allowed estimating the parameters of psychometric curves fitted to the cumulative normal distribution for each visual hemifield of each individual. Statistical analysis of the hit rates, together with the parameters of the participants' psychometric curves, showed that hit rates were higher for happy faces than for sad ones. Moreover, while women performed symmetrically in recognizing happy and sad faces across the visual hemifields, men were asymmetric: they showed a left-hemifield advantage for recognizing the male face and a right-hemifield advantage for the female face. Differences in face recognition thus emerged, with interactions among participant sex, stimulus-face gender, emotional valence and cerebral hemisphere. This work partially supports the Right Hemisphere Theory and suggests that the type of experimental design used may be related to the performance difference between the sexes.
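Fitting a cumulative-normal psychometric function to the hit rates at each morph intensity is a standard step that can be sketched with scipy; the data below are invented, and mu (the point of subjective equality) and sigma (the slope) are the two fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """P(choosing the emotive face) as a cumulative normal of intensity."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Illustrative data: hit rates at 11 morph intensities for one
# participant in one visual hemifield.
intensity = np.linspace(0, 100, 11)
hit_rate = np.array([.08, .12, .20, .33, .45, .58, .70,
                     .81, .90, .95, .98])

(mu, sigma), _ = curve_fit(psychometric, intensity, hit_rate,
                           p0=[50.0, 15.0])
print(f"PSE mu = {mu:.1f}, slope sigma = {sigma:.1f}")
```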
96

Intensity based methodologies for facial expression recognition.

January 2001 (has links)
by Hok Chun Lo. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 136-143). / Abstracts in English and Chinese. / Table of contents: 1. Introduction (p.1) -- 2. Previous work on facial expression recognition: active deformable contour; facial feature points and B-spline curve; optical flow approach; Facial Action Coding System; neural network (p.9) -- 3. Eigen-analysis based method for facial expression recognition: related topics (terminologies; principal component analysis, its significance and a graphical presentation of the idea); the EigenFace method for face recognition; direct adoption of the EigenFace method and a multiple-subspaces method over a person-dependent database; detailed description of the approaches (database formation: image-to-column-vector conversion; preprocessing by scale regulation, orientation regulation and cropping; subspace calculation for both methods; recognition processes, including an intensity normalization algorithm and matching); experimental results and analysis (p.15) -- 4. Deformable template matching scheme for facial expression recognition: background knowledge (pinhole/perspective, orthographic and affine camera models; view synthesis and its technical issues); from view synthesis to template deformation; database formation (person-dependent database; model image acquisition; template structure and formation; selection of warping points and template anchor points); recognition process (solving the warping equation; template deformation; templates from input images; matching); implementation of an automated system with Kalman-filter tracking and its limitations; experimental results and analysis (p.53) -- 5. Conclusion and future work (p.97) -- Appendix: image samples 1-4 (p.100) -- Bibliography (p.136)
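The eigen-analysis approach outlined in Chapter 3 follows the classic eigenface recipe: stack face images as vectors, subtract the mean, keep the leading principal components, and match by distance in the resulting subspace. A minimal numpy sketch of the subspace construction and matching; image sizes and component counts are illustrative, and the thesis's intensity normalization and multiple-subspaces variants are not reproduced here.

```python
import numpy as np

def expression_subspace(images, n_components=20):
    """images: (n_samples, n_pixels) rows. Returns mean + components."""
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data yields the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]       # (n_components, n_pixels)

def project(image, mean, components):
    return components @ (image - mean)   # coordinates in the subspace

rng = np.random.default_rng(1)
faces = rng.normal(size=(50, 64 * 64))   # toy 64x64 face vectors
mean, comps = expression_subspace(faces)

# Match a probe face by nearest neighbour in subspace coordinates.
probe = project(faces[0], mean, comps)
gallery = np.array([project(f, mean, comps) for f in faces])
print("best match:", np.argmin(np.linalg.norm(gallery - probe, axis=1)))
```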
97

The context effect of emotion words on emotional face processing. / CUHK electronic theses & dissertations collection

January 2012 (has links)
Emotion perception of facial expressions is modulated by affective contexts. Emotion words, which are used to refer to discrete emotion categories, might also serve as a kind of context for emotion perception. The current study systematically explored the degree of automaticity and the time course of the context effect of emotion words with a modified priming paradigm. Experiment 1 demonstrated that emotion congruency between emotion words and emotional faces could modulate participants' performance on a gender judgment task, which did not require an explicit emotion judgment. In Experiments 2 and 3, the processing level of the emotion words was manipulated by task instruction. The context effect of emotion words was found only when participants deliberately memorized the emotion word (Experiment 2); it disappeared when participants memorized only the word's color (Experiment 3). With a simpler orientation judgment task, Experiment 4 demonstrated a congruency effect for happy faces only. Processing level of the emotion words also modulated this effect: a reliable congruency effect for happy faces was found only when word identities were explicitly processed (Experiments 5 and 7) but not in a superficial word-color task (Experiments 6 and 8). Experiment 9 explored the time course of the context effect of emotion words on face gender judgment with EEG recording. The mean amplitude of the N170 was enhanced in the incongruent condition compared with the congruent condition. In summary, (1) the integration of emotion words and emotional faces is modulated by task demands on the faces and by the processing level of the emotion words; (2) this integration may happen at the perceptual stage of face processing. / Yang, Lizhuang. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 108-116). / Abstract also in Chinese. / Table of contents: 1. Introduction: emotional face processing (general models of face processing; emotion perception of facial expressions); emotional faces in contexts (time course and automaticity of context effects); emotion words as context; aim, motivation, general methodology and overview of the current study (p.1) -- 2. The effect of emotion words on gender judgment: Experiments 1-3 and general discussion (the perceptual locus of the context effect; task demand on context) (p.17) -- 3. The effect of emotion words on orientation judgment: Experiments 4-8 and general discussion (p.41) -- 4. Context effect of emotion words: an ERP study (Experiment 9) (p.77) -- 5. General discussion: overview of results; the modified priming paradigm; automaticity and locus of the context effect of emotion words; limitations and future directions (p.89) -- 6. Conclusion (p.100) -- Appendices: face stimuli source; emotion categorization of faces; happy face advantage in the orientation experiments; summary of P1 and N170 measures in the face task (p.101) -- Bibliography (p.108)
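Computationally, the N170 comparison in Experiment 9 reduces to averaging epochs per condition and taking the mean amplitude in the N170 window at the electrodes of interest. A numpy sketch with simulated single-electrode epochs; the sampling rate and 150-200 ms window are assumed values, not the thesis's recording parameters.

```python
import numpy as np

fs = 500                                  # assumed sampling rate, Hz
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch time axis, seconds
window = (t >= 0.15) & (t <= 0.20)        # assumed N170 window

rng = np.random.default_rng(2)
# Simulated epochs, (n_trials, n_samples); the incongruent condition
# gets a more negative deflection (a larger N170) inside the window.
congruent = rng.normal(size=(80, t.size))
incongruent = rng.normal(size=(80, t.size)) - 0.3 * window

def n170_mean_amplitude(epochs):
    erp = epochs.mean(axis=0)             # trial average -> ERP
    return erp[window].mean()             # mean amplitude in the window

print("congruent:  ", n170_mean_amplitude(congruent))
print("incongruent:", n170_mean_amplitude(incongruent))
```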
98

About face: computergraphic synthesis and manipulation of facial imagery

Weil, Peggy January 1982 (has links)
Thesis (M.S.V.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ROTCH. VIDEODISC IN ARCHIVES AND ROTCH VISUAL COLLECTIONS. / Includes bibliographical references (leaves 87-90). / A technique of pictorially synthesizing facial imagery using optical videodiscs under computer control is described. Search, selection and averaging processes are performed on a catalogue of whole faces and facial features to yield a composite, expressive, recognizable face. An immediate application of this technique is the reconstruction of a particular face from memory for police identification; the project is therefore called IDENTIDISC. Part I, PACEMAKER, describes the production and implementation of the IDENTIDISC system to produce composite faces. Part II, EXPRESSIONMAKER, describes animation techniques that add expression and motion to composite faces. Expression sequences are manipulated to make 'anyface' make any face. Historical precedents of making facial composites and theories of facial recognition, classification and expression are also discussed. This thesis is accompanied by two copies of PACEMAKER-III, an optical videodisc produced at the Architecture Machine Group in 1982. The disc can be played on an optical videodisc player. Its length is approximately 15,000 frames. Frame numbers are indicated in the text by [ ]. / by Peggy Weil. / M.S.V.S.
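The averaging step at the core of the composite process corresponds, in modern software terms, to a pixel-wise mean over aligned face images; the original system did its compositing with videodisc frames under computer control, so the sketch below is an analogy rather than the thesis's implementation, and the file names are hypothetical.

```python
import numpy as np
from PIL import Image

def composite_face(paths):
    """Pixel-wise average of pre-aligned, same-sized grayscale faces."""
    stack = np.stack([
        np.asarray(Image.open(p).convert("L"), dtype=np.float64)
        for p in paths
    ])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

# Hypothetical catalogue of aligned face images.
composite = composite_face(["face1.png", "face2.png", "face3.png"])
composite.save("composite.png")
```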
99

Smiling and Snarling: Contextual-responsivity in emotional expression as a predictor of adjustment to spousal loss

Connolly, Philippa Sophie January 2019 (has links)
Why do some people experience more emotional distress than others after spousal death? And can we predict who will struggle more than others? While many will exhibit resilience in the wake of a bereavement, a small but notable portion, ranging from 7% to 10% (Maciejewski, Maercker, Boelen & Prigerson, 2016; Nielsen et al., 2017), experience a prolonged period of elevated symptoms and distress (Bonanno et al., 2007; Prigerson et al., 2009). Although there is marked individual variation in the grief course, little is yet known about the mechanisms underlying grief that endures, and why some people will struggle more than others after experiencing the death of a spouse. Compelling findings have linked deficits in emotion regulation with the development of psychopathology (Buss, Davidson, Kalin, & Goldsmith, 2004; Gehricke & Shapiro, 2000), and the study of one particular form of emotion regulation, contextually responsive emotional responding, may be particularly promising in predicting divergent individual differences in the grief course following the death of a spouse (Bonanno & Burton, 2013). Recent bereavement studies have provided preliminary evidence linking contextually responsive emotional expression to grief-related adjustment. However, these studies suffer from notable methodological limitations, such as the use of limited measures of emotional expression or cross-sectional designs. The current study will use a longitudinal design to investigate whether individual differences in emotional expressions of happiness and contempt, across varied contexts, can predict long-term adjustment and psychopathology. In addition, we will employ a standardized facial coding system to investigate contextually unresponsive facial behaviors, which we operationalize as the mismatch between facial expression of emotion and four systematically varying idiographic contexts.
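The operationalization in the final sentence suggests a simple score: the proportion of the four idiographic contexts in which the coded facial expression does not match the context's valence. A toy sketch of that scoring, with invented codes; the real study would derive the expression codes from a standardized facial coding system.

```python
# Hypothetical valence of the four systematically varying contexts,
# and the coded valence of the participant's expression in each.
context_valence = ["positive", "negative", "positive", "negative"]
expressed = ["positive", "positive", "positive", "negative"]

mismatches = sum(c != e for c, e in zip(context_valence, expressed))
unresponsiveness = mismatches / len(context_valence)
print(f"contextual unresponsiveness: {unresponsiveness:.2f}")  # 0.25
```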
100

Towards Man-Machine Interfaces: Combining Top-down Constraints with Bottom-up Learning in Facial Analysis

Kumar, Vinay P. 01 September 2002 (has links)
This thesis proposes a methodology for the design of man-machine interfaces by combining top-down and bottom-up processes in vision. From a computational perspective, we propose that the scientific-cognitive question of combining top-down and bottom-up knowledge is similar to the engineering question of labeling a training set in a supervised learning problem. We investigate these questions in the realm of facial analysis. We propose the use of a linear morphable model (LMM) for representing top-down structure and use it to model various facial variations such as mouth shapes and expression, the pose of faces and visual speech (visemes). We apply a supervised learning method based on support vector machine (SVM) regression for estimating the parameters of LMMs directly from pixel-based representations of faces. We combine these methods for designing new, more self-contained systems for recognizing facial expressions, estimating facial pose and for recognizing visemes.
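The regression step described — SVM regression estimating morphable-model parameters directly from pixel representations — can be sketched with scikit-learn by fitting one support-vector regressor per LMM parameter. The data shapes below are invented; in the thesis the targets would be LMM coefficients for mouth shape, pose or visemes.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# Toy training set: pixel vectors of cropped faces and the LMM
# parameters assumed to have generated them.
pixels = rng.normal(size=(200, 32 * 32))
lmm_params = rng.normal(size=(200, 5))

# One RBF support-vector regressor per morphable-model parameter.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))
model.fit(pixels, lmm_params)

estimated = model.predict(pixels[:3])
print(estimated.shape)   # (3, 5): estimated parameters for 3 faces
```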
