201 |
Análise de emoções em expressões faciais: veracidade das emoções e rastreio ocular
Busin, Yuri 06 February 2014 (has links)
Made available in DSpace on 2016-03-15T19:40:16Z (GMT). No. of bitstreams: 1
Yuri Busin.pdf: 1414328 bytes, checksum: b9fec72afdd3508a27b151307594f12c (MD5)
Previous issue date: 2014-02-06 / Emotions are reactions to significant events in individuals' lives; more specifically, they are responses involving an affective experience, and they may be expressed verbally or otherwise. The human face can express many emotions, enabling the exchange of information about affective states and facilitating interaction between individuals. Not all facial expressions are genuine, because social interactions involve lies and deceptions whose motives range from altruism to fear of punishment. Studies have identified brain regions associated with the identification and perception of each emotion, and have shown how cerebral laterality may influence recognition and expression. The left side of the face has been identified as the most informative for identifying emotions, revealing an asymmetry in the identification of facial expressions of emotion. The objective of this study was to evaluate the pattern of eye movements in tasks of judging real and falsified emotional expressions. Thirty-three subjects participated; they were instructed to watch and judge 96 movies, divided into 2 batteries, each containing 48 movies with true and false facial expressions of emotion. All the movies showed more of the left side of the face and were edited to have a mirror effect. The movies were presented on a computer with an eye-tracking device to record eye movements. The expressions presented were happiness, fear and sadness. Participants indicated whether the expression viewed was genuine or not. The results indicate that the left hemisphere of the brain is more involved in judging emotions from the face, and is also faster, and that genuine emotions yielded a higher hit rate. / Emotions are reactions to significant events in individuals' lives; more specifically, responses involving an affective experience, which may be expressed verbally or not. 
The human face can express many emotions, enabling the exchange of information about affective states and facilitating interaction between individuals. Not all facial expressions are genuine, since social interactions involve lies and deceptions that range from altruistic motives to lying out of fear of punishment. Studies show the brain regions involved in the identification and feeling of each emotion, and how cerebral laterality may influence their recognition and expression. Accordingly, the left side of the face has been identified as more informative for identifying emotions, revealing facial asymmetry in the identification of emotional expressions. The objective of the study was to evaluate the pattern of eye movements in tasks of judging real and falsified emotional expressions. Thirty-three subjects took part in the study; they were instructed to watch groups of videos, each containing 48 videos with falsified and genuine expressions. The videos show a larger left side of the face, and were edited so as to present the emotions both mirrored and normal. All videos were presented on a computer with eye-tracking equipment to track eye movements. The expressions presented were happiness, fear and sadness. Participants indicated whether the expression seen was genuine or not. The results indicate that the right visual field is used more, and is faster, for judging emotions, implying greater activation of the left cerebral hemisphere. Judgment of genuine emotions obtained a higher hit rate.
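The hit-rate comparison reported above (genuine vs. falsified expressions) can be sketched as a simple per-condition accuracy computation. The trial records below are invented illustrations, not the study's data:

```python
# Sketch: per-condition hit rates for a genuine-vs-falsified judgment task.
# The trial records below are hypothetical, not the study's data.

def hit_rate(trials, condition):
    """Proportion of correct judgments among trials of the given condition."""
    subset = [t for t in trials if t["condition"] == condition]
    correct = sum(t["judged_genuine"] == (t["condition"] == "genuine")
                  for t in subset)
    return correct / len(subset)

trials = [
    {"condition": "genuine", "judged_genuine": True},
    {"condition": "genuine", "judged_genuine": True},
    {"condition": "genuine", "judged_genuine": True},
    {"condition": "falsified", "judged_genuine": True},
    {"condition": "falsified", "judged_genuine": True},
    {"condition": "falsified", "judged_genuine": False},
]

genuine_acc = hit_rate(trials, "genuine")      # all 3 genuine trials judged correctly
falsified_acc = hit_rate(trials, "falsified")  # only 1 of 3 falsified trials judged correctly
```

A judgment counts as correct when the participant calls a genuine expression genuine or a falsified one false, which is what the equality test inside `hit_rate` encodes.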
|
202 |
The Affective PDF Reader
Radits, Markus January 2010 (has links)
The Affective PDF Reader is a PDF reader combined with affect recognition systems. The aim of the project is to investigate a way to give the reader of a PDF real-time visual feedback while reading, in order to influence the reading experience in a positive way. The visual feedback reflects the analyzed emotional state of the person reading the text, which is captured and interpreted by a facial expression recognition system. Further enhancements would include incorporating voice analysis into the computation, as well as gaze tracking software, so that the point of gaze can be used when rendering the visualizations. The idea of the Affective PDF Reader arose mainly from the observation that the way we read text on computers, mostly with frozen, dozed-off faces, is an unsatisfactory and rather lonesome process with poor communication. This work is also inspired by the significant progress in recognizing emotional states from video and audio signals and the new possibilities that arise from it. The prototype system provided visualizations of footprints in different shapes and colours, controlled by captured facial expressions, to enrich the textual content with affective information. The experience showed that visual feedback driven by facial expressions can add another dimension to the reading experience, provided the feedback is frugal and non-intrusive, and that the involvement of users can be enhanced.
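The core feedback loop described above maps a recognised emotional state to a footprint visualisation. A minimal sketch of that mapping, assuming hypothetical emotion labels and styles (the thesis does not specify this exact palette):

```python
# Sketch of the feedback mapping: a recognised emotional state selects the
# colour and shape of the footprint visualisation. The labels and styles
# here are hypothetical assumptions, not the thesis's actual palette.

FOOTPRINT_STYLES = {
    "joy":     {"colour": "#f5c518", "shape": "round"},
    "sadness": {"colour": "#3a6ea5", "shape": "narrow"},
    "anger":   {"colour": "#c0392b", "shape": "heavy"},
    "neutral": {"colour": "#9e9e9e", "shape": "faint"},
}

def footprint_for(emotion):
    """Return a rendering style for the recognised emotion, defaulting to neutral."""
    return FOOTPRINT_STYLES.get(emotion, FOOTPRINT_STYLES["neutral"])

style = footprint_for("joy")
```

Defaulting to a faint neutral style keeps the feedback non-intrusive when the recogniser reports an unknown or uncertain state, matching the "frugal" design goal the abstract mentions.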
|
203 |
The Role of Teacher-Child Verbal and Nonverbal Prompts in Kindergarten Classrooms in Ghana
Osafo-Acquah, Aaron 22 June 2017 (has links)
While previous studies have examined the educational system in Ghana, few if any had explored participation and engagement through teacher-child interactions in early childhood education in Ghanaian classrooms (Twum-Danso, 2013). The purpose of this video-based, multiple-case qualitative study of three Kindergarten classrooms in Cape Coast in the Central Region of Ghana was to identify verbal and nonverbal prompts related to children’s participation in Ghanaian Kindergarten classroom settings. The data for the study were secondary, having been collected by a team of researchers for the New Civics Grant Program in an initial study of apprenticeship and civic themes in Ghanaian Kindergarten classrooms. The design was a qualitative video analysis of three early childhood centers in Cape Coast in the Central Region of Ghana, using video cameras to capture classroom interactions in order to answer the questions: What is the nature of Ghanaian Kindergarten teachers’ verbal and non-verbal prompts that relate to children’s participation during the instructional process? In what ways do children in Ghanaian Kindergartens participate during the instructional process?
I applied the sociocultural perspective of Rogoff’s (1990, 1993, 2003) three foci of analysis, which provided a useful conceptual tool for analyzing research with young children (Robbin, 2007). This perspective highlights how children’s thinking is integrated with and constituted by contexts, collaboration, and signs and cultural tools (p. 48). The findings indicated that the Ghanaian Kindergarten teachers’ verbal and nonverbal prompts related to children’s participation during the instructional process included the use of questions, appreciation, and gestures. The findings also showed that the ways in which Ghanaian Kindergarten children participated during the instructional process were verbal/oral responses, doing exercises and activities, and using gestures. It was also found that pedagogical attitudes such as pedagogical sensitivity and understanding, discussion and conversation, and rules and management related to children’s participation during the instructional process.
Ghanaian-specific, culturally relevant ways and practices of interaction between teachers and children were observed in the participant schools. Teachers used silence and eyeing to convey messages of disapproval to the children, and used punishments and rewards either to encourage good behavior or to stop bad behavior. Singing and dancing, building classroom community, and valuing interpersonal connections were also found to be Ghanaian-specific, culturally relevant ways of interaction that teachers applied in the classroom. All the teachers in the participant schools showed various forms of appreciation to the children as a way of reinforcing their behaviors, and also for praise and redirection of attention.
From the findings of the study, the following recommendations are made:
1. Pre-service teacher preparation, and teacher education in general, should be reorganized so that the contexts in which teachers operate are guided by contextually relevant pedagogy (Young, 2010). Ghana needs a pedagogy that empowers teachers intellectually, socially, emotionally, and politically by using cultural referents to impart knowledge, skills, and attitudes (p. 248).
2. The provision of adequate teaching and learning materials would enable teachers to engage children more in exercises and activities, and would help teachers provide enough activities to hold the children’s attention during the instructional process.
3. Ghanaian specific culturally relevant ways of interactions between teachers and children must be taught as a course at the University of Cape Coast to help in the preparation of pre-service teachers.
|
204 |
Autonomous facial expression recognition using the facial action coding system
de la Cruz, Nathan January 2016 (has links)
Magister Scientiae - MSc / The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness and surprise, plus the neutral expression), or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, and combinations of them are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole-expression recognition with Action Unit recognition to achieve an enhanced classification approach.
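The hybrid idea of combining Action Units into whole expressions can be sketched as a small lookup over AU prototypes. The combinations below are commonly cited approximations (e.g., happiness as AU6 + AU12), not the thesis's trained models, and a real system would use learned classifiers rather than exact overlap scoring:

```python
# Sketch: detected Action Units vote for the whole expression they are
# typically associated with. The AU prototypes are commonly cited
# approximations, not the thesis's actual classifier.

PROTOTYPES = {
    "happiness": {6, 12},
    "surprise": {1, 2, 5, 26},
    "sadness": {1, 4, 15},
}

def expression_from_aus(active_aus):
    """Return the prototype expression whose AU set best overlaps the input."""
    best, best_score = "neutral", 0.0
    for label, proto in PROTOTYPES.items():
        score = len(active_aus & proto) / len(proto)
        if score > best_score:
            best, best_score = label, score
    return best

label = expression_from_aus({6, 12})  # cheek raiser + lip corner puller
```

Scoring by fractional overlap, rather than exact match, tolerates partially detected AU sets, which is one reason a hybrid AU-based approach can be more robust than whole-expression matching alone.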
|
205 |
An experimental investigation of social cognitive mechanisms in Asperger Syndrome and an exploration of potential links with paranoia
Jänsch, Claire January 2011 (has links)
Background: Social cognitive deficits are considered central to the interpersonal problems experienced by individuals with a diagnosis of Asperger syndrome, but the existing research evidence regarding mentalising ability and emotion recognition ability is difficult to interpret and inconclusive. Individuals with Asperger syndrome experience higher levels of mental health problems than the general population, including depression, general anxiety and anxiety-related disorders. Clinical accounts have described symptoms of psychosis in individuals with autism spectrum disorders, including Asperger syndrome, and a number of research studies have reported elevated levels of delusional beliefs in this population. Investigations of social cognition in psychosis have highlighted a number of impairments in abilities such as mentalising and emotion recognition, as well as data-gathering and attribution biases that may be related to delusional beliefs. Similarly, a number of factors, including theory of mind difficulties, self-consciousness and anxiety, have been associated with delusional beliefs in individuals with Asperger syndrome, but there is a lack of agreement in the existing research. A preliminary model of delusional beliefs in Asperger syndrome has previously been proposed, which needs to be tested further and potentially refined. The current study aimed to further investigate social cognitive mechanisms in individuals with Asperger syndrome and to explore potential links with the development of paranoia. Method: Participants with a diagnosis of Asperger syndrome were recruited through a number of voluntary organisations and completed screening measures, the Autism Spectrum Quotient and the Wechsler Abbreviated Scale of Intelligence, to ensure their suitability for the study. Participants in the control group were recruited through the university and local community resources and were matched group-wise with the Asperger syndrome group for age, sex and IQ scores. 
The study compared the Asperger syndrome group (N=30) with the control group (N=30) with regard to their performance on four experimental tasks and their responses on a number of self-report questionnaires that were delivered as an online survey. The experimental tasks included two theory of mind measures, one designed to assess mental state decoding ability (the Reading the Mind in the Eyes Test) and one designed to assess mental state reasoning ability (the Hinting Task). The recognition of emotions was evaluated through the Facial Expression Recognition Task. The Beads Task was administered to assess data-gathering style and specifically to test for Jumping to Conclusions biases. The self-report questionnaires were employed to measure levels of depression, general anxiety, social anxiety, self-consciousness and paranoid thoughts. Results: The Asperger syndrome group performed less well than the control group on tasks measuring mental state decoding ability, mental state reasoning ability and the recognition of emotion in facial expressions. Additionally, those with Asperger syndrome tended to make decisions on the basis of less evidence and half of the group demonstrated a Jumping to Conclusions bias. Higher levels of depression, general anxiety, social anxiety and paranoid thoughts were reported in the AS group and levels of depression and general anxiety were found to be associated with levels of paranoid thoughts. Discussion: The results are considered in relation to previous research and revisions are proposed for the existing model of delusional beliefs in Asperger syndrome. A critical analysis of the current study is presented, implications for clinical practice are discussed and suggestions are made for future research.
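The Beads Task result mentioned above can be scored with a simple rule. A common convention, assumed here (the thesis may use a different cutoff), is that a participant shows a Jumping to Conclusions bias when they decide after drawing two or fewer beads:

```python
# Sketch: scoring a Jumping to Conclusions (JTC) bias from Beads Task data,
# assuming the common two-or-fewer-draws convention. The draws-to-decision
# values below are invented for illustration.

JTC_THRESHOLD = 2

def jtc_proportion(draws_to_decision):
    """Fraction of participants who decided at or below the JTC threshold."""
    return sum(d <= JTC_THRESHOLD for d in draws_to_decision) / len(draws_to_decision)

group_draws = [1, 2, 5, 8, 2, 9]  # hypothetical per-participant draw counts
proportion = jtc_proportion(group_draws)  # 3 of 6 decide early
```

With these invented counts the proportion comes out at one half, mirroring the abstract's finding that half of the Asperger syndrome group demonstrated the bias.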
|
206 |
An investigation of young infants’ ability to match phonetic and gender information in dynamic faces and voice
Patterson, Michelle Louise 11 1900 (has links)
This dissertation explores the nature and ontogeny of infants' ability to match phonetic information, in comparison to non-speech information, in the face and voice. Previous research shows that infants' ability to match phonetic information in face and voice is robust at 4.5 months of age (e.g., Kuhl & Meltzoff, 1982; 1984; 1988; Patterson & Werker, 1999). These findings support claims that young infants can perceive structural correspondences between audio and visual aspects of phonetic input and that speech is represented amodally. It remains unclear, however, specifically what factors allow speech to be perceived amodally and whether the intermodal perception of other aspects of face and voice is like that of speech. Gender is another biologically significant cue that is available in both the face and voice. In this dissertation, nine experiments examine infants' ability to match phonetic and gender information with dynamic faces and voices.
Infants were seated in front of two side-by-side video monitors which displayed filmed images of a female or male face, each articulating a vowel sound (/a/ or /i/) in synchrony. The sound was played through a central speaker and corresponded with one of the displays but was synchronous with both. In Experiment 1, 4.5-month-old infants did not look preferentially at the face that matched the gender of the heard voice when presented with the same stimuli that produced a robust phonetic matching effect. In Experiments 2 through 4, vowel and gender information were placed in conflict to determine the relative contribution of each to infants' ability to match bimodal information in the face and voice. The age at which infants do match gender information with my stimuli was determined in Experiments 5 and 6. In order to explore whether matching phonetic information in face and voice is based on featural or configural information, two experiments examined infants' ability to match phonetic information using inverted faces (Experiment 7) and upright faces with inverted mouths (Experiment 8). Finally, Experiment 9 extended the phonetic matching effect to 2-month-old infants. The experiments in this dissertation provide evidence that, at 4.5 months of age, infants are more likely to attend to phonetic information in the face and voice than to gender information. Phonetic information may have a special salience and/or unity that is not apparent in similar but non-phonetic events. The findings are discussed in relation to key theories of perceptual development. / Faculty of Arts / Department of Psychology / Graduate
|
207 |
Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition
Vadapalli, Hima Bindu January 2011 (has links)
Philosophiae Doctor - PhD / This research investigated the application of recurrent neural networks (RNNs) to the recognition of facial expressions based on the Facial Action Coding System (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of the action units (AUs) defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data, while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking of feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple-output RNNs, and study of difference images as an approach to performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, with a single RNN/SVM classifier used for each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences were 82.75% and 7.61%, respectively, while classification using single static images yielded a RR and FAR of 79.47% and 9.22%, respectively. The better performance with image sequences can be attributed to RNNs' ability, as stated above, to extract knowledge from time-series data. Subsequent research then investigated benchmarking dimensionality reduction, feature selection and network optimization techniques, in order to improve on the performance obtained with image sequences. 
Results showed that an optimized network, using weight decay, gave the best RR and FAR of 85.38% and 6.24%, respectively. The next study examined the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple-output RNN. Results indicated that high inter-AU correlations do in fact help classification models gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time. This suggests the need for a larger database of AUs, providing both individual AUs and AU combinations for further investigation. The final part of this research investigated the use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with RR and FAR of 87.95% and 3.45%, respectively, which is shown to be a significant improvement over normal image sequences classified using RNNs. In conclusion, the research demonstrates that the use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
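The difference-image step described above can be sketched in a few lines: pixel-wise absolute differences between consecutive frames suppress static background and keep the moving facial regions. The toy 2x2 frames below are illustrative, not Cohn-Kanade data, and the thesis applied Gabor filtering and RNN classification on top of such sequences:

```python
# Sketch: building a difference-image sequence. Each output frame is the
# pixel-wise |frame[t] - frame[t-1]|; static pixels become zero, moving
# pixels survive. Frames are plain nested lists for illustration.

def difference_sequence(frames):
    """Return the absolute difference of each consecutive pair of frames."""
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        diffs.append([[abs(c - p) for p, c in zip(prow, crow)]
                      for prow, crow in zip(prev, curr)])
    return diffs

frames = [
    [[10, 10], [10, 10]],  # t = 0
    [[10, 30], [10, 10]],  # t = 1: one pixel (a mouth corner, say) moves
    [[10, 30], [10, 10]],  # t = 2: no further motion
]
diffs = difference_sequence(frames)
```

Note that a sequence of n frames yields n-1 difference images, and frames with no motion between them produce all-zero images, which is the noise and feature reduction the abstract refers to.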
|
208 |
Reconhecimento de emoções faciais em crianças e adolescentes com epilepsia de lobo temporal / Facial emotion recognition in children and adolescents with temporal lobe epilepsy
Lunardi, Luciane Lorencetti, 1983- 27 August 2018 (has links)
Orientadores: Marilisa Mantovani Guerreiro, Catarina Guimaraes Abraão, Daniel Fuentes Moreira / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Ciências Médicas / Made available in DSpace on 2018-08-27T05:34:44Z (GMT). No. of bitstreams: 1
Lunardi_LucianeLorencetti_D.pdf: 2173561 bytes, checksum: a55f3655c3270f627ab854a67313743a (MD5)
Previous issue date: 2015 / Resumo: Emotion plays an essential role in an individual's everyday life. Understanding our own emotions and recognizing the facial emotions expressed by others is an important social cognition skill. The aims of the present study were to investigate the facial emotion recognition ability of children and adolescents with temporal lobe epilepsy (TLE), to verify the influence of clinical variables on this ability, and to examine the relationship between facial emotion recognition and decision making. To this end, we compared the performance of 10 children and adolescents with TLE, 16 children and adolescents with rolandic epilepsy (RE) and 16 healthy children and adolescents, aged 7 to 16 years and with IQ > 80. Fear recognition was worse in patients with TLE than in the RE group (p = 0.00); the laterality of the epileptogenic focus influenced fear recognition (p = 0.04); and the better the performance on the decision-making test, the better the facial emotion recognition ability (p = 0.02). Studies on aspects of social cognition are important tools for improving clinical practice, as they can guide future interventions and thus improve the quality of life of patients with TLE / Abstract: Emotion has an essential role in everyday life. Understanding our own emotions and recognizing the facial emotions expressed by others is important for social cognition. The aims of this study were to investigate the ability of children and adolescents with temporal lobe epilepsy (TLE) to recognize facial emotions, to investigate the influence of clinical variables on this ability, and to examine the relationship between facial emotion recognition and decision making. For this purpose, we compared the performance of 10 children and adolescents with TLE, 16 children and adolescents with rolandic epilepsy (RE) and 16 healthy children and adolescents, aged 7-16 years and with IQ > 80. 
We found that fear recognition was worse in patients with TLE than in the RE group (p = 0.00); the laterality of the epileptogenic focus influenced fear recognition (p = 0.04); and the better the performance on the decision-making test, the better the facial emotion recognition ability (p = 0.02). Studies on aspects of social cognition are important tools for improving clinical practice, as they can guide future interventions and thus improve the quality of life of patients with TLE / Doutorado / Ciencias Biomedicas / Doutora em Ciências Médicas
|
209 |
Rozpoznávání výrazu tváře u neznámých osob / Facial features recognition of unknown persons
Bartončík, Michal January 2011 (has links)
This paper describes the various components and phases of the search for and recognition of facial expressions of unknown persons, and presents possible solutions and methods for addressing each phase of the project. My master’s thesis is designed to recognize facial expressions of unknown persons; for this work I was lent an industrial video camera, a computer, and a place in a laboratory. The thesis then introduces colour spaces and their use, and from the leading candidates selects the most appropriate one for use with Matlab and the proposed algorithm. Once a suitable colour space is found, skin colour is segmented in the image. Skin, however, covers the whole body, so the separated skin-coloured regions of the image must be searched to find the face. Once a face is found, relevant points must be located whose subsequent deformation can be used to define facial expressions. The actual muscle movements in the different expressions are defined here.
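The skin-segmentation step described above is often done by thresholding chroma components. A minimal sketch, assuming a YCbCr-based rule with the widely cited heuristic ranges (Cb in [77, 127], Cr in [133, 173]); the thesis's actual colour space and thresholds may differ:

```python
# Sketch: per-pixel skin-colour test in the YCbCr colour space. The chroma
# thresholds are a common heuristic, not the thesis's exact values; the
# RGB-to-chroma conversion follows the ITU-R BT.601 coefficients.

def rgb_to_cbcr(r, g, b):
    """Convert 8-bit RGB to the Cb and Cr chroma components (BT.601)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def is_skin(r, g, b):
    """Classify a pixel as skin if its chroma falls in the heuristic ranges."""
    cb, cr = rgb_to_cbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

skin_pixel = is_skin(220, 160, 130)  # a light skin tone
grass_pixel = is_skin(40, 160, 40)   # saturated green
```

Working in chroma alone makes the rule largely independent of brightness, which is why YCbCr is a popular choice for skin segmentation over raw RGB.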
|
210 |
The Effects of 3D Characters’ Facial Expressions on Student Motivation to Learn Japanese in a game-based environment
Dixuan Cui (8782253) 01 May 2020 (has links)
Previous research has shown that student-teacher interaction is very important in motivating students to learn a second language. However, it is unclear whether facial expression, one of the most important components of interaction, affects in-game language learning motivation. The purpose of this study is to find evidence demonstrating whether the facial expressions of the other party, in this case virtual characters in a game, will or will not influence the learning motivation of Japanese L2 students. The researchers of this study developed four versions of a 3D animated Japanese role-playing game. Each version of the game presents one facial expression: neutral, happy, sad or angry. The research consists of two experiments: a validation study and a motivation study. After all the facial expressions of the five main characters in the game were validated, eighty-four college students from 200/300 level Japanese courses joined the motivation study voluntarily. They played a version of the game assigned to them at random and then completed a post-questionnaire. Conclusions were drawn from the survey results. The findings of this research suggest that virtual characters’ facial expressions in game had no significant effect on participants’ learning motivation. However, significant effects were found for years of learning Japanese and gender. Meanwhile, facial expression and years of learning Japanese were found to have an interactive effect on the variable immersion into game.
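The interaction effect reported above (facial expression by years of learning Japanese on immersion) can be illustrated with cell means: with two levels per factor, a non-zero difference of differences signals an interaction. The scores and level labels below are invented for illustration, not the study's data:

```python
# Sketch: a two-factor interaction as a difference of differences between
# cell means. All values and level labels here are hypothetical.

cell_means = {
    ("happy", "1st_year"): 5.8,
    ("happy", "2nd_year"): 4.1,
    ("angry", "1st_year"): 3.9,
    ("angry", "2nd_year"): 4.0,
}

def interaction_contrast(means):
    """(happy - angry) effect among 1st-years minus the same effect among 2nd-years."""
    first = means[("happy", "1st_year")] - means[("angry", "1st_year")]
    second = means[("happy", "2nd_year")] - means[("angry", "2nd_year")]
    return first - second

contrast = interaction_contrast(cell_means)
```

A contrast near zero would mean the expression effect is the same at both experience levels (no interaction); the sizeable value here illustrates an expression effect that holds for one group but not the other.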
|