101 |
Reconhecimento automático de expressões faciais gramaticais na língua brasileira de sinais / Automatic Recognition of Grammatical Facial Expressions from Brazilian Sign Language (Libras). Freitas, Fernando de Almeida, 16 March 2015 (has links)
Facial expression recognition has attracted considerable attention from researchers over the last decades, mainly because of its potential applications. In sign languages, which are visual-spatial languages that cannot rely on the prosodic support of speech intonation, facial expressions take on even greater importance, since they also help to form the grammatical structure of the language. Such expressions, called Grammatical Facial Expressions, operate at the morphological and syntactic levels of sign languages. They are especially relevant to automatic sign language recognition because they help to disambiguate signs that share similar parameters, such as handshape and place of articulation, and because they contribute to the semantic meaning of sentences. This master's research project therefore aims to develop a set of pattern recognition models capable of automatically interpreting Grammatical Facial Expressions used in Brazilian Sign Language (Libras) at the syntactic level.
|
102 |
Rapid Facial Reactions to Emotionally Relevant Stimuli. Thunberg, Monika, January 2007 (has links)
The present thesis investigated the relationship between rapid facial muscle reactions and emotionally relevant stimuli. In Study I, it was demonstrated that angry faces elicit increased Corrugator supercilii activity, whereas happy faces elicit increased Zygomaticus major activity, as early as within the first second after stimulus onset. In Study II, during the first second of exposure, pictures of snakes elicited more corrugator activity than pictures of flowers. However, this effect was apparent only for female participants. Study III showed that participants high as opposed to low in fear of snakes respond with increased corrugator activity, as well as increased autonomic activity, when exposed to pictures of snakes. In Study IV, participants high as opposed to low in speech anxiety responded with a larger difference in corrugator responding between angry and happy faces, and also with a larger difference in zygomatic responding between happy and angry faces, indicating that people high in speech anxiety have an exaggerated facial responsiveness to social stimuli. In summary, the present results show that the facial EMG technique is sensitive to detecting rapid emotional reactions to different emotionally relevant stimuli (human faces and snakes). Additionally, they demonstrate the existence of differences in rapid facial reactions among groups for which the emotional relevance of the stimuli can be considered to differ.
|
104 |
Semantic Framing of Speech : Emotional and Topical Cues in Perception of Poorly Specified Speech. Lidestam, Björn, January 2003 (has links)
The general aim of this thesis was to test the effects of paralinguistic (emotional) and prior contextual (topical) cues on perception of poorly specified visual, auditory, and audiovisual speech. The specific purposes were to (1) examine whether facially displayed emotions can facilitate speechreading performance; (2) study the mechanism for such facilitation; (3) map information-processing factors that are involved in processing of poorly specified speech; and (4) present a comprehensive conceptual framework for speech perception, with the specification of the signal being considered. Experimental and correlational designs were used, and 399 normal-hearing adults participated in seven experiments. The main conclusions are summarised as follows. (a) Speechreading can be facilitated by paralinguistic information as constituted by facially displayed emotions. (b) The facilitatory effect of emitted emotional cues is mediated by their degree of specification in transmission and their ambiguity as percepts, and by how distinct the perceived emotions, combined with topical cues, are as cues for lexical access. (c) Facially displayed emotions affect speech perception by conveying semantic cues; neither enhanced articulatory distinctiveness nor an emotion-related state in the perceiver is needed for facilitation. (d) The combined findings suggest that emotional and topical cues constrain activation spreading in the lexicon. (e) Both bottom-up and top-down factors are associated with perception of poorly specified speech, indicating that variation in information-processing abilities is a crucial factor for perception when sensory input is impoverished. A conceptual framework for speech perception, comprising the specification of linguistic and paralinguistic information as well as the distinctiveness of primes, is presented. Generalisations of the findings to other forms of paralanguage and language processing are discussed.
|
105 |
Expecting Happy Women, Not Detecting the Angry Ones : Detection and Perceived Intensity of Facial Anger, Happiness, and Emotionality. Pixton, Tonya S., January 2011 (has links)
Faces provide cues for judgments regarding the emotional state of individuals. Using signal-detection methodology and a standardized stimulus set, the overall aim of the present dissertation was to investigate the detection of emotional facial expressions (i.e., angry and happy faces) with neutral expressions as the nontarget stimuli. Study I showed a happy-superiority effect and a bias towards reporting happiness in female faces. As work progressed, questions arose regarding whether the emotional stimuli were equal with regard to perceived strength of emotion, and whether the neutral faces were perceived as neutral. To further investigate the effect of stimulus quality on the obtained findings, Study II was designed such that the facial stimuli were rated on scales of happy-sad, angry-friendly, and emotionality. Results showed that ‘neutral’ facial expressions were not rated as neutral, and that there was a greater perceived distance between happy and neutral faces than between angry and neutral faces. These results were used to adjust the detectability measures to compensate for the varying distances of the angry and happy stimuli from the neutral stimuli in the emotional space. The happy-superiority effect was weakened, while an angry-female disadvantage remained. However, as these results were based upon different participant groups for detection and emotional rating, Study III was designed to investigate whether the results from Studies I and II could be replicated in a design where the same participants performed both tasks. Again, the results showed the non-neutrality of ‘neutral’ expressions and that happiness was more easily detected than anger, as shown in general emotion as well as specific emotion detection. 
Taken together, the overall results of the present dissertation demonstrate a happy-superiority effect that was greater for female than for male faces, that angry female faces were the most difficult to detect, and a bias towards reporting female faces as happy. / At the time of the doctoral defense, the following papers were unpublished: Paper 1: in press; Paper 2: manuscript; Paper 3: manuscript.
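The detection analyses described above use standard equal-variance signal detection theory, in which detectability and response bias are computed from z-transformed hit and false-alarm rates. A minimal sketch follows; the rates used here are invented for illustration, not taken from the dissertation:

```python
from statistics import NormalDist


def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance Gaussian SDT: d' = z(H) - z(FA), c = -(z(H) + z(FA)) / 2.

    A larger d' means the target is easier to detect; a negative criterion c
    indicates a liberal bias (e.g., readily reporting 'happy').
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion


# Hypothetical rates: happy faces detected more often than angry ones.
d_happy, c_happy = sdt_measures(0.90, 0.10)
d_angry, c_angry = sdt_measures(0.70, 0.20)
```

With these invented rates, d′ comes out larger for happy than for angry targets, mirroring the happy-superiority pattern reported above; rescaling the underlying rates is also how detectability measures can be adjusted for unequal perceived distances from the neutral stimuli.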
|
106 |
[en] SOCIO-CONTEXTUAL COGNITION IN VICARIOUS EMOTIONAL REACTIONS / [pt] COGNIÇÃO SÓCIO-CONTEXTUAL EM REAÇÕES EMOCIONAIS VICÁRIAS. Bruno Maciel de Carvalho Pinto Salles, 11 December 2018 (has links)
Recent findings suggest that social and contextual cues can moderate responses to other people's emotions. The present work investigated socio-contextual cognition in vicarious emotional reactions. It examined whether convergent and divergent responses depend on group membership, gaze direction, and the emotion shown by the displayer, and whether degree of closeness moderates aversive and compassionate responses to others' suffering. These emotional variables were assessed through self-report, facial expressions, eye tracking, and pupil dilation. The findings support theories of social cognition and its effects on emotion and empathy.
|
107 |
Potenciais evocados relacionados à integração semântica entre estímulos musicais e faces em pessoas com alto desempenho musical / Event-related potentials associated with semantic integration between musical stimuli and faces in people with high musical proficiency. Rocha, Viviane Cristina da, 01 January 2013 (has links)
Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). With modern brain-imaging and electrophysiological techniques, it is possible to better understand how the brain functions while listening to or making music, without invasive neurological procedures. This study investigates how emotional cues in melodies are integrated with emotional cues in faces, and how musical training influences this task. Thirty-two adults, speakers of Brazilian Portuguese, aged 21 to 35, took part in the experiments, divided into two groups: G1, professional classical singers, and G2, people with no musical training. Both groups completed two experiments. Experiment 1 used 80 different melodies, composed specifically for this study and sung without words (in vocalise) by a professional female singer: 40 related to happiness and 40 to sadness. Each musical excerpt was followed by a female face expressing happiness or sadness, and participants judged whether the face was congruent or incongruent with the preceding excerpt. In Experiment 2, 20 of the participants performed a similar task, but judged whether the face was congruent or incongruent with a word presented beforehand; 40 words were used (20 related to happiness and 20 to sadness). Behavioural data (task performance and reaction time) and electrophysiological data (the mean amplitudes of the evoked potentials P1, N1, N170, EPN, and N2, analysed individually) were submitted to repeated-measures ANOVA with error set at α = 5%. No group effects were found in the behavioural data. In the electrophysiological data, however, group effects were found for the P1, N1, EPN, and N2 components in the first experiment only, suggesting a possible priming influence related to the participants' musical experience.
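The component amplitudes analysed in the study above are conventionally obtained by averaging the ERP waveform within a fixed latency window for each component. A minimal sketch of that extraction step; the window limits and sample values below are illustrative assumptions, not the study's actual parameters:

```python
def mean_amplitude(times_ms, voltages_uv, window_ms):
    """Mean amplitude (microvolts) of an averaged ERP waveform
    within a latency window given in milliseconds (inclusive bounds)."""
    lo, hi = window_ms
    inside = [v for t, v in zip(times_ms, voltages_uv) if lo <= t <= hi]
    if not inside:
        raise ValueError("no samples inside the latency window")
    return sum(inside) / len(inside)


# Toy averaged waveform sampled every 50 ms (values are invented).
times = [0, 50, 100, 150, 200, 250]
volts = [0.0, 1.5, 3.0, -2.0, -4.0, -1.0]

# A 150-200 ms window is typical for a face-sensitive negativity
# such as the N170 analysed in the study.
n170_amp = mean_amplitude(times, volts, (150, 200))  # (-2.0 + -4.0) / 2 = -3.0
```

In practice the per-component mean amplitudes computed this way, one per participant and condition, are exactly the values that enter the repeated-measures ANOVA described above.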
|
108 |
Using player's facial emotional expressions as a game input : Effects on narrative engagement. Turečková, Šárka, January 2016 (has links)
Even self-generated facial expressions could hypothetically affect emotional engagement with a narrative, and with it possibly other aspects of narrative engagement. This thesis evaluates the effects of using players' facial emotional expressions as a game decision input in a social situation on narrative engagement and its dimensions. To evaluate this, players' experiences with each version of a game developed for this purpose were collected, compared, and analysed. Data were collected through Busselle and Bilandzic's (2009) narrative engagement scale, additional questions, observations, and short interviews focused on characters and goals. Effects on narrative engagement could not be statistically demonstrated. However, the analysis indicated that using players' facial emotional expressions as a game decision input in a social situation could positively affect empathic engagement with characters and enjoyment.
|
109 |
Humorous implications and meanings : a multi-modal approach to sarcasm in interactional humor / Implications humoristiques : une étude multi-modale du sarcasme en interaction. Tabacaru, Sabina, 05 December 2014 (has links)
This dissertation examines the different techniques used to achieve humor in interaction in two contemporary American television series, /House M.D./ and /The Big Bang Theory/. Through different writing techniques, we observe the elements used in the construction of humorous meaning. The dialogue between interlocutors plays a central role, since it centers on intersubjectivity and hence on the common ground between speakers. This study also investigates the gestures used by the characters in the two series to create humorous effects. These /gestural triggers/, as well as the different humor types, were annotated in the video-annotation software ELAN, which allows a more holistic view of the processes involved in interactional humor. The results show a clear preference for sarcasm as the most frequent humor category in the corpus, as well as a preference for certain facial expressions (raised eyebrows and frowning) and head movements (head tilts and head nods). These elements are explained with respect to their role in the given context and in the speakers' attitude, for a better understanding of humor in interaction.
|
110 |
Modelagem computacional para reconhecimento de emoções baseada na análise facial / Computational modeling for emotion recognition based on facial analysis. Giampaolo Luiz Libralon, 24 November 2014 (has links)
Emotions are an object of study not only in psychology but also in various other fields, such as philosophy, psychiatry, biology, neuroscience and, since the second half of the twentieth century, the cognitive sciences. Many emotional theories and models have been proposed, but there is no consensus on any one of them. Several researchers argue that there is a set of basic emotions that were preserved during the evolutionary process because they serve specific purposes; however, how many and which basic emotions should be accepted is still under discussion. The most widely adopted model of basic emotions is the one proposed by Paul Ekman, which asserts the existence of six emotions: happiness, sadness, fear, anger, disgust, and surprise. Studies also indicate that a small set of universal facial expressions can represent these six basic emotions. In the context of human-machine interaction, the relationship between humans and machines is becoming progressively more natural and social. As interfaces evolve, the ability to interpret interlocutors' emotional signals and react to them appropriately is a challenge to overcome. Although human beings express emotions in different ways, there is evidence that emotions are most accurately conveyed by facial expressions. Aiming at interfaces that allow more realistic and natural interactions, this thesis developed a computational model, based on psychological and biological principles, that simulates the emotion recognition system of human beings. The model identifies an emotional state in distinct stages: a preattentive visual mechanism quickly estimates the most likely emotions; the facial features most relevant to recognising the identified emotional expressions are then detected; and, finally, geometric features of the face are analysed to determine the final emotional state. Several experiments showed that the proposed model achieves high accuracy rates and good generalisation performance, and that the facial features it finds are interpretable.
|