231 |
Estudo eletromiográfico do padrão de contração muscular da face de adultos / Electromyographic study of muscular contraction patterns in adults. Stefani, Fabiane Miron, 09 September 2008 (has links)
For many years speech therapy was considered subjective because its assessment methods were manual and visual. Orofacial motricity, the specialty of speech-language pathology devoted to the prevention, diagnosis, and treatment of myofunctional disorders of the stomatognathic system, has therefore seen many researchers, in Brazil and internationally, search for more objective, instrument-based methods of evaluation and treatment. One such instrument is surface electromyography (EMG), the measurement of the electrical activity of a muscle. The literature offers many EMG studies in the TMJ and orthodontics fields, with particular attention to the chewing muscles (temporalis and masseter), which are larger and therefore yield more evident EMG results; much less attention has been paid to the mimic muscles. The aims of this study were to identify, by means of EMG, the electrical activity of the facial muscles of healthy adults during facial movements commonly used therapeutically in the speech-therapy clinic; to identify the role of each muscle in these movements; to differentiate the electrical activity of these muscles across movements; and to assess the usefulness of EMG in clinical speech-therapy practice.
Thirty-one volunteers (18 women; mean age 29.48 years) with no speech-therapy or dental complaints were evaluated. Bipolar surface electrodes were attached bilaterally to the masseter, buccinator, and suprahyoid muscles, and to the upper and lower orbicularis oris. The electrodes were connected to an eight-channel EMG 1000 electromyograph (Lynx Tecnologia Eletrônica), and each participant was asked to perform the following movements: lip protrusion (PL), tongue protrusion (L), cheek puffing (IB), open smile (SA), closed smile (SF), lip lateralization to the right (LD) and to the left (LE), and pressing one lip against the other (AL). EMG data were recorded in microvolts (RMS); the mean of each movement was used for analysis, and all values were normalized to the EMG recorded at rest. The results show that the lower and upper orbicularis oris presented greater electrical activity than the other muscles in most movements, the exceptions being L and SF. In LD and LE the orbicularis oris muscles were again the most active, but the buccinators participated substantially, especially the right buccinator in LD. Tongue protrusion revealed no significant differences among the muscles studied. The open smile engaged the lower orbicularis oris more than the upper and proved to be the movement that activates the facial musculature as a whole the most, while the most active muscle during the closed smile was the buccinator. We conclude that surface EMG is efficient for evaluating not only the masticatory muscles but also the mimic muscles, except in tongue protrusion, for which it was not effective; the orbicularis oris muscles were the most active during the movements tested and are therefore also the ones most exercised in orofacial motricity exercises.
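The rest-based normalization described above reduces, on one plausible reading, to dividing each task's RMS amplitude by the resting RMS. A minimal sketch, assuming the raw EMG segments are available as NumPy arrays in microvolts; the function names and synthetic data are illustrative, not taken from the thesis:

```python
import numpy as np

def rms(segment):
    """Root-mean-square amplitude of an EMG segment (same units as input)."""
    return np.sqrt(np.mean(np.square(segment)))

def normalized_activity(task_emg, rest_emg):
    """Task activity expressed relative to the resting baseline,
    mirroring the rest-normalization used in the study."""
    return rms(task_emg) / rms(rest_emg)

# Synthetic example for one muscle during one movement:
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 5.0, 2000)    # resting EMG, roughly 5 uV RMS
task = rng.normal(0.0, 40.0, 2000)   # EMG during, e.g., lip protrusion
print(f"activity: {normalized_activity(task, rest):.1f}x rest")
```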
|
232 |
AFFECT-PRESERVING VISUAL PRIVACY PROTECTION. Xu, Wanxin, 01 January 2018 (has links)
The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. Visual privacy protection techniques, such as blurring or object removal, can be used to mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding.
The Intellectual Merits of the dissertation include a novel framework for visual privacy protection that manipulates the facial image and body shape of individuals and: (1) conceals the identity of individuals; (2) preserves the utility of the data, such as expression and pose information; and (3) balances the utility of the data against the strength of the privacy protection.
The Broader Impacts of the dissertation concern the significance of privacy protection for visual data and the inadequacy of current privacy-enhancing technologies in preserving the affective and behavioral attributes of visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts to achieve both goals simultaneously.
|
233 |
Reconnaissance d'états émotionnels par analyse visuelle du visage et apprentissage machine / Recognition of emotional states by visual facial analysis and machine learning. Lekdioui, Khadija, 29 December 2018 (has links)
In face-to-face settings, an act of communication includes both verbal and emotional expressions. By observing, diagnosing, and identifying the emotional state of an individual, an interlocutor can undertake actions that influence the quality of the communication. In this regard, we aim to improve the way individuals perceive their exchanges by proposing to enrich text-based computer-mediated communication with the emotions felt by the collaborators. To do this, we propose to integrate a real-time emotion recognition system (joy, fear, surprise, anger, disgust, sadness, and the neutral state) into the learning platform Moodle, based on the analysis of the facial expressions of the remote learner during collaborative activities. Facial expression recognition proceeds in three steps. First, the face and its components (eyebrows, nose, mouth, eyes) are detected from the configuration of facial landmarks. Second, a combination of heterogeneous descriptors is used to extract the facial features. Finally, a classifier assigns these features to one of the six predefined emotions or the neutral state. The performance of the proposed system will be assessed on public databases of posed and spontaneous facial expressions such as Cohn-Kanade (CK), Karolinska Directed Emotional Faces (KDEF), and the Facial Expressions and Emotion Database (FEED).
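A compact sketch of the three-step pipeline described above, assuming the face and its landmarks have already been detected (step 1); the choice of LBP histograms as the descriptor and an RBF-kernel SVM as the classifier is a simplification of the heterogeneous-descriptor combination used in the thesis:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

EMOTIONS = ["joy", "fear", "surprise", "anger", "disgust", "sadness", "neutral"]

def component_features(gray_face, landmarks, half=12):
    """Step 2: describe each facial component with an LBP histogram.
    `landmarks` maps component name -> (row, col) patch center,
    assumed to lie well inside the image."""
    feats = []
    for (r, c) in landmarks.values():
        patch = gray_face[r - half:r + half, c - half:c + half]
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Step 3: classify the concatenated descriptors into the seven classes.
clf = SVC(kernel="rbf")
# clf.fit(X_train, y_train)   # one feature row per face; y indexes EMOTIONS
# clf.predict(component_features(face, pts).reshape(1, -1))
```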
|
234 |
Neural Correlates of Attention Bias in Posttraumatic Stress Disorder: An fMRI Study. Fani, Negar, 11 August 2011 (has links)
Attention biases to trauma-related information contribute to symptom maintenance in Posttraumatic Stress Disorder (PTSD); this phenomenon has been observed in various behavioral studies, although findings from studies using a precise, direct bias task, the dot probe, have been mixed. PTSD neuroimaging studies have indicated atypical function in specific brain regions involved in attention bias: when viewing emotionally salient cues or engaging in tasks that require attention, individuals with PTSD have demonstrated altered activity in brain regions implicated in cognitive control and attention allocation, including the medial prefrontal cortex (mPFC), dorsolateral prefrontal cortex (dlPFC), and amygdala. However, remarkably few PTSD neuroimaging studies have employed tasks that both measure the attentional strategies being engaged and include emotionally salient information.
In the current study of attention biases in highly traumatized African-American adults, a version of the dot probe task with stimuli that are both salient (threatening facial expressions) and relevant (photographs of African-American faces) was administered to 19 participants with and without PTSD during functional magnetic resonance imaging (fMRI). I hypothesized that: 1) individuals with PTSD would show a significantly greater attention bias to threatening faces than traumatized controls; 2) PTSD symptoms would be associated with a significantly greater attentional bias toward threat expressed in African-American, but not Caucasian, faces; and 3) PTSD symptoms would be significantly associated with abnormal activity in the mPFC, dlPFC, and amygdala during the presentation of threatening faces.
Behavioral data did not provide evidence of attentional biases associated with PTSD. However, increased activation in the dlPFC and regions of the mPFC in response to threat cues was found in individuals with PTSD relative to traumatized controls without PTSD; this may reflect hyper-engaged cognitive control, attention, and conflict-monitoring resources in these individuals. Additionally, viewing threat in same-race, but not other-race, faces was associated with increased activation in the mPFC. These findings have important theoretical and treatment implications, suggesting that PTSD, particularly in individuals who have experienced chronic or multiple types of trauma, may be characterized less by top-down "deficits" or failures than by imbalanced neurobiological and cognitive systems that become over-engaged in order to "control" the emotional disruption caused by trauma-related triggers.
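For reference, the behavioral index that underlies dot-probe analyses is conventionally the difference in mean reaction time between incongruent trials (probe opposite the threat face) and congruent trials (probe behind the threat face); positive scores indicate vigilance toward threat. A sketch under that standard definition, not code from the dissertation:

```python
import numpy as np

def attention_bias(rts_ms, congruent):
    """Bias = mean RT(incongruent) - mean RT(congruent), in milliseconds.
    rts_ms: per-trial reaction times; congruent: True where the probe
    replaced the threatening face."""
    rts = np.asarray(rts_ms, dtype=float)
    mask = np.asarray(congruent, dtype=bool)
    return rts[~mask].mean() - rts[mask].mean()

# Faster responses on congruent trials (a positive score) suggest attention
# was already allocated to the threat location.
print(attention_bias([510, 495, 540, 530], [True, True, False, False]))  # 32.5
```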
|
235 |
Morphable 3D Facial Animation Based on Thin Plate Splines. Erdogdu, Aysu, 01 May 2010 (has links) (PDF)
The aim of this study is to present a novel three-dimensional (3D) facial animation method for morphing emotions and facial expressions from one face model to another. For this purpose, smooth and realistic face models were animated with thin plate splines (TPS). Neutral face models were animated and compared with the actual expressive face models. Neutral and expressive face models were obtained from subjects via a 3D face scanner.
The face models were preprocessed for pose and size normalization. Muscle and wrinkle control points were then placed on the source face with a neutral expression, according to human anatomy. The Facial Action Coding System (FACS) was used to determine the control points and the face regions in the underlying model. The final positions of the control points after a facial expression were obtained from the expressive scan data of the source face. The control points were then transferred to the target face using the facial landmarks, with TPS as the morphing function. Finally, the neutral target face was animated from the control points by TPS.
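A sketch of the control-point-driven deformation, using SciPy's radial basis function interpolator with a thin-plate-spline kernel as a stand-in for the thesis's TPS formulation; the mesh and control points below are synthetic placeholders:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_morph(vertices, ctrl_src, ctrl_dst):
    """Warp a neutral mesh so the control points reach their expressive
    positions; all other vertices follow the TPS displacement field.
    All arrays are (n, 3) coordinates."""
    field = RBFInterpolator(ctrl_src, ctrl_dst - ctrl_src,
                            kernel="thin_plate_spline")
    return vertices + field(vertices)

# Synthetic example: five control points pulling the mouth region upward.
rng = np.random.default_rng(1)
mesh = rng.uniform(-1.0, 1.0, (500, 3))     # stand-in for a scanned face
ctrl = rng.uniform(-1.0, 1.0, (5, 3))       # neutral control points
dst = ctrl + np.array([0.0, 0.05, 0.0])     # positions from expressive scan
expressive_mesh = tps_morph(mesh, ctrl, dst)
```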
In order to visualize the method, face scans with expressions composed of a selected subset of the action units found in the Bosphorus Database were used. Five lower-face and three upper-face action units are simulated in this study. For the experimental results, the facial expressions were created on the 3D neutral face scan of a human subject, and the synthetic faces were compared with the subject's actual 3D scan data showing the same facial expressions, taken from the dataset.
|
236 |
Visual Observation of Human Emotions / L'observation visuelle des émotions humaines. Jain, Varun, 30 March 2015 (has links)
In this thesis we focus on the development of methods and techniques to infer affect from visual information. We concentrate on facial expression analysis, since the face is one of the least occluded parts of the body and facial expressions are among the most visible manifestations of affect. We review the different psychological theories of affect and emotion, the different ways to represent and classify emotions, and the relationship between facial expressions and underlying emotions. We present multiscale Gaussian derivatives as an image descriptor for head pose estimation and smile detection before using them for affect sensing. Principal Component Analysis is used for dimensionality reduction, and Support Vector Machines are used for classification and regression; the same simple and effective architecture serves head pose estimation, smile detection, and affect sensing. We also demonstrate that multiscale Gaussian derivatives not only perform better than the popular Gabor filters but are also computationally cheaper. While performing these experiments we discovered that multiscale Gaussian derivatives do not provide a sufficiently discriminative image description when the face is only partly illuminated. We overcome this problem by combining Gaussian derivatives with Local Binary Pattern (LBP) histograms. This combination achieves state-of-the-art results for smile detection on the benchmark GENKI database, which contains images of people "in the wild" collected from the internet. We use the same description method for face recognition on the CMU-PIE database and the challenging extended YaleB database, and our results compare well with the state of the art. For face recognition we use metric learning for classification, adopting the Minkowski distance as the similarity measure; we find that the L1 and L2 norms are not always the optimal distance metrics and that the optimum is often an Lp norm where p is not an integer. Lastly, we develop a multi-modal system for depression estimation with audio and video information as input. We use Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) features to capture intra-facial movements in the videos, and dense trajectories for macro movements such as those of the head and shoulders. These video features, along with Low Level Descriptor (LLD) audio features, are encoded using Fisher vectors, and a Support Vector Machine is used for regression. We find that LBP-TOP features encoded with Fisher vectors alone are enough to outperform the baseline method on the Audio Visual Emotion Challenge (AVEC) 2014 database. We thereby present an effective technique for depression estimation that can easily be extended to other slowly varying aspects of emotion, such as mood.
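A minimal version of the multiscale Gaussian derivative descriptor with SciPy; the scales and derivative orders shown are placeholders rather than the thesis's exact configuration, with PCA and an SVM following downstream as described:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative_features(image, sigmas=(1, 2, 4, 8)):
    """Stack first- and second-order Gaussian derivative responses at
    several scales and flatten them into a single descriptor."""
    orders = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 1)]  # derivative order per axis
    responses = [gaussian_filter(image, sigma=s, order=o)
                 for s in sigmas for o in orders]
    return np.stack(responses).ravel()

face = np.random.rand(64, 64)   # stand-in for a cropped grayscale face
desc = gaussian_derivative_features(face)
print(desc.shape)               # (4 scales * 5 derivatives * 64 * 64,)
# Downstream (as in the thesis): PCA for dimensionality reduction,
# then an SVM for classification or regression.
```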
|
237 |
Le visage de la douleur : informations efficientes pour la reconnaissance et impacts sur l’observateur. Roy, Cynthia, 04 1900 (has links)
Facial expression plays a central role in pain communication, including when judging the intensity of the pain experienced by others. The characteristics of a suffering person's face have been investigated mainly with descriptive methods (e.g., FACS). The introduction of the thesis reviews current knowledge of the behavioral and cerebral processes involved in the facial expression of pain and its communication, and notes that the visual mechanisms and strategies the observer uses to detect pain in another's face remain largely unknown. Understanding the processes involved in recognizing the expression of pain is essential to understanding pain communication and, eventually, to explaining phenomena with considerable clinical impact, such as the classic underestimation of others' pain. Article 1 uses a direct method (Bubbles) to establish which visual information observers use efficiently when they must categorize pain among the basic emotions. The results show that, of all the features of the typical pain face, little of the information is truly effective for this discrimination, and that the effective information encodes the affective-motivational component of the other's experience. Article 2 investigates whether these privileged regions of the pain face can modulate a nociceptive experience in the observer, in order to better understand the mechanisms involved in such modulation. Although stimuli with negative emotional valence, including facial expressions of pain, are known to increase spinal (reflex) and supra-spinal (e.g., perceptual) pain responses, the visual information sufficient to activate these modulatory pathways remained unknown. The results show that when observers view the regions diagnostic for recognizing the facial expression of pain, the pain they perceive following a nociceptive stimulation is greater than when they view the regions least correlated with accurate pain recognition. A post-experimental exploration of the characteristics of our stimuli suggests that this modulation is not explained by the induction of a negative emotional state, thereby supporting a predominant role for pain communication in the vicarious modulation of the observer's pain experience. Spinal measures, however, were not modulated by these manipulations, suggesting that cerebro-spinal pathways are not involved in this phenomenon.
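The core of the Bubbles method used in Article 1 can be pictured as masking each stimulus with a few randomly placed Gaussian apertures and, over many trials, correlating aperture locations with correct responses. The sketch below generates such a mask; it is illustrative, not the authors' code:

```python
import numpy as np

def bubbles_mask(shape, n_bubbles=10, sigma=8.0, seed=None):
    """Mask of Gaussian apertures in [0, 1]; multiplying a face image by
    it reveals only a few regions on a given trial."""
    rng = np.random.default_rng(seed)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        bubble = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, bubble)
    return mask

face = np.random.rand(128, 128)   # stand-in for a pain-expression image
stimulus = face * bubbles_mask(face.shape, seed=2)
```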
|
238 |
Simulation of Turkish Lip Motion and Facial Expressions in a 3D Environment and Synchronization with a Turkish Speech Engine. Akagunduz, Erdem, 01 January 2004 (links) (PDF)
In this thesis, the 3D animation of human facial expressions and lip motion, and their synchronization with a Turkish speech engine, is analyzed using the Java programming language, the Java 3D API, and the Java Speech API. A three-dimensional animation model for simulating Turkish lip motion and facial expressions is developed.
In addition to lip motion, synchronization with a Turkish speech engine is achieved. The output of the study is facial expressions and Turkish lip motion synchronized with Turkish speech; the input is Turkish text in Java Speech Markup Language (JSML) format, which also annotates the expressions.
Unlike many other languages, Turkish words are easily broken up into syllables. This property of the Turkish language allows a simple method for mapping letters to Turkish visual phonemes: a total of 37 face models are used to represent the Turkish visual phonemes, and letters are mapped to these 3D facial models according to the syllable structure.
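A toy illustration of the syllable-based letter-to-viseme mapping; the syllabifier below implements only the basic Turkish rule (a single consonant before a vowel begins the next syllable), and the viseme table is hypothetical, whereas the thesis maps letters onto its 37 face models:

```python
VOWELS = set("aeıioöuü")

def syllabify(word):
    """Basic Turkish syllabification: leave exactly one consonant before
    the next vowel as the onset of the following syllable."""
    syllables, current, i, n = [], "", 0, len(word)
    while i < n:
        current += word[i]
        if word[i] in VOWELS:
            k = i + 1
            while k < n and word[k] not in VOWELS:
                k += 1                      # scan the consonant cluster
            if k == n:                      # no vowel left: attach the rest
                current += word[i + 1:]
                i = n
            else:                           # keep all but one consonant here
                current += word[i + 1:k - 1]
                i = k - 1
            syllables.append(current)
            current = ""
        else:
            i += 1
    if current:
        syllables.append(current)
    return syllables

# Hypothetical letter -> viseme indices (the thesis uses 37 face models):
VISEME = {"a": 1, "e": 2, "m": 3, "r": 4, "h": 5, "b": 3}
print(syllabify("merhaba"))                                   # ['mer', 'ha', 'ba']
print([[VISEME.get(c, 0) for c in s] for s in syllabify("merhaba")])
```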
The animation is created using the Java 3D API: 3D facial models corresponding to different lip positions of the same person are morphed into one another to construct the animation.
Moreover, simulations of human facial expressions of emotions are created within the animation. An expression weight parameter, which specifies the intensity of a given expression, is introduced.
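One plausible reading of the expression weight parameter (the abstract does not spell out the formula) is a linear blend between the neutral and fully expressive geometry, sketched here:

```python
import numpy as np

def apply_expression(neutral, expressive, weight):
    """Blend vertex positions: weight 0.0 keeps the neutral face,
    1.0 yields the full expression. Arrays have shape (n_vertices, 3)."""
    return np.asarray(neutral) + weight * (np.asarray(expressive) - np.asarray(neutral))
```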
The synchronization of lip motion with Turkish speech is achieved via CloudGarden's Java Speech API interface.
Finally, a virtual Turkish speaker with emotional facial expressions is created for the Java 3D animation.
|
239 |
Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics. Ali, Afiya, January 2007 (has links)
Thesis (M.Soc.Sc. Psychology), University of Waikato, 2007. Title from PDF cover (viewed April 8, 2008). Includes bibliographical references (p. 70-76).
|
240 |
Anthropologie des Ausdrucks : die Expressivität des Menschen zwischen Natur und Kultur / Meuter, Norbert. January 2006 (has links)
Also presented as the author's habilitation thesis, Humboldt-Universität zu Berlin, 2004. Announced under the title: Meuter, Norbert: Die Expressivität der menschlichen Existenz.
|