121.
Emocijų atpažinimas tiriant žmogaus fiziologinius parametrus / Emotion recognition based on human physiological parameters. Marozas, Julius, 25 August 2008
Emotion recognition from human physiological parameters is a highly relevant problem in modern computer science. The aim of this work is to design a prototype emotion recognition system that could be adapted, at minimal cost, to various emotion-driven applications. A conceptual model of such a system is presented, consisting of a hardware part and a software part. AD620 instrumentation and OP97 high-precision operational amplifiers are used to amplify the physiological signals; analog-to-digital conversion and transmission to the computer are handled by an ATmega16 microcontroller, and the circuits are tested with a PC-based oscilloscope (PCS500). Digital signal filtering methods are investigated, and recognition algorithms, based on the SVW algorithm, are presented for the recorded physiological parameters (ECG, SC). Spectral analysis of heart rate variability (HRV) in the frequency domain is performed using Fourier transforms. An extended fuzzy control system is presented that derives arousal-valence levels from the physiological parameters (HR, HRVL, HRVH, SCR, STfinger, SThead) and maps these levels to emotions. New behavioral characteristics of the developed algorithms, observed when deploying them in a real environment, are also discussed.
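The frequency-domain HRV step mentioned in this abstract can be illustrated with a short sketch (illustrative only, not the thesis code; the function name and the 4 Hz resampling rate are assumptions, while the LF/HF band limits follow common HRV conventions):

```python
import numpy as np

def hrv_band_power(rr_ms, fs=4.0):
    """Estimate LF and HF spectral power of an RR-interval series.

    rr_ms: consecutive RR intervals in milliseconds. Heartbeats are not
    evenly spaced in time, so the tachogram is resampled onto a uniform
    grid (fs Hz) before taking the FFT.
    """
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                  # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)     # uniform time grid
    tach = np.interp(grid, t, rr)               # interpolated tachogram
    tach -= tach.mean()                         # remove DC component
    spec = np.abs(np.fft.rfft(tach)) ** 2       # power spectrum
    freq = np.fft.rfftfreq(len(tach), d=1.0 / fs)
    lf = spec[(freq >= 0.04) & (freq < 0.15)].sum()   # low-frequency band
    hf = spec[(freq >= 0.15) & (freq < 0.40)].sum()   # high-frequency band
    return lf, hf
```

A tachogram modulated at respiratory rates (around 0.25 Hz) should show most of its power in the HF band, which is the kind of ratio a fuzzy arousal-valence stage could consume.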
122.
[en] BUILDING A VISUAL TRACKING PARADIGM FOR THE RECOGNITION OF EMOTIONS IN CHILDREN WITH AUTISM / [pt] CONSTRUÇÃO DE UM PARADIGMA DE RASTREIO VISUAL NO RECONHECIMENTO DE EMOÇÕES EM CRIANÇAS AUTISTAS. KELLY LUANA MAMEDE N ZANGRANDO, 13 September 2018
Autism is a neurodevelopmental disorder characterized by impairments in social interaction, communication and behavior. One of the deficits within its scope is emotion recognition, which points to a number of atypical viewing strategies, such as reduced gaze toward social stimuli, a preference for the mouth region over the eyes, and difficulty sustaining fixation. However, there is no consensus so far on the factors that may cause these impairments, nor on whether a characteristic visuospatial scanning pattern exists for this population. On this basis, this dissertation developed a visual tracking paradigm for emotion recognition in children on the Autism Spectrum. A systematic review was first carried out which, after careful selection, examined 65 paradigms used in the assessment of Autism Spectrum Disorder (ASD) with an eye tracker as the instrument. A script was then developed for the subsequent programming of the tasks. The tracking paradigm was applied to four children diagnosed with ASD, who formed the experimental group, and to three typically developing children as controls, in order to evaluate its applicability. Although the task has limitations that require adaptation, it was possible to verify that the participants in the experimental group took longer to complete the task, owing to difficulty in fixating their gaze, and performed worse in recognizing the emotions. These data, together with other studies, suggest that individuals on the autism spectrum use atypical visual strategies; however, more research on the subject is needed.
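The kind of measure eye-tracking paradigms like this one typically report, time spent looking at a region of interest such as the eyes or the mouth, can be sketched as follows (a hypothetical helper, not part of the dissertation; the names and the fixed sampling interval are assumptions):

```python
def aoi_dwell_time(gaze, aoi, dt):
    """Total dwell time (seconds) of gaze samples inside a rectangular AOI.

    gaze: iterable of (x, y) gaze samples recorded at a fixed interval.
    aoi:  (x0, y0, x1, y1) rectangle, e.g. around the eye region.
    dt:   sampling interval in seconds (e.g. 1/50 for a 50 Hz tracker).
    """
    x0, y0, x1, y1 = aoi
    inside = sum(1 for (x, y) in gaze if x0 <= x <= x1 and y0 <= y <= y1)
    return inside * dt
```

Comparing dwell times on eye versus mouth AOIs between groups is one simple way such atypical viewing strategies are quantified.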
123.
Reconnaissance des émotions par traitement d’images / Emotion recognition based on image processing. Gharsalli, Sonia, 12 July 2016
Emotion recognition is one of the most complex scientific domains. In recent years, more and more applications have attempted to automate it, in innovative areas such as support for autistic children, video games, and human-machine interaction. Emotions are conveyed through several channels; this research deals with facial emotional expressions, focusing on the six basic emotions: happiness, anger, fear, disgust, sadness and surprise. A comparative study of two emotion recognition methods, one based on geometric descriptors and the other on appearance descriptors, is carried out on the CK+ database of posed emotions and the FEEDTUM database of spontaneous emotions. Several constraints are also taken into account, such as changes in image resolution, the limited number of labelled images in emotion databases, and the recognition of new subjects not included in the training set. Various fusion schemes are then evaluated on new subjects absent from the training set. The results are promising for posed emotions (above 86%) but remain insufficient for spontaneous emotions. A study of local facial regions led to the development of hybrid per-region methods, which improve the recognition rates for spontaneous emotions. Finally, an appearance-descriptor selection method based on importance scores is developed and compared with other selection methods; it improves the recognition rate relative to two methods taken from the literature.
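A minimal example of the geometric-descriptor idea compared in this thesis (the landmark indices and the normalization choice are illustrative assumptions, not the author's actual feature set):

```python
import numpy as np

def geometric_features(landmarks):
    """Toy geometric descriptor from 2-D facial landmarks.

    Assumed (hypothetical) point indices: 0/1 eye centres, 2/3 mouth
    corners, 4/5 upper/lower lip, 6/7 left/right eyebrow. Distances are
    normalised by the inter-ocular distance so the vector is scale
    invariant, a standard trick for geometric features.
    """
    pts = np.asarray(landmarks, dtype=float)
    iod = np.linalg.norm(pts[0] - pts[1])      # inter-ocular distance
    feats = np.array([
        np.linalg.norm(pts[2] - pts[3]),       # mouth width
        np.linalg.norm(pts[4] - pts[5]),       # mouth opening
        np.linalg.norm(pts[6] - pts[0]),       # left brow to left eye
        np.linalg.norm(pts[7] - pts[1]),       # right brow to right eye
    ])
    return feats / iod
```

Appearance descriptors, by contrast, would encode pixel texture (e.g. local binary patterns) rather than such distances; the thesis compares the two families.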
124.
Représentation invariante des expressions faciales : application en analyse multimodale des émotions / Invariant Representation of Facial Expressions: Application to Multimodal Analysis of Emotions. Soladié, Catherine, 13 December 2013
More and more applications aim to automate the analysis of human behavior in order to assist the experts who currently carry out these analyses. This thesis deals with the analysis of facial expressions, which provide key information about such behavior. The work proposes an innovative solution, based on the organization of expressions, for defining a facial expression effectively, regardless of the subject's morphology. We show that the organization of expressions, as defined, is universal and can be used to characterize an expression uniquely: an expression is given by its intensity and its position relative to the other expressions. The solution is compared with conventional appearance-based methods and shows a significant increase in recognition results on 14 non-basic expressions. The method has also been extended to unknown subjects. The main idea is to create a plausible appearance space specific to the unknown person by synthesizing their basic expressions from deformations learned on other subjects and applied to the unknown subject's neutral face. The solution is further put to the test in a multimodal environment whose objective is emotion recognition in spontaneous conversations. Our method was applied in the international AVEC 2012 challenge (Audio/Visual Emotion Challenge), where we finished 2nd, with recognition rates very close to the winners'. A comparison of the two methods (ours and the winners') suggests that extracting the relevant features is the key to such systems.
125.
Emocionalita dětí, v jejichž rodinách probíhalo domácí násilí / Emotionality of children who experienced domestic violence in their families. Benešová, Klára, January 2017
This thesis studies a specific population of children exposed to domestic violence. The theoretical part focuses on the impacts of domestic violence on the emotionality and emotional development of children. It also deals with the aspects of children's emotionality that are believed to be disturbed by this traumatizing experience; these particular areas were chosen on the basis of current studies of children exposed to domestic violence. The empirical part of the study was developed in cooperation with the Locika centre. It examines selected aspects of children's emotionality both quantitatively and qualitatively, in particular the ability to distinguish emotions and the capacity for emotion regulation within overall cognitive development.
126.
Multi-modal expression recognition. Chandrapati, Srivardhan, January 1900
Master of Science / Department of Mechanical and Nuclear Engineering / Akira T. Tokuhiro / Robots will eventually become common everyday items. Before this becomes a reality, however, robots will need to learn to be socially interactive. Since humans communicate much more information through expression than through the actual spoken words, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that ensure an operator is alert at all times, or for psychoanalysis and cognitive studies. Emotional expressions are not always deliberate and can also occur without the person being aware of them; recognizing these involuntary expressions provides insight into the person's thoughts and state of mind and could serve as an indicator of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in the face and voice. This is achieved by extracting features from each modality using signal processing techniques and then classifying these features with artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose, obtained using image processing techniques such as the seeded region growing algorithm, particle swarm optimization, and general properties of the feature being extracted. The features of interest in speech are pitch, formant frequencies and the mel spectrum, along with statistical properties such as the mean and median and the rates of change of these properties; they are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that can read an audio and/or video file and perform emotion recognition on the face in the video and the speech in the audio channel. The features extracted from the face and voice are classified into emotions independently, using two separate feed-forward artificial neural networks, and the toolbox presents the networks' outputs from one or both modalities on a synchronized time scale. One interesting result of this research is the consistent misclassification of facial expressions between two databases, suggesting a cultural basis for this confusion. Adding the voice component has been shown to partially improve classification.
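One of the speech features listed above, pitch, is commonly estimated by autocorrelation; a minimal sketch follows (not the toolbox code; the function name and the 80-400 Hz search band are assumptions):

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=80.0, fmax=400.0):
    """Rough pitch estimate (Hz) for one voiced frame via autocorrelation.

    The autocorrelation of a periodic signal peaks at the fundamental
    period, so the strongest peak within the plausible lag range
    [fs/fmax, fs/fmin] gives the pitch.
    """
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)    # lag search range
    lag = lo + int(np.argmax(ac[lo:hi]))       # strongest periodicity
    return fs / lag
```

In a real system this per-frame estimate would be tracked over time and combined with formant and mel-spectrum features before classification.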
127.
La reconnaissance d’émotions faciales et musicales dans le processus de vieillissement normal / Recognition of facial and musical emotions in normal aging. Croteau, Alexina, 12 1
No description available.
128.
CONTENT UNDERSTANDING FOR IMAGING SYSTEMS: PAGE CLASSIFICATION, FADING DETECTION, EMOTION RECOGNITION, AND SALIENCY BASED IMAGE QUALITY ASSESSMENT AND CROPPING. Shaoyuan Xu (9116033), 12 October 2021
This thesis consists of four sections, each related to a research project.

The first section is about page classification. We extend our previous approach, which could classify three classes of pages (Text, Picture and Mixed), to five classes: Text, Picture, Mixed, Receipt and Highlight. We first design new features to define the two new classes and then use a DAG-SVM to classify the five classes of images. The results show that our algorithm performs well and is able to classify all five page types.

The second section is about fading detection. We develop an algorithm that automatically detects fading in both text and non-text regions. For text regions, we first perform global alignment and then local alignment; we then create a 3-D color node system, assign each connected component to a color node, and compute the color difference between each raster-page connected component and its scanned-page counterpart. For non-text regions, after global alignment we divide the page into superpixels and compute the color difference between the raster and test superpixels. Compared with the traditional method, which relies on a diagnostic page, our method is more efficient and effective.

The third section is about CNN-based emotion recognition. We build our own emotion recognition classification and regression system from scratch, including data set collection, data preprocessing, model training and testing. We extend the model to a real-time video application, where it performs accurately and smoothly. We also try a second approach to emotion recognition based on Facial Action Unit detection: by extracting facial landmark features and adopting an SVM training framework, the Facial Action Unit approach achieves accuracy comparable to the CNN-based approach.

The fourth section is about saliency-based image quality assessment and cropping. We propose a method for image quality assessment and recomposition guided by image saliency, the salient region being the part of an image that attracts people's attention easily and naturally. Through everyday examples as well as our experimental results, we demonstrate that utilizing saliency information benefits both tasks.
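The color-difference comparison at the heart of the fading detector can be sketched as follows (a toy version assuming the raster and scanned regions are already aligned and equally sized; the function name is hypothetical):

```python
import numpy as np

def fading_score(raster_region, scanned_region):
    """Mean per-pixel Euclidean color distance between an aligned
    raster-page region and its scanned-page counterpart.

    Both inputs are (H, W, 3) arrays. A faded print shifts the scanned
    colors away from the raster reference, raising this score; a score
    near zero indicates no fading.
    """
    d = raster_region.astype(float) - scanned_region.astype(float)
    return float(np.sqrt((d ** 2).sum(axis=-1)).mean())
```

In the thesis pipeline this comparison is done per connected component (text) or per superpixel (non-text); here a single region stands in for either unit.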
129.
Exploration de la reconnaissance des émotions en schizophrénie comorbide / Exploring emotion recognition in comorbid schizophrenia. Paquin, Karine, 11 1
No description available.
130.
Interpersonal Functions of Non-Suicidal Self-Injury and Their Relationship to Facial Emotion Recognition and Social Problem-Solving. Copps, Emily Caroline, January 2019
No description available.