151

Reconnaissance des émotions par traitement d’images / Emotions recognition based on image processing

Gharsalli, Sonia 12 July 2016 (has links)
Emotion recognition is one of the most complex scientific domains. In recent years, a growing number of applications have attempted to automate it, in areas such as support for autistic children, video games, and human-machine interaction. Emotions are conveyed through several channels; this research focuses on facial emotional expressions, specifically the six basic emotions: happiness, anger, fear, disgust, sadness and surprise. A comparative study of two emotion recognition methods, one based on geometric features and the other on appearance features, is carried out on the CK+ database of posed emotions and the FEEDTUM database of spontaneous emotions. Several constraints are also taken into account, including changes in image resolution, the limited number of labelled images in emotion databases, and the recognition of new subjects not included in the training set. Various fusion schemes are then evaluated on such new subjects. The results are promising for posed emotions (recognition rates above 86%) but remain insufficient for spontaneous emotions. A study of local facial regions led to the development of hybrid, region-based methods that improve recognition rates for spontaneous emotions. Finally, an appearance-feature selection method based on importance scores is developed and compared with other selection methods; it improves the recognition rate over two methods taken from the literature.
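The abstract does not detail how the importance scores are computed; a minimal sketch of the general idea — ranking appearance features by an importance score and keeping only the top-ranked ones before classification — with random stand-in data and random-forest importances as a hypothetical scoring function (the thesis's own score may differ) might be:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Hypothetical appearance features (e.g. filter-bank responses) and emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 500))
y = rng.integers(0, 6, size=300)  # six basic emotions

# Score each feature, then keep only the top k for the final classifier.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_k = np.argsort(forest.feature_importances_)[::-1][:50]
clf = SVC().fit(X[:, top_k], y)
```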
152

Représentation invariante des expressions faciales : Application en analyse multimodale des émotions / Invariant Representation of Facial Expressions: Application to Multimodal Analysis of Emotions

Soladié, Catherine 13 December 2013 (has links)
More and more applications aim to automate the analysis of human behaviour in order to assist or replace the experts who currently carry out these analyses. This thesis deals with the analysis of facial expressions, which provide key information about such behaviour. The work proposes an innovative solution, based on the organization of expressions, for defining a facial expression effectively regardless of the subject's morphology. We show that the organization of expressions, as defined here, is universal and can be used to define an expression uniquely: an expression is characterized by its intensity and by its position relative to the other expressions. Compared with conventional appearance-based methods, the solution yields a significant increase in recognition results on 14 non-basic expressions. The method is extended to unknown subjects. The main idea is to create a plausible appearance space specific to the unknown person by synthesizing their basic expressions from deformations learned on other subjects and applied to the unknown subject's neutral face. The solution is also tested in a more comprehensive multimodal environment whose objective is the recognition of emotions in spontaneous conversations. The method was entered in the international AVEC 2012 challenge (Audio/Visual Emotion Challenge), where we finished second, with recognition rates very close to those of the winners. Comparison of the two methods (ours and the winners') suggests that the extraction of relevant features is the key to such systems.
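The abstract gives no formulas; a minimal numpy sketch of the two ideas it names — synthesizing an unknown subject's basic expressions by adding deformations learned on other subjects to their neutral face, and characterizing a frame by its position relative to those expressions — is given below. The vector representations, dimensions, and normalized-distance encoding are all assumptions for illustration, not the thesis's actual formulation.

```python
import numpy as np

def synthesize_expression_space(neutral, learned_deformations):
    """neutral: (d,) appearance vector of the unknown subject's neutral face.
    learned_deformations: (k, d) per-expression deformations learned on other subjects.
    Returns (k, d) plausible basic expressions for the unknown subject."""
    return neutral + learned_deformations

def relative_position(frame, expression_prototypes):
    """Characterize a frame by its normalized distances to each synthesized expression."""
    d = np.linalg.norm(expression_prototypes - frame, axis=1)
    return d / d.sum()

# Toy usage with random stand-in appearance vectors.
rng = np.random.default_rng(0)
neutral, deformations = rng.normal(size=64), rng.normal(size=(6, 64))
prototypes = synthesize_expression_space(neutral, deformations)
print(relative_position(rng.normal(size=64), prototypes))
```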
153

Emocionalita dětí, v jejichž rodinách probíhalo domácí násilí / Emotionality of children who experienced domestic violence in their families

Benešová, Klára January 2017 (has links)
This thesis studies a specific population: children exposed to domestic violence. The theoretical part focuses on the impact of domestic violence on the emotionality and emotional development of children, and deals with those aspects of children's emotionality believed to be disturbed by this traumatizing experience. These areas were chosen on the basis of existing studies of children exposed to domestic violence. The empirical part, developed in cooperation with the Locika centre, studies selected aspects of children's emotionality both quantitatively and qualitatively. In particular, it investigates the ability to distinguish emotions and the ability to regulate emotions within overall cognitive development.
154

Multi-modal expression recognition

Chandrapati, Srivardhan January 1900 (has links)
Master of Science / Department of Mechanical and Nuclear Engineering / Akira T. Tokuhiro / Robots will eventually become common everyday items. Before this becomes a reality, however, robots need to learn to be socially interactive. Since humans communicate much more information through expression than through the actual spoken words, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that make sure an operator is alert at all times, or for psychoanalysis and cognitive studies. Emotional expressions are not always deliberate and can occur without the person being aware of them; recognizing these involuntary expressions provides insight into the person's thoughts and state of mind and could serve as an indicator of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in the face and the voice. This is achieved by extracting features from each modality using signal-processing techniques and then classifying these features with artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose, obtained with image-processing techniques such as the seeded region growing algorithm, particle swarm optimization, and general properties of the feature being extracted. The features of interest in speech are pitch, formant frequencies and the mel spectrum, along with statistical properties such as the mean and median and the rate of change of these properties; they are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that reads an audio and/or video file and performs emotion recognition on the face in the video and the speech in the audio channel. The features extracted from the face and the voice are independently classified into emotions using two separate feed-forward artificial neural networks, and the toolbox presents the outputs of the networks from one or both modalities on a synchronized time scale. One interesting result of this research is the consistent misclassification of facial expressions between two databases, suggesting a cultural basis for this confusion. Adding the voice component has been shown to partially improve classification.
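The thesis does not publish its network code; a minimal sketch of the late-fusion scheme the abstract describes — two independent feed-forward networks, one per modality, reported on a shared time axis — could look like the following. The feature arrays, label set, and network sizes are hypothetical stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happiness", "anger", "fear", "disgust", "sadness", "surprise"]

# Hypothetical pre-extracted features, one row per time window:
# face_X for eye/eyebrow/mouth/nose geometry, voice_X for pitch/formant/mel statistics.
rng = np.random.default_rng(0)
face_X, voice_X = rng.normal(size=(200, 40)), rng.normal(size=(200, 24))
y = rng.integers(0, len(EMOTIONS), size=200)

# One independent feed-forward network per modality.
face_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(face_X, y)
voice_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(voice_X, y)

# Present both predictions on a synchronized time scale (here, the window index).
for t, (f, v) in enumerate(zip(face_net.predict(face_X[:5]), voice_net.predict(voice_X[:5]))):
    print(f"window {t}: face={EMOTIONS[f]}, voice={EMOTIONS[v]}")
```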
155

La reconnaissance d’émotions faciales et musicales dans le processus de vieillissement normal / Facial and musical emotion recognition in normal aging

Croteau, Alexina 12 1900 (has links)
No description available.
156

CONTENT UNDERSTANDING FOR IMAGING SYSTEMS: PAGE CLASSIFICATION, FADING DETECTION, EMOTION RECOGNITION, AND SALIENCY BASED IMAGE QUALITY ASSESSMENT AND CROPPING

Shaoyuan Xu (9116033) 12 October 2021 (has links)
This thesis consists of four sections, each related to a research project.

The first section is about page classification. We extend our previous approach, which classified three classes of pages (Text, Picture and Mixed), to five classes: Text, Picture, Mixed, Receipt and Highlight. We first design new features to define the two new classes and then use a DAG-SVM to classify the five classes of images. The results show that the algorithm performs well and is able to classify all five page types.

The second section is about fading detection. We develop an algorithm that automatically detects fading in both text and non-text regions. For text regions, we first perform global alignment and then local alignment; we then create a 3D color-node system, assign each connected component to a color node, and compute the color difference between each raster-page connected component and the corresponding scanned-page connected component. For non-text regions, after global alignment we divide the page into superpixels and compute the color difference between the raster superpixels and the test superpixels. Compared with the traditional method, which uses a diagnostic page, our method is more efficient and effective.

The third section is about CNN-based emotion recognition. We build our own emotion recognition classification and regression system from scratch, including data set collection, data preprocessing, model training and testing. We extend the model to a real-time video application, where it performs accurately and smoothly. We also try a second approach to emotion recognition using facial action unit detection; by extracting facial landmark features and adopting an SVM training framework, the facial action unit approach achieves accuracy comparable to the CNN-based approach.

The fourth section is about saliency-based image quality assessment and cropping. We propose a method for image quality assessment and recomposition guided by image saliency information. Saliency is the remarkable region of an image that attracts people's attention easily and naturally. Through everyday examples as well as our experimental results, we demonstrate that utilizing saliency information benefits both tasks.
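The abstract names a DAG-SVM for the five-class page classifier without implementation details; a minimal sketch of a decision DAG over pairwise SVMs, with random stand-in features in place of the thesis's page features, might look like this:

```python
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

CLASSES = ["Text", "Picture", "Mixed", "Receipt", "Highlight"]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))            # stand-in page features
y = rng.integers(0, len(CLASSES), 500)    # stand-in labels

# Train one binary SVM per pair of classes (one-vs-one).
pair_svms = {}
for a, b in combinations(range(len(CLASSES)), 2):
    mask = np.isin(y, [a, b])
    pair_svms[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])

def ddag_predict(x):
    """Walk the decision DAG: repeatedly test the first vs the last remaining
    class and eliminate the loser, until a single class is left."""
    alive = list(range(len(CLASSES)))
    while len(alive) > 1:
        a, b = alive[0], alive[-1]
        winner = pair_svms[(a, b)].predict(x.reshape(1, -1))[0]
        alive.remove(b if winner == a else a)
    return CLASSES[alive[0]]

print(ddag_predict(X[0]))
```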
157

Exploration de la reconnaissance des émotions en schizophrénie comorbide / Exploring emotion recognition in comorbid schizophrenia

Paquin, Karine 11 1900 (has links)
No description available.
158

Interpersonal Functions of Non-Suicidal Self-Injury and Their Relationship to Facial Emotion Recognition and Social Problem-Solving

Copps, Emily Caroline January 2019 (has links)
No description available.
159

Multi-objective optimization for model selection in music classification / Flermålsoptimering för modellval i musikklassificering

Ujihara, Rintaro January 2021 (has links)
With the breakthrough of machine learning techniques, research on music emotion classification has made notable progress by combining various audio features with state-of-the-art machine learning models. Still, how to preprocess music samples and which classification algorithm to choose depend on the data set and the objective of each project. The collaborating company for this thesis, Ichigoichie AB, is currently developing a system to categorize music data into positive/negative classes. To enhance the accuracy of the existing system, this project aims to identify the best model through experiments with six audio features (mel spectrogram, MFCC, HPSS, onset, CENS, and Tonnetz) and several machine learning models, including deep neural networks. For each model, hyperparameters are tuned and the model is evaluated according to Pareto optimality with respect to accuracy and execution time. The results show that the most promising model achieved 95% correct classification with an execution time of less than 15 seconds.
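The abstract does not show how the Pareto-optimal models are picked; a minimal sketch of the underlying idea — keep every (accuracy, execution time) point that no other point dominates — with invented measurements standing in for the thesis's results, might be:

```python
# Invented (accuracy, execution-time-in-seconds) measurements per candidate model.
models = {
    "mlp_mel": (0.95, 14.2),
    "svm_mfcc": (0.91, 3.8),
    "cnn_hpss": (0.93, 22.5),
    "knn_tonnetz": (0.88, 1.1),
}

def dominates(p, q):
    """p dominates q if p is at least as accurate and at least as fast, and p != q."""
    return p[0] >= q[0] and p[1] <= q[1] and p != q

pareto = {
    name: point
    for name, point in models.items()
    if not any(dominates(other, point) for other in models.values())
}
print(pareto)  # cnn_hpss drops out: mlp_mel is both more accurate and faster
```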
160

Digital game-based learning and socioemotional skills : A quasi-experimental study of the effectiveness of digital-game based learning on the socioemotional skills of children with intellectual disability

Vorkapic, Robert, Christiansson, Nora January 2022 (has links)
Digital games are increasingly implemented in the educational sector to improve various skills. Among these uses, the effectiveness of digital game-based learning (DGBL) on individuals' socio-emotional ability has been investigated with overall positive results. However, only a limited number of studies have examined the effectiveness of DGBL on this ability in children with intellectual disability, and none have investigated whether DGBL can improve the socio-emotional skills of children with this form of disability. The current study therefore aimed to investigate the effectiveness of DGBL on the specific socio-emotional skill of emotion recognition in children with intellectual disability in the educational sector. The following research question was formulated: does DGBL increase the socio-emotional skill of emotion recognition in children with intellectual disability? To answer this question, a quasi-experimental one-group pretest-posttest design was adopted in which participants engaged in a DGBL intervention, playing a game aimed at improving their emotion recognition ability. Participants were selected via purposive sampling; the final sample (N=7) consisted of children with intellectual disability, of both male and female gender, between the ages of six and nine. The experiment consisted of three parts: a pretest in which data on the socio-emotional skill of emotion recognition were collected, the intervention itself in which the participants played a digital game, and a posttest in which emotion recognition was measured again. The data were subsequently analyzed with a paired-sample t-test. The results showed that DGBL did not significantly increase the socio-emotional skill of emotion recognition in children with intellectual disability. This result partly contradicts earlier research on DGBL and intellectual disability, as well as on DGBL and socio-emotional skills, where significant effects have been identified. However, since no previous research has investigated whether DGBL can increase the socio-emotional skill of children with intellectual disability, future research is needed to confirm or reject the present results. In summary, the current research has extended current knowledge and provided important implications for the field of special education.
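The study's scores are not reproduced here; a minimal sketch of the paired-sample t-test it describes, with invented pretest/posttest emotion-recognition scores for N=7 children, could look like this:

```python
from scipy import stats

# Invented emotion-recognition scores for the N=7 participants (not the study's data).
pretest = [4, 6, 5, 3, 7, 5, 4]
posttest = [5, 6, 6, 3, 7, 6, 4]

# Paired-sample t-test: do the pre/post means differ significantly?
t, p = stats.ttest_rel(posttest, pretest)
print(f"t = {t:.2f}, p = {p:.3f}")  # p >= 0.05 would mean no significant increase
```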
