1 |
An analysis of emotion-exchange motifs in multiplex networks during emergency events. Kusen, Ema, Strembeck, Mark. January 2019 (has links) (PDF)
In this paper, we present an analysis of the emotion-exchange patterns that arise from
Twitter messages sent during emergency events. To this end, we performed a
systematic structural analysis of the multiplex communication network that we derived
from a data set including more than 1.9 million tweets sent during five
recent shootings and terror events. In order to study the local communication
structures that emerge as Twitter users directly exchange emotional messages, we
propose the concept of emotion-exchange motifs. Our findings suggest that
emotion-exchange motifs which contain reciprocal edges (indicating online
conversations) only emerge when users exchange messages that convey anger or fear,
either in isolation or in any combination with another emotion. In contrast, the
expression of sadness, disgust, surprise, or any positive emotion is rather
characteristic of emotion-exchange motifs representing one-way communication
patterns (instead of online conversations). Among other things, we also found that a
higher structural similarity exists between pairs of network layers consisting of one
high-arousal emotion and one low-arousal emotion, rather than pairs of network layers
belonging to the same arousal dimension.
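The motif analysis summarized above hinges on distinguishing reciprocal from one-way communication in each emotion layer of the multiplex network. A minimal sketch of such a dyad census follows (toy layer names and edge lists; this is an illustration, not the authors' implementation):

```python
from collections import Counter

def dyad_census(edges):
    """Classify every communicating dyad in a directed edge list as
    'reciprocal' (messages flow both ways, suggesting a conversation)
    or 'one-way' (messages flow in a single direction)."""
    edge_set = set(edges)
    census = Counter()
    seen = set()
    for u, v in edge_set:
        if u == v:
            continue  # ignore self-loops
        pair = frozenset((u, v))
        if pair in seen:
            continue  # each dyad is counted once
        seen.add(pair)
        census["reciprocal" if (v, u) in edge_set else "one-way"] += 1
    return census

# Hypothetical multiplex network: one directed layer per conveyed emotion.
layers = {
    "anger": [("a", "b"), ("b", "a"), ("c", "a")],
    "sadness": [("a", "b"), ("a", "c"), ("d", "c")],
}
for emotion, edges in layers.items():
    print(emotion, dict(dyad_census(edges)))
```

On the toy data, the anger layer contains a reciprocal dyad while the sadness layer contains only one-way dyads, echoing (purely for illustration) the kind of contrast the paper reports.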
|
2 |
Domain-specific lexicon generation for emotion detection from text. Bandhakavi, Anil. January 2018 (has links)
Emotions play a key role in effective and successful human communication. Text is popularly used on the internet and social media websites to express and share emotions, feelings and sentiments. However, useful applications and services built to understand emotions from text are limited in effectiveness due to reliance on general-purpose emotion lexicons that have static vocabulary, and on sentiment lexicons that can only interpret emotions coarsely. Thus emotion detection from text calls for methods and knowledge resources that can deal with challenges such as dynamic and informal vocabulary, domain-level variations in emotional expressions and other linguistic nuances. In this thesis we demonstrate how labelled (e.g. blogs, news headlines) and weakly-labelled (e.g. tweets) emotional documents can be harnessed to learn word-emotion lexicons that can account for dynamic and domain-specific emotional vocabulary. We model the characteristics of real-world emotional documents to propose a generative mixture model, which iteratively estimates the language models that best describe the emotional documents using expectation maximization (EM). The proposed mixture model has the ability to model both emotionally charged words and emotion-neutral words. We then generate a word-emotion lexicon using the mixture model to quantify word-emotion associations in the form of probability vectors. Secondly, we introduce novel feature extraction methods to utilize the emotion-rich knowledge captured by our word-emotion lexicon. The extracted features are used to classify text into emotion classes using machine learning. Further, we also propose hybrid text representations for emotion classification that use lexicon-based features in conjunction with other representations such as n-grams, part-of-speech and sentiment information.
Thirdly, we propose two different methods which jointly use an emotion-labelled corpus of tweets and an emotion-sentiment mapping proposed in psychology to learn word-level numerical quantification of sentiment strengths over a positive-to-negative spectrum. Finally, we evaluate all the methods proposed in this thesis through a variety of emotion detection and sentiment analysis tasks on benchmark data sets covering domains from blogs to news articles to tweets and incident reports.
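As a rough illustration of what such a word-emotion lexicon looks like, the sketch below estimates P(emotion | word) by simple normalized counts over labelled documents. This is a maximum-likelihood simplification, not the EM-trained generative mixture model the thesis proposes, and the toy documents are invented:

```python
from collections import Counter, defaultdict

def build_lexicon(labelled_docs):
    """Estimate P(emotion | word) from emotion-labelled documents via
    normalised co-occurrence counts (a maximum-likelihood simplification
    of a generative word-emotion mixture model)."""
    counts = defaultdict(Counter)  # word -> Counter over emotion labels
    for text, emotion in labelled_docs:
        for word in text.lower().split():
            counts[word][emotion] += 1
    lexicon = {}
    for word, emo_counts in counts.items():
        total = sum(emo_counts.values())
        # Each word maps to a probability vector over emotions.
        lexicon[word] = {e: c / total for e, c in emo_counts.items()}
    return lexicon

docs = [
    ("what a joyful sunny day", "joy"),
    ("sunny skies make me smile", "joy"),
    ("this gloomy rain is depressing", "sadness"),
]
lex = build_lexicon(docs)
print(lex["sunny"])  # word seen only in joy documents
```

A lexicon-based feature vector for a new text could then be built, for instance, by summing these per-word probability vectors over the text's tokens.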
|
3 |
Effects of affective states on driver situation awareness and adaptive mitigation interfaces: focused on anger. Jeon, Myounghoon. 03 July 2012 (has links)
Research has suggested that affective states have critical effects on various cognitive processes and performance. Evidence from driving studies has also emphasized the importance of driver situation awareness (Endsley, 1995b) for driving performance and safety. However, to date, no research has investigated the relationship between affective effects and driver situation awareness. Two studies examined the relationship between a driver's affective states and situation awareness. In Experiment 1, 30 undergraduates drove in a simulator after either anger or neutral affect induction. Results suggested that an induced angry state can degrade driver situation awareness and driving performance more than the neutral state. Interestingly, the angry state did not influence participants' perceived workload. Experiment 2 explored the possibilities of using an "attention deployment" emotion regulation strategy as an intervention for mitigating anger effects on driving, via an adaptive speech-based system. Sixty undergraduates drove the same scenario as in Experiment 1 after affect induction, with different intervention conditions: anger with no sound; anger with the ER system (directive/command-style emotion regulation messages); anger with the SA system (suggestive/notification-style situation awareness prompts); or neutral with no sound. Results showed that both speech-based systems can not only enhance driver situation awareness and driving performance, but also reduce the anger level and perceived workload. Participants rated the ER system as more effective, but they rated the SA system as less annoying and less authoritative than the ER system. Based on the results of Experiment 2, regression models were constructed between a driver's affective states and driving performance, mediated by situation awareness (full mediation for speeding and partial mediation for collision).
These results allow researchers to construct a more detailed driver behavior model by showing how an affective state can influence driver situation awareness and performance. The practical implications of this research include the use of situation awareness prompts as a possible strategy for mitigating affective effects, for the design of an affect detection and mitigation system for drivers.
|
4 |
Emotion detection deficits and changes in personality traits linked to loss of white matter integrity in primary progressive aphasia. Multani, Namita, Galantucci, Sebastiano, Wilson, Stephen M., Shany-Ur, Tal, Poorzand, Pardis, Growdon, Matthew E., Jang, Jung Yun, Kramer, Joel H., Miller, Bruce L., Rankin, Katherine P., Gorno-Tempini, Maria Luisa, Tartaglia, Maria Carmela. January 2017 (has links)
Non-cognitive features including personality changes are increasingly recognized in the three PPA variants (semantic-svPPA, non-fluent-nfvPPA, and logopenic-lvPPA). However, differences in emotion processing among the PPA variants and their association with white matter tracts are unknown. We compared emotion detection across the three PPA variants and healthy controls (HC), and related them to white matter tract integrity and cortical degeneration. Personality traits in the PPA group were also examined in relation to white matter tracts. Thirty-three patients with svPPA, nfvPPA, or lvPPA, and 32 HC underwent neuropsychological assessment, an emotion evaluation task (EET), and an MRI scan. Patients' study partners were interviewed on the Clinical Dementia Rating Scale (CDR) and completed an interpersonal traits assessment, the Interpersonal Adjective Scale (IAS). Diffusion tensor imaging of the uncinate fasciculus (UF), superior longitudinal fasciculus (SLF) and inferior longitudinal fasciculus (ILF), and voxel-based morphometry to derive gray matter volumes for the orbitofrontal cortex (OFC) and anterior temporal lobe (ATL) regions, were performed. In addition, gray matter volumes of white-matter-tract-associated regions were also calculated: inferior frontal gyrus (IFG), posterior temporal lobe (PTL), inferior parietal lobe (IPL) and occipital lobe (OL). ANCOVA was used to compare EET performance. Partial correlation and multivariate linear regression were conducted to examine the association between EET performance and neuroanatomical regions affected in PPA. All three variants of PPA performed significantly worse than HC on the EET, and the svPPA group was least accurate at recognizing emotions. Performance on the EET was related to right UF, SLF, and ILF integrity. Regression analysis revealed that EET performance primarily relates to right UF integrity. The IAS subdomain cold-hearted was also associated with right UF integrity.
Disease-specific emotion recognition and personality changes occur in the three PPA variants and are likely associated with disease-specific neuroanatomical changes. Loss of white matter integrity contributes as significantly as focal atrophy to behavioral changes in PPA.
|
5 |
Identifying Expressions of Emotions and Their Stimuli in Text. Ghazi, Diman. January 2016 (has links)
Emotions are among the most pervasive aspects of human experience. They have long been of interest to the social and behavioural sciences. Recently, emotions have attracted the attention of researchers in computer science and particularly in computational linguistics. Computational approaches to emotion analysis have focused on various emotion modalities, but there has been less effort in the direction of automatic recognition of the emotion expressed. Although some past work has addressed detecting emotions, detecting why an emotion arises has largely been ignored.
In this work, we explore the task of classifying texts automatically by the emotions
expressed, as well as detecting the reason why a particular emotion is felt. We believe there is still a large gap between the theoretical research on emotions in psychology and emotion studies in computational linguistics. In our research, we try to fill this gap by considering both theoretical and computational aspects of emotions. Starting with a general explanation of emotion and emotion causes from the psychological and cognitive perspective, we clarify the definition that we base our work on. We explain what is feasible in the scope of text and what is practically doable based on the current NLP techniques and tools.
This work is organized in two parts: first part on Emotion Expression and the second
part on Emotion Stimulus.
In emotion expression detection, we start with shallow methodologies, such as corpus-based and lexical-based, followed by deeper methods considering syntactic and semantic relations in text. First, we demonstrate the usefulness of external knowledge resources, such as polarity and emotion lexicons, in automatic emotion detection. Next, we provide a description of the more advanced features chosen for characterizing emotional content based on the syntactic structure of sentences, as well as the machine learning techniques adopted for emotion classification.
The main novelty of our learning methodology is that it breaks down a problem into
hierarchical steps. It starts from a simpler problem to solve, and uses what is learnt to
extend the solution to solve harder problems. Here, we learn the emotion of sentences with one emotion word and extend the solution to sentences with more than one emotion word.
Next, we frame the detection of causes of emotions as finding a stimulus frame element as defined for the emotion frame in FrameNet – a lexical database of English based on the theory of meaning called Frame Semantics, which was built by manually annotating examples of how words are used in actual texts. According to FrameNet, an emotion stimulus is the person, event, or state of affairs that evokes the emotional response in the Experiencer. We believe it is the closest definition to emotion cause in order to answer why the experiencer feels that emotion.
We create the first ever dataset annotated with both emotion stimulus and emotion class; it can be used for evaluation or training purposes. We applied sequential learning methods to the dataset. We explored syntactic and semantic features in addition to corpus-based features. We built a model which outperforms all our carefully-built baselines. To show the robustness of our model and to study the problem more thoroughly, we apply those models to another dataset (that we used for the first part as well) to go deeper than detecting the emotion expressed and also detect the stimulus span which explains why the emotion was felt.
Although we first address emotion expression and emotion stimulus independently, we
believe that an emotion stimulus and the emotion itself are not mutually independent. In the last part, we address the relation between emotion expression and emotion stimulus by considering four cases: both emotion expression and emotion stimulus occur at the same time; neither appears in the text; only the emotion expression appears; or only the emotion stimulus exists with no explicit mention of the emotion expressed. We found the last case the most challenging, so we study it in more detail.
Finally, we showcase how a clinical psychology application can benefit from our research. We also conclude our work and explain the future directions of this research.
Note: see http://www.eecs.uottawa.ca/~diana/resources/emotion_stimulus_data/
for all the data built for this thesis and discussed in it.
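Stimulus-span detection with sequential learning methods is typically framed as token-level tagging, most commonly with BIO labels marking where the stimulus span begins and continues. A small sketch with an invented example sentence (the actual feature set and learner used in the thesis are not shown):

```python
def spans_to_bio(tokens, span):
    """Encode a (start, end) stimulus span over a token list as BIO tags,
    the label scheme usually fed to sequential learners such as CRFs."""
    start, end = span
    tags = []
    for i, _ in enumerate(tokens):
        if i == start:
            tags.append("B-STIM")      # first token of the stimulus span
        elif start < i < end:
            tags.append("I-STIM")      # continuation of the span
        else:
            tags.append("O")           # outside any stimulus span
    return tags

tokens = "She was thrilled because she passed the exam".split()
# Hypothetical gold stimulus: "because she passed the exam" = tokens 3..7
print(list(zip(tokens, spans_to_bio(tokens, (3, 8)))))
```

A tagger trained on such sequences can then recover the stimulus span of an unseen sentence by decoding its predicted tag sequence back into a (start, end) pair.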
|
6 |
EXPLORING PSEUDO-TOPIC-MODELING FOR CREATING AUTOMATED DISTANT-ANNOTATION SYSTEMS. Sommers, Alexander Mitchell. 01 September 2021 (has links)
We explore the use of a pseudo-topic-model imitating Latent Dirichlet Allocation (LDA), based on our original relevance metric, as a tool to facilitate distant annotation of short (often one to two sentences or fewer) documents. Our exploration manifests as annotating tweets for emotions, this being the current use-case of interest to us, but we believe the method could be extended to any multi-class labeling task over documents of similar length. Tweets are gathered via the Twitter API using "track" terms thought likely to capture tweets with a greater chance of exhibiting each emotional class: 3,000 tweets for each of 26 topics anticipated to elicit emotional discourse. Our pseudo-topic-model is used to produce relevance-ranked vocabularies for each corpus of tweets, and these are used to distribute emotional annotations to those tweets not manually annotated, magnifying the number of annotated tweets by a factor of 29. The vector labels the annotators produce for the topics are cascaded out to the tweets via three different schemes, which are compared for performance by proxy through the competition of bidirectional LSTMs trained using the tweets labeled at a distance. An SVM and two emotionally annotated vocabularies are also tested on each task to provide context and comparison.
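The label-cascading idea, distributing a topic-level annotation to unannotated tweets through a relevance-ranked vocabulary, can be sketched as follows. The overlap threshold and all data are invented, and the three actual cascading schemes compared in the thesis are not reproduced here:

```python
def cascade_label(tweet, topic_vocab, topic_label, threshold=0.25):
    """Distribute a topic-level emotion label vector to an unannotated
    tweet when enough of its words appear in the topic's relevance-ranked
    vocabulary. The overlap threshold is an invented heuristic."""
    words = set(tweet.lower().split())
    overlap = len(words & topic_vocab) / max(len(words), 1)
    return topic_label if overlap >= threshold else None

# Hypothetical relevance-ranked vocabulary and annotator label vector
vocab = {"shooting", "scared", "terrified", "prayers"}
label = {"fear": 1, "sadness": 1, "joy": 0}
print(cascade_label("so scared after the shooting downtown", vocab, label))
```

Tweets whose vocabulary overlap falls below the threshold simply stay unannotated, which keeps the distant labels conservative at the cost of coverage.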
|
7 |
EMOTION DISCOVERY IN HINDI-ENGLISH CODE-MIXED CONVERSATIONS. Monika Vyas (18431835). 28 April 2024 (has links)
This thesis delves into emotion recognition in Hindi-English code-mixed dialogues, particularly focusing on romanized text, which is essential for understanding multilingual communication dynamics. Using a dataset from bilingual television shows, the study employs machine learning and natural language processing techniques, with models like Support Vector Machine, Logistic Regression, and XLM-Roberta tailored to handle the nuances of code-switching and transliteration in romanized Hindi-English. To combat challenges such as data imbalance, SMOTE (Synthetic Minority Over-sampling Technique) is utilized, enhancing model training and generalization. The research also explores ensemble learning with methods like VotingClassifier to improve emotional classification accuracy. Logistic regression stands out for its high accuracy and robustness, demonstrated through rigorous cross-validation. The findings underscore the potential of advanced machine learning models and advocate for further exploration of deep learning and multimodal data to enhance emotion detection in diverse linguistic settings.
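SMOTE, mentioned above for combating class imbalance, synthesizes new minority-class samples by interpolating between a real sample and one of its nearest neighbours. A simplified stdlib-only sketch of that interpolation step (not the imblearn implementation this kind of work would normally use):

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate synthetic minority-class points by interpolating between
    a random sample and one of its k nearest neighbours, mimicking the
    core step of SMOTE (a simplified sketch, not the imblearn API)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x among the other minority points
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        n = rng.choice(neighbours)
        gap = rng.random()  # position along the segment from x to n
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, n)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
print(smote_like(minority, n_new=4))
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class fills in its own region of feature space rather than duplicating points.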
|
8 |
Thermal emotion recognition (Reconnaissance de l'émotion thermique). Fu, Yang. 05 1900 (has links)
To improve computer-human interactions in the areas of healthcare, e-learning and video
games, many researchers have studied on recognizing emotions from text, speech, facial
expressions, emotion detection, or electroencephalography (EEG) signals. Among them,
emotion recognition using EEG has achieved satisfying accuracy. However, wearing
electroencephalography devices limits the range of user movement, thus a noninvasive method
is required to facilitate emotion detection and its applications. That is why we proposed using
a thermal camera to capture skin temperature changes and then applying machine learning
algorithms to classify emotion changes accordingly. This thesis contains two studies on thermal
emotion detection in comparison with EEG-based emotion detection. One was to find out the
thermal emotion detection profiles in comparison with EEG-based emotion detection technology;
the other was to implement an application with deep machine learning algorithms to visually
display both thermal and EEG based emotion detection accuracy and performance. In the first
research, we applied HMM in thermal emotion recognition, and after comparing with EEG-based
emotion detection, we identified skin temperature emotion-related features in terms of intensity
and rapidity. In the second research, we implemented an emotion detection application
supporting both thermal emotion detection and EEG-based emotion detection with applying the
deep machine learning methods – Convolutional Neural Network (CNN) and Long Short-Term
Memory (LSTM). The accuracy of thermal-image-based emotion detection reached 52.59%
and the accuracy of EEG-based detection reached 67.05%. In a further study, we will do more
research on adjusting machine learning algorithms to improve the thermal emotion detection
precision.
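Classifying with HMMs, as in the first study described above, usually means scoring an observation sequence under each emotion's model with the forward algorithm and picking the highest-likelihood model. A toy sketch with invented parameters over discretised skin-temperature changes (not the thesis's actual models):

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward-algorithm likelihood of an observation sequence under an
    HMM: the score used to pick the most likely emotion class when each
    emotion is modelled by its own HMM."""
    # Initialise with start probabilities times first emission.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Standard recursion: sum over predecessor states.
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# Toy HMM for one emotion class over discretised temperature changes.
states = ["low", "high"]
start = {"low": 0.6, "high": 0.4}
trans = {"low": {"low": 0.7, "high": 0.3}, "high": {"low": 0.4, "high": 0.6}}
emit = {"low": {"cool": 0.8, "warm": 0.2}, "high": {"cool": 0.3, "warm": 0.7}}
print(forward(["cool", "warm", "warm"], states, start, trans, emit))
```

With one such model trained per emotion, a new temperature sequence would be assigned to the emotion whose model gives it the highest forward likelihood.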
|
9 |
Methods and resources for sentiment analysis in multilingual documents of different text types. Balahur Dobrescu, Alexandra. 13 June 2011 (has links)
No description available.
|
10 |
Emotion Detection from Electroencephalography Data with Machine Learning : Classification of emotions elicited by auditory stimuli from music on self-collected data sets. Söderqvist, Filip. January 2021 (has links)
The recent advances in deep learning have made it state-of-the-art for many different tasks, making its potential usefulness for analyzing electroencephalography (EEG) data appealing. This study aims at automatic feature extraction and classification of likeability, valence, and arousal elicited by auditory stimuli from music by training deep neural networks (DNNs) on minimally pre-processed multivariate EEG time series. Two data sets were collected, the first containing 840 samples from 21 subjects, the second containing 400 samples from a single subject. Each sample consists of a 30-second EEG stream recorded during music playback. Each subject in the multiple-subject data set was played 40 different songs from 8 categories, after which they were asked to self-label their opinion of the song and the emotional response it elicited. Different pre-processing and data augmentation methods were tested on the data before it was fed to the DNNs. Three different network architectures were implemented and tested, including a one-dimensional translation of ResNet18, InceptionTime, and a novel architecture built upon InceptionTime, dubbed EEGNet. The classification tasks were posed both as a binary and a three-class classification problem. The results from the DNNs were compared to three different methods of handcrafted feature extraction. The handcrafted features were used to train LightGBM models, which served as a baseline. The experiments showed that the DNNs struggled to extract relevant features to discriminate between the different targets, as the results were close to random guessing. The experiments with the baseline models showed indications of generalizability in the data, as all 36 experiments performed better than random guessing. The best results were a classification accuracy of 64% and an AUC of 0.638 for binary valence classification on the multiple-subject data set. The background study discovered many flaws and unclarities in the published work on the topic. 
Therefore, future work should not rely too much on these papers and should explore other network architectures that can extract the relevant features needed to classify likeability and emotion from EEG data.
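One common way to multiply the number of EEG training examples, in the spirit of the data augmentation step mentioned above, is to cut each recorded stream into overlapping fixed-width windows. A stdlib-only sketch (the window width, stride, and toy signal are invented; the thesis's exact pipeline is not reproduced):

```python
def sliding_windows(signal, width, stride):
    """Cut a multivariate time series (a list of per-sample channel
    tuples) into overlapping fixed-width windows, a simple augmentation
    that multiplies the number of training examples."""
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, stride)]

# Hypothetical 2-channel EEG stream of 10 samples.
eeg = [(0.1 * t, -0.1 * t) for t in range(10)]
windows = sliding_windows(eeg, width=4, stride=2)
print(len(windows), len(windows[0]))
```

Each window inherits the label of its parent recording, so a 30-second stream can yield many labelled examples; care is needed to keep windows from one recording out of both the training and test splits.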
|