131 |
Multi-objective optimization for model selection in music classification. Ujihara, Rintaro. January 2021.
With the breakthrough of machine learning techniques, research on music emotion classification has made notable progress by combining various audio features with state-of-the-art machine learning models. Still, how to preprocess music samples and which classification algorithm to choose depend on the data set and the objective of each project. The collaborating company of this thesis, Ichigoichie AB, is currently developing a system to categorize music data into positive/negative classes. To enhance the accuracy of the existing system, this project aims to identify the best model through experiments with six audio features (Mel spectrogram, MFCC, HPSS, Onset, CENS, Tonnetz) and several machine learning models, including deep neural network models, for the classification task. For each model, hyperparameter tuning is performed and the model evaluation is carried out according to Pareto optimality with regard to accuracy and execution time. The results show that the most promising model achieved 95% correct classification with an execution time of less than 15 seconds.
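As an illustration of the selection criterion this abstract describes, the following is a minimal Python sketch of picking Pareto-optimal models with respect to accuracy and execution time; the model names and scores are invented for illustration, not taken from the thesis.

```python
def pareto_front(models):
    # `models` is a list of (name, accuracy, exec_time_s) tuples.
    # A model is Pareto-optimal if no other model is at least as
    # accurate AND at least as fast, with a strict improvement in one.
    front = []
    for name, acc, t in models:
        dominated = any(
            acc2 >= acc and t2 <= t and (acc2 > acc or t2 < t)
            for _, acc2, t2 in models
        )
        if not dominated:
            front.append((name, acc, t))
    return front

# Hypothetical results in the spirit of the experiments described above.
results = [
    ("svm_mfcc", 0.91, 4.2),
    ("cnn_mel_spectrogram", 0.95, 14.8),
    ("dnn_tonnetz", 0.89, 9.0),
]
print(pareto_front(results))  # svm_mfcc and cnn_mel_spectrogram survive
```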
|
132 |
Digital game-based learning and socioemotional skills : A quasi-experimental study of the effectiveness of digital game-based learning on the socioemotional skills of children with intellectual disability. Vorkapic, Robert; Christiansson, Nora. January 2022.
Digital games are increasingly being implemented in the educational sector to improve various skills. Among these, the effectiveness of digital game-based learning (DGBL) on the socio-emotional ability of individuals has been investigated, with overall positive results. However, only a limited number of studies have investigated the effectiveness of DGBL on this ability in children with intellectual disability, and no studies have researched whether DGBL could improve the socio-emotional skills of children with this form of disability. Thus, the current study aimed to investigate the effectiveness of DGBL on the specific socio-emotional skill of emotion recognition in children with intellectual disability in the educational sector. The following research question was formulated: Does DGBL increase the socio-emotional skill of emotion recognition in children with intellectual disability? To answer this question, a quasi-experimental one-group pretest-posttest design was adopted, in which participants engaged in a DGBL intervention where they played a game aimed at improving their emotion recognition ability. Participants were selected via purposive sampling, and the final sample consisted of N = 7 children with intellectual disability, aged six to nine, of both male and female gender. The experiment consisted of three parts: a pretest where data on the socio-emotional skill of emotion recognition was collected, the actual intervention where the participants played a digital game, and lastly a posttest where the skill of emotion recognition was measured again. The data was subsequently analyzed via a paired-sample t-test. The results showed that DGBL did not significantly increase the socio-emotional skill of emotion recognition in children with intellectual disability. This result in part contradicts earlier research on DGBL and intellectual disability, as well as on DGBL and socio-emotional skills, where significant effects have been identified. However, since no previous research has investigated whether DGBL could be effective in increasing the socio-emotional skills of children with intellectual disability, future research is needed to confirm or reject the present results. In summary, the current research has extended current knowledge and provided important implications for the field of special education.
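The statistical test the abstract mentions takes only a few lines to run; below is a minimal sketch with invented pretest/posttest scores (the study's raw data is not reproduced here), assuming SciPy is available.

```python
from scipy import stats

# Hypothetical emotion-recognition scores for N = 7 participants.
pretest = [12, 9, 14, 10, 11, 8, 13]
posttest = [13, 9, 15, 10, 12, 9, 13]

# Paired-sample t-test: each child is compared with themselves.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value >= 0.05 would match the study's finding of no
# significant improvement after the intervention.
```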
|
133 |
Evolving user-specific emotion recognition model via incremental genetic programming. Yusuf, Rahadian. 22 March 2017.
This thesis proposes a methodology for evolving an emotion recognition model targeted at a specific user via incremental genetic programming. Using genetic programming, which represents solutions as tree structures of features, a user-adaptive emotion recognition model is evolved from the output of a pervasive sensor that captures facial expression data, including temporal information. To cope with the nondeterminism, lack of generalization, and overfitting of genetic programming, an incremental mechanism that unfolds the evolution gradually is incorporated. / This research proposes a model to tackle challenges common in emotion recognition based on facial expression. First, we use a pervasive sensor and environment, enabling natural expressions from the user, as opposed to the unnatural expressions found in large datasets. Second, the model analyzes relevant temporal information, unlike much other research. Third, we employ a user-specific approach with adaptation to the user. We also show that the model evolved by genetic programming can be analyzed to understand how it works, rather than being a black-box model. / Doctor of Philosophy in Engineering / Doshisha University
|
134 |
Speech Emotion Recognition from Raw Audio using Deep Learning. Rintala, Jonathan. January 2020.
Traditionally, in speech emotion recognition, models require a large number of manually engineered features and intermediate representations, such as spectrograms, for training. However, hand-engineering such features often requires both expert domain knowledge and resources. Recently, with the emerging paradigm of deep learning, end-to-end models that extract features themselves and learn directly from the raw speech signal have been explored. One previous approach has been to combine multiple parallel CNNs with different filter lengths to extract multiple temporal features from the audio signal, and then feed the resulting sequence to a recurrent block. Other recent work reports high accuracies when utilizing local feature learning blocks (LFLBs) to reduce the dimensionality of a raw audio signal and extract the most important information. Thus, this study combines the idea of LFLBs for feature extraction with a block of parallel CNNs with different filter lengths for capturing multi-temporal features; the result is finally fed into an LSTM layer for global contextual feature learning. To the best of our knowledge, such a combined architecture has not yet been properly investigated. Further, this study investigates different configurations of such an architecture. The proposed model is trained and evaluated on the well-known speech databases EmoDB and RAVDESS, both in a speaker-dependent and a speaker-independent manner. The results indicate that the proposed architecture can produce results comparable with the state of the art, despite excluding data augmentation and advanced pre-processing. Three parallel CNN pipes yielded the highest accuracy, together with a series of modified LFLBs that utilize average pooling and ReLU activation. This shows the power of leaving the feature learning up to the network, and opens up interesting future research on time complexity and the trade-off between introducing complexity in pre-processing or in the model architecture itself.
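For a concrete picture of the combined architecture the abstract describes (LFLBs on raw audio, parallel CNN pipes with different filter lengths, then an LSTM), here is a minimal PyTorch sketch; the layer counts, channel sizes, and pooling factors are illustrative assumptions, not the thesis's exact configuration.

```python
import torch
import torch.nn as nn

class LFLB(nn.Module):
    """Local feature learning block: Conv1d + BatchNorm + ReLU + average pooling."""
    def __init__(self, c_in, c_out, pool=4):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm1d(c_out),
            nn.ReLU(),
            nn.AvgPool1d(pool),
        )
    def forward(self, x):
        return self.block(x)

class SERModel(nn.Module):
    def __init__(self, n_classes=7):  # e.g. the seven EmoDB classes
        super().__init__()
        self.lflbs = nn.Sequential(LFLB(1, 32), LFLB(32, 64), LFLB(64, 64))
        # Three parallel CNN "pipes" with different filter lengths.
        self.pipes = nn.ModuleList([
            nn.Conv1d(64, 32, kernel_size=k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.lstm = nn.LSTM(input_size=96, hidden_size=64, batch_first=True)
        self.out = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, 1, samples) raw audio
        h = self.lflbs(x)            # (batch, 64, T), T reduced by pooling
        h = torch.cat([p(h) for p in self.pipes], dim=1)  # (batch, 96, T)
        h = h.transpose(1, 2)        # (batch, T, 96) for the LSTM
        _, (hn, _) = self.lstm(h)
        return self.out(hn[-1])      # class logits

logits = SERModel()(torch.randn(2, 1, 16000))  # two 1-second 16 kHz clips
print(logits.shape)                            # torch.Size([2, 7])
```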
|
135 |
Image Emotion Analysis: Facial Expressions vs. Perceived Expressions. Ayyalasomayajula, Meghana. 20 December 2022.
No description available.
|
136 |
Automated Multimodal Emotion Recognition. Fernández Carbonell, Marcos. January 2020.
Being able to read and interpret affective states plays a significant role in human society. However, this is difficult in some situations, especially when information is limited to either vocal or visual cues. Many researchers have investigated the so-called basic emotions in a supervised way. This thesis presents the results of a multimodal supervised and unsupervised study of a more realistic number of emotions. To that end, audio and video features are extracted from the GEMEP dataset employing openSMILE and OpenFace, respectively. The supervised approach includes the comparison of multiple solutions and proves that multimodal pipelines can outperform unimodal ones, even with a higher number of affective states. The unsupervised approach embraces a traditional and an exploratory method to find meaningful patterns in the multimodal dataset. It also contains an innovative procedure to better understand the output of clustering techniques.
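As a rough illustration of the unsupervised side of such a pipeline, the sketch below fuses precomputed audio and video feature vectors and clusters them. The feature dimensions, number of clusters, and random data are assumptions for illustration only; the thesis's actual features come from openSMILE and OpenFace.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
audio_feats = rng.normal(size=(120, 88))  # e.g. one eGeMAPS-style vector per clip
video_feats = rng.normal(size=(120, 49))  # e.g. pooled facial-descriptor vectors

# Early (feature-level) fusion: scale each modality, then concatenate.
fused = np.hstack([
    StandardScaler().fit_transform(audio_feats),
    StandardScaler().fit_transform(video_feats),
])

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(fused)
print(np.bincount(labels))  # cluster sizes as a quick sanity check
```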
|
137 |
Exploring the Roles of Adolescent Emotion Regulation, Recognition, and Socialization in Severe Illness: A Comparison Between Anorexia Nervosa and Chronic Pain. Hughes-Scalise, Abigail T. 02 September 2014.
No description available.
|
138 |
Development of an information platform for data exchange for managing technology transfer : master's thesis. Kochetov, R. V. January 2023.
The object of the research is machine learning methods that allow filtering data, and methods for developing information platforms. Filtering of this kind is used in areas such as search engines to return relevant results to the user based on a query. The subject of the research is the development of a machine learning model that filters text data, and of an information platform to display the filtered data. Distinctive features of the study are the open implementation of the full project, that is, it is available to everyone, and the possibility of modifying it. A self-compiled set of scientific papers was used to train the model, and the information platform was developed from scratch. The final LSTM model, chosen by comparing metrics, predicted whether a paper matched the target topic with 90% accuracy, which suggests it could be integrated into the relevant Internet resources, since it would substantially reduce the volume of scientific papers that must be checked manually.
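A minimal PyTorch sketch of the kind of LSTM title classifier the abstract selects might look as follows; the vocabulary size, layer sizes, and the random batch are illustrative assumptions, not the dissertation's configuration.

```python
import torch
import torch.nn as nn

class TitleFilter(nn.Module):
    """Classifies a tokenized paper title as on-topic (1) or off-topic (0)."""
    def __init__(self, vocab_size=20000, emb_dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one logit per title

    def forward(self, token_ids):         # (batch, seq_len) of int token ids
        _, (hn, _) = self.lstm(self.emb(token_ids))
        return self.head(hn[-1]).squeeze(-1)

model = TitleFilter()
fake_batch = torch.randint(1, 20000, (8, 12))  # 8 titles, 12 tokens each
loss = nn.BCEWithLogitsLoss()(model(fake_batch), torch.ones(8))
print(float(loss))
```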
|
139 |
Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses. Marín Morales, Javier. 27 July 2020.
In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction.
Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification.
Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment. Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. Moreover, to our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments.
The main objective of this thesis is, using psycho-physiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and a virtual environment, to validate immersive VR as an emotion elicitation tool. To do so, we develop an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly realistic 3D virtualization of the same art exhibition.
This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, it analyses the validity of VR by performing a direct comparison using a highly realistic simulation. We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods. Moreover, its synergy with physiological measurement and machine learning techniques will have a transversal impact on many other research areas, such as architecture, health, assessment, training, education, driving and marketing, and thus open new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Doctoral thesis, by compendium]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
|
140 |
Detecting and Explaining Emotional Reactions in Personal Narrative. Turcan, Elsbeth. January 2024.
It is no longer any secret that people worldwide are struggling with their mental health, in terms of diagnostic disorders as well as non-diagnostic measures like perceived stress. Barriers to receiving professional mental healthcare are significant, and even in locations where the availability of such care is increasing, our infrastructures are not equipped to find people the support they need. Meanwhile, in a highly-connected digital world, many people turn to outlets like social media to express themselves and their struggles and interact with like-minded others.
This setting, where human experts are overwhelmed and human patients are acutely in need, is one in which we believe artificial intelligence (AI) and natural language processing (NLP) systems have great potential to do good. At the same time, we must acknowledge the limitations of our models and strive to deploy them responsibly alongside human experts, such that their logic and mistakes are transparent. We argue that models that make and explain their predictions in ways guided by domain-specific research will be more understandable to humans, who can benefit from the models' statistical knowledge but use their own judgment to mitigate the models' mistakes.
In this thesis, we leverage domain expertise in the form of psychology research to develop models for two categories of emotional tasks: identifying emotional reactions in text and explaining the causes of emotional reactions. The first half of the thesis covers our work on detecting emotional reactions, where we focus on a particular, understudied type of emotional reaction: psychological distress. We present our original dataset, Dreaddit, gathered for this problem from the social media website Reddit, as well as some baseline analysis and benchmarking that shows psychological distress detection is a challenging problem. Drawing on literature that connects particular emotions to the experience of distress, we then develop several multitask models that incorporate basic emotion detection, and quantitatively change the way our distress models make their predictions to make them more readily understandable.
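As a sketch of the multitask idea described above (a shared representation with one head for distress and one for basic emotions, trained with a summed loss), consider the following; the encoder dimension, number of emotion classes, and unweighted loss sum are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultitaskDistressModel(nn.Module):
    def __init__(self, in_dim=768, n_emotions=6):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.distress_head = nn.Linear(256, 1)          # binary distress logit
        self.emotion_head = nn.Linear(256, n_emotions)  # basic-emotion logits

    def forward(self, text_embedding):  # e.g. a pooled sentence embedding
        h = self.shared(text_embedding)
        return self.distress_head(h).squeeze(-1), self.emotion_head(h)

model = MultitaskDistressModel()
x = torch.randn(4, 768)  # a fake batch of 4 post embeddings
d_logit, e_logits = model(x)
loss = (nn.BCEWithLogitsLoss()(d_logit, torch.ones(4))
        + nn.CrossEntropyLoss()(e_logits, torch.zeros(4, dtype=torch.long)))
print(float(loss))
```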
Then, the second half of the thesis expands our scope to consider not only the emotional reaction being experienced, but also its cause. We treat this cause identification problem first as a span extraction problem in news headlines, where we employ multitask learning (jointly with basic emotion classification) and commonsense reasoning; and then as a free-form generation task in response to a long-form Reddit post, where we leverage the capabilities of large language models (LLMs) and their distilled student models. Here, as well, multitask learning with basic emotion detection is beneficial to cause identification in both settings.
Our contributions in this thesis are fourfold. First, we produce a dataset for psychological distress detection, as well as emotion-infused models that incorporate emotion detection for this task. Second, we present multitask and commonsense-infused models for joint emotion detection and emotion cause extraction, showing increased performance on both tasks. Third, we produce a dataset for the new problem of emotion-focused explanation, as well as characterization of the abilities of distilled generation models for this problem. Finally, we take an overarching approach to these problems inspired by psychology theory that incorporates expert knowledge into our models where possible, enhancing explainability and performance.
|