101

Teaching Social-Emotional Learning to Children With Autism Using Animated Avatar Video Modeling

Davis, Emelie 12 December 2022 (has links)
People with a diagnosis of autism spectrum disorder (ASD) often have difficulty understanding or applying skills related to Social-Emotional Learning (SEL). A better understanding of SEL concepts is generally associated with more fulfilling connections with others and increased life satisfaction. Since people with ASD tend to learn more successfully in structured environments, we created a module to teach these skills using Nearpod. The modules were built around videos of a person embodying a cartoon dog face using Animoji, chosen for two reasons: the animation was meant to appeal to children, and the tool was user-friendly enough that teachers could potentially create or replicate the model themselves. Along with these videos, the modules included multiple-choice questions about lesson content and about scenarios portraying different emotions. Participants came to a research lab, where they completed the modules at a computer under researcher supervision. The results showed little to no trend between baseline and intervention sessions across the four participants. While Nearpod could be a useful tool for parents or teachers to create and present video modeling lessons, participants had difficulty navigating the modules without researcher support because of the modules' length, participants' distractibility, and difficulty using the technology. Future research might deliver similar content with animated avatars through shorter, more child-friendly delivery methods.
102

How do voiceprints age?

Nachesa, Maya Konstantinovna January 2023 (has links)
Voiceprints, like fingerprints, are a biometric. Where fingerprints record the unique pattern on a person's finger, voiceprints record what a person's voice "sounds like", abstracting away from what the person said. They have been used in speaker recognition, including verification and identification; in other words, to ask "is this speaker who they say they are?" or "who is this speaker?", respectively. However, people age, and so do their voices. Do voiceprints age, too? That is, can a person's voice change enough that, after a while, the original voiceprint can no longer be used to identify them? In this thesis, I use Swedish audio recordings of debate speeches from the Riksdag (the Swedish parliament) to test this idea. The answer influences how well we can search the database for previously unmarked speeches. I find that speaker verification performance decreases as the age gap between voiceprints increases, and that it decreases more sharply after roughly five years. Grouping the speakers into five-year age brackets, I found that speaker verification performs best for speakers whose initial voiceprint was recorded between 29 and 33 years of age. Additionally, longer input speech yields higher-quality voiceprints, with performance improvements plateauing once voiceprints exceed 30 seconds. Finally, voiceprints for men age more strongly than those for women after roughly five years. I also investigated how emotions are encoded in voiceprints, since this could potentially impede speaker recognition. I found that it is possible to train a classifier to recognise emotions from voiceprints, and that this classifier does better when recognising emotions from known speakers. That is, emotions are encoded more characteristically per person than per emotion, so they are unlikely to interfere with speaker recognition.
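The verification setup this abstract describes can be sketched minimally. The assumptions here are mine, not the thesis's: voiceprints are treated as fixed-length embedding vectors (the 192-dimensional size and the cosine-similarity scoring with a 0.7 threshold are illustrative placeholders; the thesis does not state its scoring method), and the "aged" voiceprint is simulated as random drift.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(enrolled: np.ndarray, probe: np.ndarray,
                 threshold: float = 0.7) -> bool:
    # Verification: accept the claimed identity if the probe's
    # similarity to the enrolled voiceprint exceeds a tuned threshold.
    return cosine_similarity(enrolled, probe) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=192)                    # initial voiceprint
aged = enrolled + 0.1 * rng.normal(size=192)       # simulated drift over time
accepted = same_speaker(enrolled, aged)
```

In this framing, "voiceprint aging" would show up as the true-speaker similarity drifting down toward the threshold as the gap between enrollment and probe grows.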
103

Multimodal Emotion Recognition Using Temporal Convolutional Networks

Harb, Hussein 19 July 2023 (has links)
Over the past decade, the field of affective computing has received increasing attention. With advancements in machine learning, a wide range of methodologies have been developed to better understand human emotions. However, one of the major challenges in this field is accurately modeling emotions on a set of continuous dimensions, such as arousal and valence. This type of modeling is essential to represent complex and subtle emotions, and to capture the full spectrum of human emotional experiences. Additionally, predicting changes in emotions across time series adds another layer of complexity, as emotions can shift continuously. Our work addresses these challenges using a dataset that includes natural and spontaneous emotions from diverse individuals. We extract multiple features from different modalities, including audio, video, and text, and use them to predict emotions across three axes: arousal, valence, and liking. To achieve this, we employ deep features and multiple fusion techniques to combine the modalities. Our results demonstrate that temporal convolutional networks outperform long short-term memory models in multimodal emotion prediction. Overall, our research contributes to advancing the field of affective computing by developing more accurate and comprehensive methods for modeling and predicting human emotions.
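The key property that lets temporal convolutional networks model emotion over time is the causal dilated convolution: the output at time t depends only on inputs at or before t. A minimal sketch of that one building block (my own illustration; the thesis's actual architecture, kernel sizes, and weights are not specified here):

```python
import numpy as np

def causal_dilated_conv(x: np.ndarray, w: np.ndarray,
                        dilation: int = 1) -> np.ndarray:
    """1-D causal dilated convolution: y[t] uses only x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad: no future leakage
    return np.array([
        sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

x = np.arange(6, dtype=float)                       # toy arousal signal
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
# y[t] = x[t] + x[t-2], with out-of-range inputs zero-padded
```

Stacking such layers with growing dilations (1, 2, 4, ...) gives the long receptive field that, per the abstract's findings, outperformed LSTMs for continuous arousal/valence/liking prediction.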
104

The Relationship Between Visual Attention and Emotion Knowledge in Children with Attention-Deficit Hyperactivity Disorder

Serrano, Verenea J. 12 June 2014 (has links)
No description available.
105

Evaluating the Effectiveness of a Combined Emotion Recognition and Emotion Regulation Intervention for Preschool Children with Autism Spectrum Disorder

Walker, Bethany Lynn 23 June 2017 (has links)
No description available.
106

Exploring Social Information Processing of Emotion Content and its Relationship with Social Outcomes in Children at-risk for Attention-Deficit/Hyperactivity Disorder

Serrano, Verenea J. 19 September 2017 (has links)
No description available.
107

Effectiveness of a Social Skills Curriculum on Preschool Prosocial Behavior and Emotion Recognition

Kuebel, Laura A. 28 August 2017 (has links)
No description available.
108

FACIAL AFFECT RECOGNITION DEFICITS IN BIPOLAR DISORDER

Getz, Glen Edward 11 October 2001 (has links)
No description available.
109

Aging and Emotion Recognition: An Examination of Stimulus and Attentional Mechanisms

Sedall, Stephanie Nicole 25 May 2016 (has links)
No description available.
110

Social Anxiety Symptoms, Heart Rate Variability, and Vocal Emotion Recognition: Evidence of a Normative Vagally-Mediated Positivity Bias in Women

Madison, Annelise Alissa 30 September 2019 (has links)
No description available.
