Analysis of Sign Language Facial Expressions and Deaf Students' Retention Using Machine Learning and Agent-based Modeling

There are currently about 466 million people worldwide with a hearing disability, and that number is expected to rise to 900 million by 2050. About 15% of American adults have hearing disabilities, and about three in every 1,000 U.S. children are born with hearing loss in one or both ears. The World Health Organization (WHO) estimates that unaddressed hearing loss imposes an annual global cost of $980 billion, including the cost of educational support, lost productivity, and societal costs. These figures show that people with hearing loss face difficulties of many kinds and degrees. In this dissertation, we address two main challenges facing hearing-impaired people: sign language recognition and post-secondary education. Both sign language recognition and education systems that properly support the deaf community are essential needs, and this dissertation targets exactly these problems. For the first part, we introduce a novel dataset and a machine learning methodology; for the second part, we propose a novel agent-based modeling framework.

Facial expressions are an important part of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language remain scarce. In this dissertation, we introduce FePh, an annotated sequenced facial expression dataset in the context of sign language, comprising over 3,000 facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of existing facial expression datasets, FePh provides sequenced, semi-blurry facial images with different head poses, orientations, and movements. In addition, in most of the images the identities are mouthing words, which makes the data more challenging. To annotate the dataset, we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy", plus a "None" class for images whose facial expression cannot be described by any of these emotions. Although we present FePh as a facial expression dataset of signers in sign language, it has wider applications in gesture recognition and Human-Computer Interaction (HCI) systems.

Post-secondary education persistence is the likelihood that a student remains in post-secondary education. Although statistics show that post-secondary persistence of deaf students has increased recently, many obstacles still keep students from completing their post-secondary degree goals. Increasing the persistence rate is therefore crucial to advancing the educational and work goals of deaf students. In this work, we present an agent-based model, built in the NetLogo software, of the persistence of deaf students. We consider four non-cognitive factors that influence a deaf student's departure decision: having clear goals, social integration, social skills, and academic experience. The progress and results of this work suggest that agent-based modeling promises a better understanding of what increases persistence.
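To make the agent-based idea concrete, the following is a minimal, hypothetical Python sketch (not the dissertation's NetLogo model): each student agent carries the four non-cognitive factors named above, and each simulated semester the agent decides whether to depart based on a combined score of those factors. All weights, thresholds, and the update rule are illustrative assumptions, not values from the dissertation.

```python
# Toy agent-based sketch of student persistence.
# Assumption: factors are scored in [0, 1]; the departure rule below is illustrative only.
import random
from dataclasses import dataclass


@dataclass
class Student:
    clear_goals: float          # 0..1, how well-defined the student's goals are
    social_integration: float   # 0..1, integration into campus/community life
    social_skills: float        # 0..1, interpersonal skills
    academic_experience: float  # 0..1, quality of academic experience
    enrolled: bool = True

    def departure_probability(self) -> float:
        # Illustrative rule: weaker non-cognitive support raises the chance of departure.
        support = (self.clear_goals + self.social_integration +
                   self.social_skills + self.academic_experience) / 4.0
        return max(0.0, 0.5 - 0.5 * support)  # assumed linear mapping


def simulate(n_students: int = 1000, n_semesters: int = 8, seed: int = 42) -> float:
    """Run the toy simulation and return the fraction of students still enrolled."""
    rng = random.Random(seed)
    students = [Student(rng.random(), rng.random(), rng.random(), rng.random())
                for _ in range(n_students)]
    for _ in range(n_semesters):
        for s in students:
            if s.enrolled and rng.random() < s.departure_probability():
                s.enrolled = False
    return sum(s.enrolled for s in students) / n_students


if __name__ == "__main__":
    print(f"Persistence rate after 8 semesters: {simulate():.2%}")
```

In the actual dissertation the model is implemented in NetLogo and the factor dynamics are calibrated to the deaf-student persistence literature; this sketch only illustrates the general structure of agents, factors, and a per-semester departure decision.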

Identifier: oai:union.ndltd.org:ucf.edu/oai:stars.library.ucf.edu:etd2020-2316
Date: 01 January 2021
Creators: Alaghband, Marie
Publisher: STARS
Source Sets: University of Central Florida
Language: English
Detected Language: English
Type: text
Format: application/pdf
Source: Electronic Theses and Dissertations, 2020-