The rapid growth of user-generated text on social media platforms has made manually
moderating toxic language an increasingly challenging task.
Consequently, researchers have turned to artificial intelligence (AI) and machine learning
(ML) models to detect and classify toxic comments automatically. However, these models
often exhibit unintended bias against comments containing sensitive terms related to demographic groups, such as race and gender, leading to unfair classifications.
In addition, most existing research on this topic focuses on fully supervised learning frameworks; because annotating large amounts of data is difficult, there is a growing need to explore fairness in semi-supervised toxicity detection. In this thesis, we aim
to address this gap by developing a fair generative semi-supervised framework for mitigating social bias in toxicity text classification. The framework consists of two parts: first, we train a semi-supervised generative text classification model on benchmark toxicity datasets; second, we mitigate social bias in the trained classifier using adversarial debiasing to improve fairness. In this work, we use
two semi-supervised generative text classification models, NDAGAN and GANBERT (the former extends GANBERT with negative data augmentation to address some of its shortcomings), to propose two fair semi-supervised models, FairNDAGAN and FairGANBERT. Finally, we compare the performance of
the proposed fair semi-supervised models in terms of accuracy and fairness (equalized odds
difference) against baselines, clarifying for the first time the challenges of social fairness in semi-supervised toxicity text classification.
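
To make the debiasing step concrete, the sketch below shows one common way to implement adversarial debiasing: an auxiliary adversary tries to recover the sensitive attribute (e.g., a demographic group) from the classifier's internal representation, and a gradient reversal layer penalizes the encoder whenever the adversary succeeds. This is a minimal PyTorch illustration of the general technique under assumed names (GradientReversal, AdversarialDebiaser, lambd), not the exact implementation used in the thesis.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialDebiaser(nn.Module):
    """Wraps a trained encoder (e.g., a GANBERT-style discriminator body) with
    a task head and an adversary head that predicts the sensitive attribute.
    The reversal layer pushes the encoder to hide group information."""
    def __init__(self, encoder, hidden_dim, num_classes, num_groups, lambd=1.0):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden_dim, num_classes)  # toxic / non-toxic
        self.adversary = nn.Linear(hidden_dim, num_groups)    # demographic group
        self.lambd = lambd

    def forward(self, x):
        h = self.encoder(x)
        y_logits = self.classifier(h)
        # The adversary sees the representation through the reversal layer.
        a_logits = self.adversary(GradientReversal.apply(h, self.lambd))
        return y_logits, a_logits

def debias_step(model, optimizer, x, y, group):
    """One hypothetical training step: both heads minimize cross-entropy, but
    the reversed gradient makes the encoder maximize the adversary's loss."""
    ce = nn.CrossEntropyLoss()
    y_logits, a_logits = model(x)
    loss = ce(y_logits, y) + ce(a_logits, group)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Reusing the already-trained semi-supervised classifier as the encoder mirrors the two-part structure described above: fairness is added as a second optimization stage rather than by retraining from scratch.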
Based on the experimental results, the key contributions of this research are: first,
we propose a novel fair semi-supervised generative framework for toxicity text
classification. Second, we show that we can achieve fairness in semi-
supervised toxicity text classification without considerable loss of accuracy. Third, we
demonstrate that achieving fairness at the coarse-grained level improves fairness at the
fine-grained level but does not always guarantee it. Fourth, we analyze the impact of the amounts of labeled and unlabeled data on fairness and accuracy in the studied semi-supervised framework. Finally, we demonstrate the susceptibility of both supervised and semi-supervised models to data imbalance in terms of accuracy and fairness.
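
For reference, equalized odds difference, the fairness metric used in the comparisons above, is the largest gap across demographic groups in either the true positive rate or the false positive rate; zero means the classifier errs at the same rates for every group. Below is a minimal NumPy sketch of this standard definition; the function name and toy example are illustrative, not the thesis's evaluation code.

```python
import numpy as np

def equalized_odds_difference(y_true, y_pred, group):
    """Largest across-group gap in TPR or FPR for binary predictions.

    y_true, y_pred: 0/1 arrays; group: a group id per sample
    (e.g., whether the comment mentions a sensitive identity term).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        pos = (y_true == 1) & (group == g)   # positives in group g
        neg = (y_true == 0) & (group == g)   # negatives in group g
        tprs.append(y_pred[pos].mean() if pos.any() else 0.0)
        fprs.append(y_pred[neg].mean() if neg.any() else 0.0)
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy example: identical error rates in both groups -> difference of 0.0.
print(equalized_odds_difference([1, 0, 1, 0], [1, 0, 1, 0], [0, 0, 1, 1]))
```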
Identifier | oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/45140
Date | 11 July 2023
Creators | Shayesteh, Shahriar
Contributors | Inkpen, Diana
Publisher | Université d'Ottawa / University of Ottawa
Source Sets | Université d'Ottawa
Language | English
Detected Language | English
Type | Thesis
Format | application/pdf