
NOVEL APPROACHES TO MITIGATE DATA BIAS AND MODEL BIAS FOR FAIR MACHINE LEARNING PIPELINES

Despite recent advances and the exponential growth in the utility of deep learning models across fields and tasks, we are confronted with emerging challenges. Among them, a prevalent issue is the bias inherent in deep models, which often mimic stereotypical or subjective behavior observed in the data, potentially producing negative societal impact or disadvantaging certain subpopulations based on race, gender, and other attributes. This dissertation addresses the critical problem of fairness and bias in machine learning from diverse perspectives, encompassing both data biases and model biases.

First, we study the multifaceted nature of data biases in order to address these challenges comprehensively. Specifically, we develop a generative model that balances the data distribution with counterfactual samples to address data skewness. In addition, we introduce a novel feature selection method that eliminates sensitive-relevant features, i.e., features that could convey sensitive information such as race, while accounting for the interrelationships between features. Moreover, we present a scalable thresholding method that binarizes model outputs or regression data under fairness constraints, extending fair decision-making beyond categorical data.

However, addressing the fairness problem solely by correcting data bias encounters several challenges. In particular, curating data for fairness demands substantial resources and may be restricted by legal constraints, while explicitly identifying the biases is non-trivial because they are intertwined. Further, models may interpret the same data differently depending on their architectures or downstream tasks. In response, and on top of the data-bias work above, we propose a line of methods that address model bias by learning fair latent representations. These include a fair disentanglement learning method that, using conditional mutual information, projects representations onto a latent subspace independent of sensitive information, and a debiased contrastive learning method for fair self-supervised learning without sensitive-attribute annotations. Lastly, we introduce a novel approach that debiases the multimodal embeddings of pretrained vision-language models (VLMs) without requiring annotated downstream datasets, retraining, or fine-tuning of the large model, in consideration of the constrained resources of typical research labs.
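To make the thresholding idea concrete, here is a minimal sketch of fairness-constrained binarization, not the dissertation's scalable method: per-group thresholds are searched on a small grid so that positive-prediction rates match across two groups (demographic parity) while accuracy is maximized. The function name `fair_thresholds` and all parameters are hypothetical.

```python
# Minimal sketch: pick per-group thresholds on real-valued scores so that
# positive-prediction rates differ by at most `eps` across two groups,
# maximizing accuracy among the feasible threshold pairs.
import numpy as np

def fair_thresholds(scores, labels, groups,
                    grid=np.linspace(0.05, 0.95, 19), eps=0.02):
    """Brute-force search over (t0, t1); returns ((t0, t1), accuracy),
    or (None, -1.0) if no pair satisfies the fairness constraint."""
    best, best_acc = None, -1.0
    g0, g1 = groups == 0, groups == 1
    for t0 in grid:
        for t1 in grid:
            # Threshold each example with its own group's cutoff.
            pred = np.where(g0, scores >= t0, scores >= t1)
            gap = abs(pred[g0].mean() - pred[g1].mean())
            if gap <= eps:
                acc = (pred == labels).mean()
                if acc > best_acc:
                    best, best_acc = (t0, t1), acc
    return best, best_acc
```

The exhaustive grid here is only illustrative; a scalable method would exploit the monotone structure of the threshold/rate trade-off rather than enumerating pairs.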
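As a rough illustration of learning fair latent representations, the PyTorch sketch below uses a standard adversarial sensitive-attribute head with gradient reversal, a common stand-in for making representations independent of a sensitive attribute; the dissertation's conditional-mutual-information formulation is different and more involved. All module shapes and names here are hypothetical.

```python
# Minimal sketch: an encoder is trained to predict the task label while an
# adversarial head (through gradient reversal) is trained to predict the
# sensitive attribute, pushing sensitive information out of the latent z.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Flip and scale the gradient flowing back into the encoder.
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
task_head = nn.Linear(16, 2)   # downstream label (binary, illustrative)
sens_head = nn.Linear(16, 2)   # sensitive attribute (binary, illustrative)
opt = torch.optim.Adam(
    [*encoder.parameters(), *task_head.parameters(), *sens_head.parameters()],
    lr=1e-3,
)
ce = nn.CrossEntropyLoss()

def train_step(x, y, s, lam=1.0):
    z = encoder(x)
    # Task loss plus adversarial loss on the reversed representation.
    loss = ce(task_head(z), y) + ce(sens_head(GradReverse.apply(z, lam)), s)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The coefficient `lam` trades task accuracy against how aggressively sensitive information is removed from the representation.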

DOI: 10.25394/pgs.25670736.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/25670736
Date: 28 April 2024
Creators: Taeuk Jang (18333504)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/NOVEL_APPROACHES_TO_MITIGATE_DATA_BIAS_AND_MODEL_BIAS_FOR_FAIR_MACHINE_LEARNING_PIPELINES/25670736
