NOVEL APPROACHES TO MITIGATE DATA BIAS AND MODEL BIAS FOR FAIR MACHINE LEARNING PIPELINES

Taeuk Jang, 28 April 2024
<p dir="ltr">Despite the recent advancement and exponential growth in the utility of deep learning models across various fields and tasks, we are confronted with emerging challenges. Among them, one prevalent issue is the biases inherent in deep models, which often mimic stereotypical or subjective behavior observed in data, potentially resulting in negative societal impact or disadvantaging certain subpopulations based on race, gender, etc. This dissertation addresses the critical problem of fairness and bias in machine learning from diverse perspectives, encompassing both data biases and model biases.</p><p dir="ltr">First, we study the multifaceted nature of data biases to comprehensively address the challenges. Specifically, the proposed approaches include the development of a generative model for balancing data distribution with counterfactual samples to address data skewness. In addition, we introduce a novel feature selection method aimed at eliminating sensitive-relevant features that could potentially convey sensitive information, e.g., race, considering the interrelationship between features. Moreover, we present a scalable thresholding method to appropriately binarize model outputs or regression data considering fairness constraints for fairer decision-making, extending fairness beyond categorical data.</p><p dir="ltr">However, addressing fairness problem solely by correcting data bias often encounters several challenges. Particularly, establishing fairness-curated data demands substantial resources and may be restricted by regal constraints, while explicitly identifying the biases is non-trivial due to their intertwined nature. Further, it is important to recognize that models may interpret data differently by their architectures or downstream tasks. In response, we propose a line of methods to address model bias, on top of addressing the data bias mentioned above, by learning fair latent representations. These methods include fair disentanglement learning, which projects latent subspace independent of sensitive information by employing conditional mutual information, and a debiased contrastive learning method for fair self-supervised learning without sensitive attribute annotations. Lastly, we introduce a novel approach to debias the multimodal embedding of pretrained vision-language models (VLMs) without requiring downstream annotated datasets, retraining, or fine-tuning of the large model considering the constrained resource of research labs.</p>
