
<b>IMPROVING MACHINE LEARNING FAIRNESS BY REPAIRING MISLABELED DATA</b>

<p dir="ltr">As machine learning (ML) and artificial intelligence (AI) become increasingly prevalent in high-stakes decision-making, fairness has emerged as a critical societal issue. Individuals belonging to different groups can receive different algorithmic outcomes, largely due to errors and biases inherent in the underlying training data, resulting in violations of group fairness.</p><p dir="ltr">This study investigates improving group fairness by detecting mislabeled instances in the training data and flipping their labels. Four solutions are proposed for ordering the training instances whose labels should be flipped so as to reduce bias in the predictions of a model trained on the modified data. Through experimental evaluation, we demonstrate that repairing mislabeled data using mislabel-detection techniques is effective at improving the fairness of machine learning models.</p>
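To illustrate the general idea the abstract describes, the following is a minimal sketch, not the thesis's actual algorithms: instances are ranked by a hypothetical mislabel-suspicion score, and labels are greedily flipped in that order, keeping only flips that shrink the demographic-parity gap. The names `parity_gap`, `repair_labels`, `suspicion`, and `budget`, and the greedy acceptance rule, are all illustrative assumptions.

```python
def parity_gap(labels, groups):
    """Absolute difference in positive-label rate between group 0 and group 1
    (one common group-fairness measure; the thesis may use others)."""
    rates = []
    for g in (0, 1):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates.append(sum(labels[i] for i in idx) / len(idx))
    return abs(rates[0] - rates[1])

def repair_labels(labels, groups, suspicion, budget):
    """Flip up to `budget` labels, most-suspicious first, keeping only
    flips that reduce the demographic-parity gap of the labeled data."""
    labels = list(labels)  # work on a copy; leave the caller's labels intact
    order = sorted(range(len(labels)), key=lambda i: -suspicion[i])
    flips = 0
    for i in order:
        if flips >= budget:
            break
        before = parity_gap(labels, groups)
        labels[i] = 1 - labels[i]          # tentatively flip the label
        if parity_gap(labels, groups) < before:
            flips += 1                     # flip helped fairness: keep it
        else:
            labels[i] = 1 - labels[i]      # flip did not help: revert
    return labels

# Toy example: group 0 is always labeled positive, group 1 rarely is.
labels = [1, 1, 1, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
suspicion = [0.9, 0.1, 0.2, 0.1, 0.8, 0.1, 0.2, 0.1]
repaired = repair_labels(labels, groups, suspicion, budget=2)
print(parity_gap(labels, groups), parity_gap(repaired, groups))  # 0.75 0.25
```

In this sketch the suspicion scores stand in for the output of any mislabel-detection technique; the two most-suspicious instances are flipped, cutting the parity gap from 0.75 to 0.25 on the toy data.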

DOI: 10.25394/pgs.27742707.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/27742707
Date: 15 November 2024
Creators: Shashank A Thandri (20161635)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/_b_IMPROVING_MACHINE_LEARNING_FAIRNESS_BY_REPAIRING_MISLABELED_DATA_b_/27742707
