<b>IMPROVING MACHINE LEARNING FAIRNESS BY REPAIRING MISLABELED DATA</b> Shashank A Thandri (20161635), 15 November 2024
<p dir="ltr">As machine learning (ML) and artificial intelligence (AI) become increasingly prevalent in high-stakes decision-making, fairness has emerged as a critical societal issue. Individuals belonging to different groups receive different algorithmic outcomes, largely because of errors and biases inherent in the underlying training data, resulting in violations of group fairness.</p><p dir="ltr">This study investigates the problem of restoring group fairness by detecting mislabeled instances in the training data and flipping their labels. Four solutions are proposed for ordering the training instances whose labels should be flipped, so as to reduce bias in the predictions of a model trained on the modified data. Through experimental evaluation, we demonstrate the effectiveness of repairing mislabeled data with mislabel-detection techniques to improve the fairness of machine learning models.</p>
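<p dir="ltr">The pipeline described above can be sketched in code. The following is a minimal illustration, not the thesis's actual method: it uses one simple confidence-based mislabel-detection heuristic (the probability mass a trained classifier places on the opposite label) to order instances for flipping, whereas the thesis proposes four orderings that are not reproduced here. The function names, the choice of logistic regression, and the demographic-parity metric are all illustrative assumptions.</p>

```python
# Hedged sketch of label repair for group fairness. Assumptions (not from the
# thesis): logistic regression as the model, confidence-based disagreement as
# the mislabel score, demographic parity as the fairness metric.
import numpy as np
from sklearn.linear_model import LogisticRegression


def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)| for binary predictions and groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())


def repair_by_flipping(X, y, n_flips=10):
    """Flip the labels of the n_flips instances a trained model disagrees with
    most confidently, then retrain on the repaired labels.

    Returns (retrained_model, repaired_labels).
    """
    clf = LogisticRegression().fit(X, y)
    proba = clf.predict_proba(X)[:, 1]
    # Mislabel score: probability mass placed on the opposite of the given label.
    score = np.where(y == 1, 1.0 - proba, proba)
    flip_idx = np.argsort(score)[::-1][:n_flips]  # most suspicious first
    y_repaired = y.copy()
    y_repaired[flip_idx] = 1 - y_repaired[flip_idx]
    return LogisticRegression().fit(X, y_repaired), y_repaired
```

<p dir="ltr">In use, one would compare <code>demographic_parity_diff</code> on a held-out set before and after repair; the key design choice is the ordering heuristic, since flipping the wrong instances can harm both accuracy and fairness.</p>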