Algorithms used in sensitive contexts, such as deciding whether to extend a job offer or to grant an inmate parole, should be accurate as well as non-discriminatory. The latter is especially important given emerging concerns about automated decision making being unfair to individuals belonging to certain groups. The machine learning literature has seen rapid growth in research on this topic. In this thesis, we study various problems in sequential decision making motivated by challenges in algorithmic fairness. First, we modify the fundamental framework of prediction with expert advice: a learning agent makes decisions using the advice of a set of experts, but this set can shrink. That is, experts can become unavailable, for instance when new anti-discrimination laws prohibit the learner from using experts detected to be unfair. We provide efficient algorithms for this setting, along with a detailed analysis of their optimality. We then explore the problem of providing anytime fairness guarantees using the well-known exponential weights algorithm, which leads to an open question about a lower bound on the cumulative loss of exponential weights. Finally, we introduce a novel fairness notion for supervised learning tasks motivated by the concept of envy-freeness. We show how this notion can bypass certain issues of existing fairness notions such as equalized odds. We provide solutions for a simplified version of this problem and offer insights for dealing with the further challenges that arise when adopting this notion. / Graduate
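To make the setting concrete: the standard exponential weights algorithm maintains a weight per expert and updates it multiplicatively from observed losses. The sketch below is a minimal illustration of that textbook algorithm in a setting where experts can become unavailable mid-sequence; it is not the thesis's own algorithm, and the function name, renormalization scheme, and availability mechanism are illustrative assumptions.

```python
import math

def exp_weights(losses, available, eta=0.5):
    """Exponential weights over a possibly shrinking expert set.

    losses[t][i]: loss of expert i at round t (assumed in [0, 1]).
    available[t]: set of expert indices still usable at round t
                  (e.g., experts not yet ruled out as unfair).
    Returns the learner's cumulative expected loss.
    """
    n = len(losses[0])
    w = [1.0] * n                        # one weight per expert
    total = 0.0
    for t, round_losses in enumerate(losses):
        awake = available[t]
        z = sum(w[i] for i in awake)     # renormalize over available experts only
        p = {i: w[i] / z for i in awake}
        total += sum(p[i] * round_losses[i] for i in awake)
        for i in awake:                  # multiplicative weight update
            w[i] *= math.exp(-eta * round_losses[i])
    return total
```

For example, with two experts where expert 0 always incurs loss 0 and expert 1 always incurs loss 1, the learner's per-round loss shrinks as weight concentrates on expert 0; if expert 1 becomes unavailable, the learner immediately follows expert 0 alone.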
Identifier | oai:union.ndltd.org:uvic.ca/oai:dspace.library.uvic.ca:1828/12098 |
Date | 02 September 2020 |
Creators | Azami, Sajjad |
Contributors | Mehta, Nishant |
Source Sets | University of Victoria |
Language | English |
Detected Language | English |
Type | Thesis |
Format | application/pdf |
Rights | Available to the World Wide Web |