1 |
Exploring fair machine learning in sequential prediction and supervised learning. Azami, Sajjad, 02 September 2020.
Algorithms that are used in sensitive contexts, such as deciding whether to extend a job offer or to grant an inmate parole, should be accurate as well as non-discriminatory. The latter is especially important due to emerging concerns about automatic decision making being unfair to individuals belonging to certain groups. The machine learning literature has seen a rapid evolution in research on this topic. In this thesis, we study various problems in sequential decision making motivated by challenges in algorithmic fairness. As part of this thesis, we modify the fundamental framework of prediction with expert advice. We assume a learning agent is making decisions using the advice provided by a set of experts, where this set can shrink. In other words, experts can become unavailable due to scenarios such as emerging anti-discrimination laws prohibiting the learner from using experts detected to be unfair. We provide efficient algorithms for this setup, as well as a detailed analysis of their optimality. We then explore a problem concerned with providing any-time fairness guarantees using the well-known exponential weights algorithm, which leads to an open question about a lower bound on the cumulative loss of the exponential weights algorithm. Finally, we introduce a novel fairness notion for supervised learning tasks motivated by the concept of envy-freeness. We show how this notion might bypass certain issues of existing fairness notions such as equalized odds. We provide solutions for a simplified version of this problem and insights for dealing with further challenges that arise from adopting this notion. / Graduate
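To make the prediction-with-expert-advice setting concrete, the following is a minimal sketch of the standard exponential weights (Hedge) learner in which experts can become unavailable mid-sequence. It only illustrates the framework the thesis modifies; the handling of removed experts (drop their weight and renormalize), the variable names, and the toy data are illustrative assumptions, not the thesis's algorithms or its optimality analysis.

```python
# Illustrative sketch: exponential weights (Hedge) over a shrinking expert set.
import numpy as np

def exp_weights_with_removals(loss_rounds, removals, n_experts, eta=0.5):
    """loss_rounds: list of length-n_experts arrays of losses in [0, 1].
    removals: dict mapping round index -> set of expert indices that become
    unavailable before that round. Returns the learner's cumulative loss."""
    active = np.ones(n_experts, dtype=bool)
    weights = np.ones(n_experts)
    total_loss = 0.0
    for t, losses in enumerate(loss_rounds):
        for i in removals.get(t, ()):        # experts that become unavailable
            active[i] = False
        w = np.where(active, weights, 0.0)
        p = w / w.sum()                       # play the weighted average of active experts
        total_loss += float(p @ losses)
        weights *= np.exp(-eta * losses)      # multiplicative update on observed losses
    return total_loss

# toy usage: expert 2 is removed before round 50
rng = np.random.default_rng(0)
rounds = [rng.random(4) for _ in range(100)]
print(exp_weights_with_removals(rounds, {50: {2}}, n_experts=4))
```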
|
2 |
Group-Envy Fairness in the Stochastic Bandit Setting. Scinocca, Stephen, 29 September 2022.
We introduce a new, group fairness-inspired stochastic multi-armed bandit problem in the pure exploration setting. We look at the discrepancy between an arm's mean reward for a group and the highest mean reward of any arm for that group, and call this the disappointment that group suffers from that arm. We define the optimal arm to be the one that minimizes the maximum disappointment over all groups. This optimal arm addresses a problem with maximin fairness, where the group used to choose the maximin-best arm suffers little disappointment regardless of which arm is picked, while another group suffers significantly more disappointment when that arm is chosen as the best one. The challenge of this problem is that the highest mean reward for a group, and the arm that achieves it, are unknown. This means we need to pull arms for multiple goals: to find the optimal arm, and to estimate the highest mean reward of certain groups. This leads to a new adaptive sampling algorithm for best-arm identification in the fixed-confidence setting called MD-LUCB, or Minimax Disappointment LUCB. We prove bounds on MD-LUCB's sample complexity and then study its performance with empirical simulations. / Graduate
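As a rough illustration of the disappointment criterion described above, assuming the group-wise mean rewards were known (the thesis emphasizes they are not), the minimax-disappointment arm can be computed as follows; the sketch does not reflect the adaptive sampling performed by MD-LUCB.

```python
# Illustrative sketch of the minimax-disappointment objective.
import numpy as np

def minimax_disappointment_arm(mu):
    """mu: array of shape (n_groups, n_arms) of mean rewards per group and arm."""
    mu = np.asarray(mu, dtype=float)
    best_per_group = mu.max(axis=1, keepdims=True)   # highest mean reward within each group
    disappointment = best_per_group - mu              # gap of each arm to the group's best arm
    worst_case = disappointment.max(axis=0)           # max disappointment over groups, per arm
    return int(worst_case.argmin())

# toy example: arm 1 is a compromise that no group is too disappointed with
mu = [[0.9, 0.7, 0.2],
      [0.1, 0.6, 0.8]]
print(minimax_disappointment_arm(mu))  # -> 1
```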
|
3 |
Three Essays on HRM Algorithms: Where Do We Go from Here? Cheng, Minghui, January 2024.
The field of Human Resource Management (HRM) has experienced a significant transformation with the emergence of big data and algorithms. Major technology companies have introduced software and platforms for analyzing various HRM practices, such as hiring, compensation, employee engagement, and turnover management, utilizing algorithmic approaches. However, scholarly research has taken a cautious stance, questioning the strategic value and causal inference basis of these tools, while also raising concerns about bias, discrimination, and ethical issues in the applications of algorithms. Despite these concerns, algorithmic management has gained prominence in large organizations, shaping workforce management practices. This thesis aims to address the gap between the rapidly changing market of HRM algorithms and the lack of theoretical understanding.
The thesis begins by conducting a comprehensive review of HRM algorithms in HRM practice and scholarship, clarifying their definition, exploring their unique features, and identifying specific topics and research questions in the field. It aims to bridge the gap between academia and practice to enhance the understanding and utilization of algorithms in HRM. I then explore the legal, causal, and moral issues associated with HR algorithms, comparing fairness criteria and advocating for the use of causal modeling to evaluate algorithmic fairness. The multifaceted nature of fairness is illustrated and practical strategies for enhancing justice perceptions and incorporating fairness into HR algorithms are proposed. Finally, the thesis adopts an artifact-centric approach to examine the ethical implications of HRM algorithms. It explores competing views on moral responsibility, introduces the concept of "ethical affordances," and analyzes the distribution of moral responsibility based on different types of ethical affordances. The paper provides a framework for analyzing and assigning moral responsibility to stakeholders involved in the design, use, and regulation of HRM algorithms.
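For readers unfamiliar with the fairness criteria compared in the second essay, the hedged sketch below computes two commonly compared observational criteria, the demographic parity gap and the equalized odds gap, on hypothetical hiring decisions. The data and function names are invented for illustration, and the causal-modeling approach the thesis advocates is not shown here.

```python
# Illustrative comparison of two observational fairness criteria.
import numpy as np

def demographic_parity_gap(y_hat, a):
    """Difference in positive-decision rates between groups a==1 and a==0."""
    y_hat, a = np.asarray(y_hat), np.asarray(a)
    return abs(y_hat[a == 1].mean() - y_hat[a == 0].mean())

def equalized_odds_gap(y_hat, y, a):
    """Largest gap in group-wise error rates, comparing TPR and FPR across groups."""
    y_hat, y, a = np.asarray(y_hat), np.asarray(y), np.asarray(a)
    gaps = []
    for label in (0, 1):  # label=1 compares true positive rates, label=0 false positive rates
        rates = [y_hat[(a == g) & (y == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# hypothetical hiring decisions, true outcomes, and a protected attribute
y_hat = [1, 0, 1, 1, 0, 0, 1, 0]
y     = [1, 0, 1, 0, 1, 0, 1, 0]
a     = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_hat, a), equalized_odds_gap(y_hat, y, a))
```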
Together, these papers contribute to the understanding of algorithms in HRM by addressing the research-practice gap, exploring fairness and accountability issues, and investigating the ethical implications. They offer theoretical insights, practical recommendations, and future research directions for both researchers and practitioners. / Thesis / Doctor of Philosophy (PhD) / This thesis explores the use of advanced algorithms in Human Resource Management (HRM) and how they affect decision-making in organizations. With the rise of big data and powerful algorithms, companies can analyze various HR practices like hiring, compensation, and employee engagement. However, there are concerns about biases and ethical issues in algorithmic decision-making. This research examines the benefits and challenges of HRM algorithms and suggests ways to ensure fairness and ethical considerations in their design and application. By bridging the gap between theory and practice, this thesis provides insights into the responsible use of algorithms in HRM. The findings of this research can help organizations make better decisions while maintaining fairness and upholding ethical standards in HR practices.
|
4 |
Fairness in Rankings. Zehlike, Meike, 26 April 2022.
Artificial intelligence and self-learning systems, which adapt their behavior based on past decisions and historical data, play an ever larger role in our everyday lives. We are surrounded by a large number of algorithmic decision aids, as well as a steadily growing number of algorithmic decision-making systems. Rankings and sorted lists of search results are the essential instrument of our online search for content, products, leisure activities, and relevant people. The order of search results therefore determines not only the satisfaction of those searching, but also the chances of those being ranked for education and for economic and even social success. For this reason, researchers and policy makers are increasingly concerned about systematic discrimination and bias produced by self-learning systems.

To tackle discrimination in the context of rankings and sorted search results, three problems have to be addressed: First, we must work out the ethical properties and moral goals of the different situations in which rankings are used; these should agree with the ethical values of the algorithms applied to avoid discriminatory rankings. Second, it is necessary to translate ethical value systems into mathematics and algorithms so that every moral goal can be served. Third, these methods should be accessible to a broad audience that includes programmers as well as lawyers and politicians. / Artificial intelligence and adaptive systems that learn patterns from past behavior and historic data play an increasing role in our day-to-day lives. We are surrounded by a vast number of algorithmic decision aids, and more and more by algorithmic decision-making systems, too. As a subcategory, ranked search results have become the main mechanism by which we find content, products, places, and people online. Thus, their ordering contributes not only to the satisfaction of the searcher, but also to the career and business opportunities, educational placement, and even social success of those being ranked. Therefore, researchers have become increasingly concerned with systematic biases and discrimination in data-driven ranking models.

To address the problem of discrimination and fairness in the context of rankings, three main problems have to be solved: First, we have to understand the philosophical properties of different ranking situations and all important fairness definitions, to be able to decide which method would be the most appropriate for a given context. Second, we have to make sure that, for any fairness requirement in a ranking context, a formal definition that meets that requirement exists. More concretely, if a ranking context requires, for example, group fairness to be met, we need an actual definition of group fairness in rankings in the first place. Third, the methods, together with their underlying fairness concepts and properties, need to be available to a wide range of audiences, from programmers to policy makers and politicians.
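As a minimal illustration of what a formal group-fairness definition for rankings can look like, the sketch below checks a simple prefix-share constraint: every top-k prefix must contain at least a minimum fraction of protected candidates. This particular constraint is assumed here for illustration and is not necessarily the definition developed in the thesis.

```python
# Illustrative prefix-share check for group fairness in a ranking.
import math

def prefix_fair(ranking_is_protected, min_share):
    """ranking_is_protected: booleans, True if the candidate at that rank is
    from the protected group; min_share: target minimum share in every prefix."""
    protected_so_far = 0
    for k, is_protected in enumerate(ranking_is_protected, start=1):
        protected_so_far += int(is_protected)
        required = math.floor(min_share * k)   # relaxed integer requirement per prefix
        if protected_so_far < required:
            return False, k                     # first prefix violating the constraint
    return True, None

print(prefix_fair([False, True, False, False, True, False], min_share=0.3))  # (True, None)
```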
|
5 |
Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems. Yao, Sirui, 10 June 2021.
Recommender systems are information filtering tools that discover potential matchings between users and items and benefit both parties. This benefit can be considered a social resource that should be equitably allocated across users and items, especially in critical domains such as education and employment. Biases and unfairness in recommendations raise both ethical and legal concerns. In this dissertation, we investigate the concept of unfairness in the context of recommender systems. In particular, we study appropriate unfairness evaluation metrics, examine the relation between bias in recommender models and inequality in the underlying population, as well as propose effective unfairness mitigation approaches.
We start by exploring the implications of fairness in recommendation and formulating unfairness evaluation metrics. We focus on the task of rating prediction. We identify the insufficiency of demographic parity for scenarios where the target variable is justifiably dependent on demographic features. Then we propose an alternative set of unfairness metrics that are measured based on how much the average predicted ratings deviate from the average true ratings. We also reduce these forms of unfairness in matrix factorization (MF) models by explicitly adding them as penalty terms to the learning objectives.
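A minimal sketch of an unfairness metric of this flavor, assuming observed and predicted rating matrices plus a binary group label per user; the exact metric definitions and the penalty-term formulation are the dissertation's, and this code only conveys the general idea of comparing prediction deviations across groups.

```python
# Illustrative group-deviation unfairness metric for rating prediction.
import numpy as np

def deviation_unfairness(pred, true, mask, group):
    """pred, true: (n_users, n_items) rating matrices; mask: boolean matrix of
    observed entries; group: length-n_users boolean array (True = one group)."""
    per_item = []
    for j in range(pred.shape[1]):
        gaps = []
        for g in (False, True):
            idx = (group == g) & mask[:, j]
            if idx.sum() == 0:                 # skip items unrated by one of the groups
                break
            gaps.append(pred[idx, j].mean() - true[idx, j].mean())
        else:
            per_item.append(abs(gaps[0] - gaps[1]))   # disparity in signed deviation
    return float(np.mean(per_item)) if per_item else 0.0

# toy usage with random matrices
rng = np.random.default_rng(1)
pred, true = rng.uniform(1, 5, (6, 3)), rng.uniform(1, 5, (6, 3))
mask = np.ones((6, 3), dtype=bool)
group = np.array([0, 0, 0, 1, 1, 1], dtype=bool)
print(deviation_unfairness(pred, true, mask, group))
```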
Next, we target a form of unfairness in matrix factorization models observed as disparate model performance across user groups. We identify four types of biases in the training data that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), which learns personalized regularization parameters that directly address the data biases. PRL poses the hyperparameter search problem as a secondary learning task. It enables back-propagation to learn the personalized regularization parameters by leveraging the closed-form solutions of alternating least squares (ALS) to solve MF. Furthermore, the learned parameters are interpretable and provide insights into how fairness is improved.
Third, we conduct a theoretical analysis of the long-term dynamics of inequality in the underlying population, in terms of the fit between users and items. We view the task of recommendation as solving a set of classification problems through threshold policies. We mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we prove that a system with the formulated dynamics always has at least one equilibrium, and we provide sufficient conditions for the equilibrium to be unique. We also show that, depending on the item-category relationships and the recommendation policies, recommendations in one item category can reshape the user-item fit in another item category.
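Purely to convey the idea of a threshold recommendation policy acting on user-item fit, the toy simulation below uses entirely made-up one-step dynamics (constant gain and decay terms). The actual transition dynamics, equilibrium results, and uniqueness conditions are those formulated in the dissertation, not what is hard-coded here.

```python
# Toy simulation of a threshold recommendation policy; dynamics are invented.
import numpy as np

def simulate(fit0, threshold=0.5, gain=0.05, decay=0.02, steps=200, seed=0):
    """fit0: initial user-item fit values in [0, 1] for one item category."""
    rng = np.random.default_rng(seed)
    fit = np.asarray(fit0, dtype=float).copy()
    for _ in range(steps):
        recommended = fit >= threshold                 # threshold policy: recommend only high-fit users
        engaged = recommended & (rng.random(fit.size) < fit)
        fit = np.clip(fit + gain * engaged - decay * ~recommended, 0.0, 1.0)
    return fit

# users starting above the threshold drift up, the rest drift down
print(simulate([0.2, 0.45, 0.55, 0.9]).round(2))
```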
To summarize, in this research, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality. / Doctor of Philosophy / Recommender systems are information filtering tools that discover potential matches between users and items. However, a recommender system, if not properly built, may not treat users and items equitably, which raises ethical and legal concerns. In this research, we explore the implications of fairness in the context of recommender systems, study the relation between unfairness in recommender output and inequality in the underlying population, and propose effective unfairness mitigation approaches.
We start by finding unfairness metrics appropriate for recommender systems. We focus on the task of rating prediction, which is a crucial step in recommender systems. We propose a set of unfairness metrics measured as the disparity in how much predictions deviate from the ground-truth ratings. We also offer a mitigation method to reduce these forms of unfairness in matrix factorization models.
Next, we look deeper into the factors that contribute to error-based unfairness in matrix factorization models and identify four types of biases that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), a mitigation strategy that learns personalized regularization parameters to directly address data biases. The learned per-user regularization parameters are interpretable and provide insight into how fairness is improved.
Third, we conduct a theoretical study of the long-term dynamics of the inequality in the fit (e.g., interest, qualification, etc.) between users and items. We first mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we discuss the existence and uniqueness of the system equilibrium as the one-step dynamics repeat. We also show that, depending on the relation between item categories and the recommendation policies (unconstrained or fair), recommendations in one item category can reshape the user-item fit in another item category.
In summary, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
|
6 |
Adaptive Summarization for Low-resource Domains and Algorithmic Fairness. Keymanesh, Moniba, January 2022.
No description available.
|
7 |
Biases in AI: An Experiment: Algorithmic Fairness in the World of Hateful Language Detection / Bias i AI: ett experiment: Algoritmisk rättvisa inom detektion av hatbudskap. Stozek, Anna, January 2023.
Hateful language is a growing problem in digital spaces. Human moderators alone are not enough to eliminate the problem, so automated hateful language detection systems are used to aid them. One of the issues with these systems is that their performance can differ depending on who the target of a hateful text is. This project evaluated the performance of two systems, Perspective and Hatescan, with respect to who is targeted by hateful texts. The analysis showed that the systems performed worst for texts directed at women and immigrants. The analysis involved tools such as a synthetic dataset based on the HateCheck test suite, as well as wild datasets created from forum data. Improvements to the HateCheck test suite have also been proposed. / Hateful language is a growing problem in digital environments. The volumes of data are too large to be handled by human moderators alone, so automated hate detection systems are used as support. One problem with these systems is that their performance can vary depending on who the target of a hateful text is. This project evaluated the performance of the two systems Perspective and Hatescan with respect to different targets of the hate. The analysis showed that the systems performed worst for texts where the hate was directed at women and immigrants. The analysis involved tools such as a synthetic dataset based on the HateCheck test suite and wild datasets with texts collected from internet discussion forums. In addition, the project developed proposed improvements to the HateCheck test suite.
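A hedged sketch of the kind of per-target evaluation described above; the `detect` argument is a placeholder scoring function, not the actual Perspective or Hatescan API, and the tiny dataset is invented for illustration.

```python
# Illustrative per-target error-rate evaluation for a hateful language detector.
from collections import defaultdict

def per_target_error_rates(examples, detect, threshold=0.5):
    """examples: iterable of (text, is_hateful, target_group); detect: a
    function text -> score in [0, 1] standing in for the real system."""
    stats = defaultdict(lambda: {"fn": 0, "fp": 0, "pos": 0, "neg": 0})
    for text, is_hateful, target in examples:
        flagged = detect(text) >= threshold
        s = stats[target]
        if is_hateful:
            s["pos"] += 1
            s["fn"] += int(not flagged)      # hateful text that slipped through
        else:
            s["neg"] += 1
            s["fp"] += int(flagged)          # benign text wrongly flagged
    return {t: {"miss_rate": s["fn"] / max(s["pos"], 1),
                "false_alarm_rate": s["fp"] / max(s["neg"], 1)}
            for t, s in stats.items()}

# toy usage with a dummy keyword-based detector
dummy_detect = lambda text: 0.9 if "hate" in text.lower() else 0.1
data = [("I hate group X", True, "women"), ("nice weather", False, "women"),
        ("I hate group Y", True, "immigrants"), ("hate-free post", False, "immigrants")]
print(per_target_error_rates(data, dummy_detect))
```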
|
8 |
Online Communities and Health. Villacis Calderon, Eduardo David, 26 August 2022.
People are increasingly turning to online communities for entertainment, information, and social support, among other uses and gratifications. Online communities include traditional online social networks (OSNs) such as Facebook, but also specialized online health communities (OHCs) where people go specifically to seek social support for various health conditions. OHCs have obvious health ramifications, but the use of OSNs can also influence people's mental health and health behaviors. The use of online communities has been widely studied, but their exploration in the health context has been more limited. Not only are online communities being extensively used for health purposes, but there is also increasing concern that the use of online communities can itself affect health. Therefore, there is a need to better understand how such technologies influence people's health and health behaviors.
The research in this dissertation centers on examining how online community use influences health and health behaviors. There are three studies in this dissertation. The first study develops a conceptual model to explain the process whereby a request for social support from an OHC user is answered by a wounded healer, a person who leverages their own experiences with health challenges to help others. The second study investigates how the algorithmic fairness, accountability, and transparency of an OSN newsfeed algorithm influence users' attitudes and beliefs about childhood vaccines and, ultimately, their vaccine hesitancy. The third study examines how OSN social overload, through OSN use, can lead to psychological distress and received social support. The research contributes theoretical and practical insights to the literature on the use of online communities in the health context. / Doctor of Philosophy / People use online communities to socialize and to seek out information and help. Online social networks (OSNs) such as Facebook are large communities on which people segregate into smaller groups to discuss joint interests. Some online communities cater to specific needs, such as online health communities (OHCs), which provide platforms for people to talk about the health challenges they or their loved ones are facing. Online communities do not intentionally seek controversy, but because they welcome all perspectives, they have contributed to phenomena such as vaccine hesitancy. Moreover, social overload from the use of OSNs can have both positive and negative psychological effects on users. This dissertation examines the intersection of online communities and health. The first study explains how the interaction between the characteristics of a request for social support made by an OHC user and the characteristics of the wounded healer drives the provision of social support. The model that is developed shows the paths through which the empathy of the wounded healer and the characteristics of the request lead to motivation to provide help to those in need on an OHC. In the second study, the role of the characteristics of a newsfeed algorithm, specifically fairness, accountability, and transparency (FAT), in the development of childhood vaccine hesitancy is examined. The findings show that people's perceptions of the newsfeed algorithm's FAT increase their negative attitudes toward vaccination and their perceived behavioral control over vaccination. The third study examines how different uses of OSNs can influence the relationships between social overload and psychological distress and received social support. The findings show how OSN use can be tailored to decrease negative and increase positive psychological consequences without discontinuing use.
|
9 |
Investigating Data Acquisition to Improve Fairness of Machine Learning Models. Ekta (18406989), 23 April 2024.
<p dir="ltr">Machine learning (ML) algorithms are increasingly being used in a variety of applications and are heavily relied upon to make decisions that impact people’s lives. ML models are often praised for their precision, yet they can discriminate against certain groups due to biased data. These biases, rooted in historical inequities, pose significant challenges in developing fair and unbiased models. Central to addressing this issue is the mitigation of biases inherent in the training data, as their presence can yield unfair and unjust outcomes when models are deployed in real-world scenarios. This study investigates the efficacy of data acquisition, i.e., one of the stages of data preparation, akin to the pre-processing bias mitigation technique. Through experimental evaluation, we showcase the effectiveness of data acquisition, where the data is acquired using data valuation techniques to enhance the fairness of machine learning models.</p>
|
10 |
Data-based Explanations of Random Forest using Machine Unlearning. Tanmay Laxman Surve (17537112), 03 December 2023.
<p dir="ltr">Tree-based machine learning models, such as decision trees and random forests, are one of the most widely used machine learning models primarily because of their predictive power in supervised learning tasks and ease of interpretation. Despite their popularity and power, these models have been found to produce unexpected or discriminatory behavior. Given their overwhelming success for most tasks, it is of interest to identify root causes of the unexpected and discriminatory behavior of tree-based models. However, there has not been much work on understanding and debugging tree-based classifiers in the context of fairness. We introduce FairDebugger, a system that utilizes recent advances in machine unlearning research to determine training data subsets responsible for model unfairness. Given a tree-based model learned on a training dataset, FairDebugger identifies the top-k training data subsets responsible for model unfairness, or bias, by measuring the change in model parameters when parts of the underlying training data are removed. We describe the architecture of FairDebugger and walk through real-world use cases to demonstrate how FairDebugger detects these patterns and their explanations.</p>
|