21 |
The effects of computer-mediated communication and culture on personnel selection and recruitment. Coleby, Grant Christopher Paul. January 2002
No description available.
|
22 |
Uncertainty and fairness judgments: the role of information ambiguity. Nason, Emily Mung-lam. January 2008
Thesis (Ph. D.)--UCLA, 2008. / Vita. Description based on print version record. Includes bibliographical references (leaves 127-137).
|
23 |
Effects of perceived fairness, cultural paradigms, and attributions on bargaining behavior. Akutso, Satoshi. January 1998
Thesis (Ph. D.)--University of California, Berkeley, 1998. / Includes bibliographical references (leaves 189-200).
|
24 |
Theorien der Verteilungsgerechtigkeit - eine Kontroverse [Theories of distributive justice: a controversy]. Isler, Damian. January 2005 (PDF)
Bachelor's thesis, University of St. Gallen, 2005.
|
25 |
Preisfairness im Handel: Determinanten, Messungen und preisstrategische Implikationen für den Schweizer Handel [Price fairness in retail: determinants, measurement, and pricing-strategy implications for Swiss retail]. Grossauer, Patrick. January 2008 (PDF)
Master's thesis, University of St. Gallen, 2008.
|
26 |
Costs analysis and the role of heuristics in fairness. Li, Sai. January 2018
Although numerous theoretical traditions postulate that human fairness depends on the ratio of costs to benefits, theory and empirical data remain divided on the direction of the effect. In particular, answers to the following questions have remained unclear: how cost/benefit ratios affect people's fairness decision-making during resource allocations, how cost/benefit ratios affect people's emotions and cognition when they receive fair or unfair treatment, whether people are intuitively selfish or fair, and how the cost/benefit ratio of sharing shapes that tendency. To address these questions, I conducted three lines of studies in Chapters 2 to 4 of this dissertation. In Chapter 2, I examined how the cost/benefit ratio of sharing affects whether people make fair or unfair decisions in resource allocations. Results showed that more participants acted fairly when the costs were equal to the benefits than when the costs were higher or lower than the benefits. Shifting from resource dividers to receivers, in Chapter 3 I tested people's emotional responses and cognitive judgements when they received fair or unfair treatment at different cost/benefit ratios. My findings revealed that people felt more negative about unfair treatment when the costs were equal to the benefits than when the costs were higher or lower than the benefits. Findings from Chapters 2 and 3 suggested an even-split heuristic: when the costs were equal to the benefits, and thus the even split was fair, more people tended to make fair decisions and people felt more negative about receiving an unfair offer. Building on these findings, Chapter 4 tested the even-split heuristic using a fast-slow dual-process framework and proposed the Value-Heuristic Framework. Results in Chapter 4 showed that people took the shortest time to make the even-and-fair decision (i.e., when the even split was also fair), longer to make the even-but-not-fair decision (i.e., giving an even split that results in uneven payoffs), and the longest time to make the not-even-but-fair decision (i.e., giving an uneven split that results in even payoffs). Based on the overall findings from my three empirical chapters, I formulated a conceptual framework for explaining and predicting people's fairness decision-making.
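The distinction drawn above between an even split of resources and an even split of payoffs can be made concrete with a small worked example. The sketch below is illustrative only: it assumes a simple token-allocation task in which tokens kept are worth one unit to the allocator and tokens given away are scaled by a benefit factor for the recipient; the actual tasks and parameters used in the dissertation may differ.

```python
# Illustrative sketch (assumed payoff structure, not taken from the dissertation):
# the allocator divides 10 tokens; tokens kept are worth 1 each to the allocator,
# tokens given away are worth `benefit` each to the recipient.

def payoffs(tokens_given, total_tokens=10, benefit=1.0):
    """Return (allocator_payoff, recipient_payoff) for a given transfer."""
    allocator_payoff = total_tokens - tokens_given      # tokens kept, worth 1 each
    recipient_payoff = tokens_given * benefit           # transferred tokens, scaled
    return allocator_payoff, recipient_payoff

for benefit in (0.5, 1.0, 2.0):                         # cost higher than / equal to / lower than benefit
    even_tokens = payoffs(5, benefit=benefit)           # even split of the tokens
    g_fair = 10 / (1 + benefit)                         # transfer that equalizes payoffs
    even_payoffs = payoffs(g_fair, benefit=benefit)
    print(f"benefit={benefit}: even token split -> {even_tokens}, "
          f"payoff-equalizing transfer {g_fair:.2f} -> {even_payoffs}")
```

Only when the benefit factor is 1.0 do the even token split and the payoff-equalizing split coincide, which is the condition under which the even-split heuristic and fairness point to the same choice.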
|
27 |
Fairness and Privacy Violations in Black-Box Personalization Systems: Detection and Defenses. Datta, Amit. 01 March 2018
Black box personalization systems have become ubiquitous in our daily lives. They utilize collected data about us to make critical decisions such as those related to credit approval and insurance premiums. This leads to concerns about whether these systems respect expectations of fairness and privacy. Given the black box nature of these systems, it is challenging to test whether they satisfy certain fundamental fairness and privacy properties. For the same reason, while many black box privacy enhancing technologies offer consumers the ability to defend themselves from data collection, it is unclear how effective they are. In this doctoral thesis, we demonstrate that carefully designed methods and tools that soundly and scalably discover causal effects in black box software systems are useful in evaluating personalization systems and privacy enhancing technologies to understand how well they protect fairness and privacy. As an additional defense against discrimination, this thesis also explores legal liability for ad platforms in serving discriminatory ads. To formally study fairness and privacy properties in black box personalization systems, we translate these properties into information flow instances and develop methods to detect information flow. First, we establish a formal connection between information flow and causal effects. As a consequence, we can use randomized controlled experiments, traditionally used to detect causal effects, to detect information flow through black box systems. We develop AdFisher as a general framework to perform information flow experiments scalably on web systems and use it to evaluate discrimination, transparency, and choice on Google’s advertising ecosystem. We find evidence of gender-based discrimination in employment-related ads and a lack of transparency in Google’s transparency tool when serving ads for rehabilitation centers after visits to websites about substance abuse. Given the presence of discrimination and the use of sensitive attributes in personalization systems, we explore possible defenses for consumers. First, we evaluate the effectiveness of publicly available privacy enhancing technologies in protecting consumers from data collection by online trackers. Specifically, we use a combination of experimental and observational approaches to examine how well the technologies protect consumers against fingerprinting, an advanced form of tracking. Next, we explore legal liability for an advertising platform like Google for delivering employment and housing ads in a discriminatory manner under Title VII and the Fair Housing Act respectively. We find that an ad platform is unlikely to incur liability under Title VII due to its limited coverage. However, we argue that housing ads violating the Fair Housing Act could create liability if the ad platform targets ads toward or away from protected classes without explicit instructions from the advertiser.
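The core methodological idea here, using randomized controlled experiments to detect information flow through a black-box system, can be sketched in a few lines. The following is a simplified, hypothetical illustration rather than AdFisher's actual pipeline (which automates real browser agents and uses machine-learning-based test statistics): agents are randomly assigned to a treatment, a response is collected from a simulated black box, and a permutation test checks whether the treatment causally affects the output.

```python
import random

def black_box_response(treated: bool) -> float:
    """Stand-in for one agent's observation of the system (e.g., count of a type of ad).
    This simulated system leaks information about the treatment."""
    return max(0.0, random.gauss(10, 2) + (3 if treated else 0))

def permutation_test(treated, control, n_perm=10_000):
    """P-value for the observed difference in group means under random relabeling."""
    diff = lambda a, b: sum(a) / len(a) - sum(b) / len(b)
    observed = abs(diff(treated, control))
    pooled = list(treated) + list(control)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(diff(pooled[:len(treated)], pooled[len(treated):])) >= observed:
            hits += 1
    return hits / n_perm

assignment = [True] * 20 + [False] * 20
random.shuffle(assignment)                               # randomized assignment of agents
treated = [black_box_response(True) for flag in assignment if flag]
control = [black_box_response(False) for flag in assignment if not flag]
print("p-value:", permutation_test(treated, control))    # small p-value: evidence of information flow
```

Because assignment to the treatment is random, a significant difference in outputs can be attributed to the treatment, which is the causal-effect-as-information-flow connection the thesis formalizes.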
|
28 |
Understanding and implementing managing diversity in organisations: a study in the retail sector. Foster, Carley Jayne. January 2003
Managing diversity has multiple meanings. Nevertheless, there is some agreement in the literature relating to its broad principles. In particular, there is agreement that there are business benefits to be gained from adopting a managing diversity approach. In other words, an organisation can achieve certain advantages by treating people differently, rather than the same. In this sense, managing diversity is an alternative to the equal opportunities approach, whose main thrust arose from a moral imperative rather than a business case. The rhetoric also implies that implementing a managing diversity approach is straightforward. However, this study argues that there is a considerable difference between the persuasive rhetoric of managing diversity and the approach in practice. Adopting a qualitative case study strategy, this study has explored how managing diversity is understood and implemented by different organisational groups. In addition, the study has considered how perceptions of 'fairness' inform and interact with the application of managing diversity, and it has considered how realistic, in practice, the business case for managing diversity is. Materials have been obtained from three separate organisations within a large UK-based retailer. This study argues that managing diversity requires a stronger theoretical underpinning since there are a number of conceptual flaws that exist within the literature. The case analysis also indicates that the business case for managing diversity is based upon naïve assumptions that frequently fail to consider the 'costs' of managing diversity. The findings additionally suggest that treating people differently in an organisational environment that emphasises procedural justice and treating people the same is highly problematic. Furthermore, implementation is dependent on multiple interrelated internal and external organisational factors that are given little consideration in the literature. These factors have been identified in a map which can help organisations to make sense of managing diversity. Managing diversity, therefore, is an approach that is 'easy to talk about' but 'difficult to do'.
|
29 |
Using fairness instrumentally versus being treated fairly: a structural resolution. Pillutla, Madan Mohan. 11 1900
Research on justice in social exchange distinguishes between fairness as a goal and fairness as an interpersonal influence strategy. Strategic fairness is considered to be epiphenomenal and explainable by more basic motives, most notably self-interest; fairness as a goal is based only on Lerner's (1982) model. Recent findings contribute to a new model which specifies that allocators of resources use fairness strategically, while recipients treat justice as a goal by reacting to perceived injustice. This dissertation presents the model along with an experimental test of its predictions, which also addresses an ongoing debate in experimental economics on the role of fairness in ultimatum and dictator games.

The experiment was designed to distinguish between fairness as an interpersonal strategy and fairness as a goal. Participants moved from allocator to recipient roles in various experimental conditions that varied their information and interdependence.

Results show that ultimatum offerers made smaller offers when respondents knew how much they were dividing and larger offers when fairness was salient. Dictators made smaller offers than ultimatum offerers, but did not reduce their offers as much as ultimatum offerers when the respondent did not know how much was being divided. They appeared unaffected by the salience of fairness. Respondents rejected more small offers than large ones and more offers when they knew the amount being divided. The rejection rates of ultimatum and dictator offers did not vary. The results show substantive support for the idea that justice motives are role specific. Unexpected findings led to modifications of the model with respect to the interdependence of the actors.

The results are discussed in terms of their implications for the study of justice in general and for the specific case of fairness concerns in bargaining games. / Business, Sauder School of / Graduate
|
30 |
Ranking for Decision Making: Fairness and Usability. Kuhlman, Caitlin A. 06 May 2020
Today, ranking is the de facto way that information is presented to users in automated systems, which are increasingly used for high-stakes decision making. Such ranking algorithms are typically opaque, and users don’t have control over the ranking process. When complex datasets are distilled into simple rankings, patterns in the data are exploited which may not reflect the user’s true preferences, and can even include subtle encodings of historical inequalities. Therefore it is paramount that the user’s preferences and fairness objectives are reflected in the rankings generated. This research addresses concerns around fairness and usability of ranking algorithms. The dissertation is organized in two parts. Part one investigates the usability of interactive systems for automatic ranking. The aim is to better understand how to capture user knowledge through interaction design, and empower users to generate personalized rankings. A detailed requirements analysis for interactive ranking systems is conducted. Then alternative preference elicitation techniques are evaluated in a crowdsourced user study. The study reveals surprising ways in which collection interfaces may prompt users to organize more data, thereby requiring minimal effort to create sufficient training data for the underlying machine learning algorithm. Following from these insights, RanKit is presented. This system for personalized ranking automatically generates rankings based on user-specified preferences among a subset of items. Explanatory features give feedback on the impact of user preferences on the ranking model and confidence of predictions. A case study demonstrates the utility of this interactive tool. In part two, metrics for evaluating the fairness of rankings are studied in depth, and a new problem of fair ranking by consensus is introduced. Three group fairness metrics are presented: rank equality, rank calibration, and rank parity, which cover a broad spectrum of fairness considerations from proportional representation to error rate similarity across groups. These metrics are designed using a pairwise evaluation strategy to adapt algorithmic fairness concepts previously only applicable for classification. The metrics are employed in the FARE framework, a novel diagnostic tool for auditing rankings which exposes tradeoffs between different notions of fairness. Next, different ways of measuring a single definition of fairness are evaluated in a comparative study of state-of-the-art statistical parity metrics for ranking. This study identifies a core set of parity metrics which all behave similarly with respect to group advantage, reflecting well an intuitive definition of unfairness. However, this analysis also reveals that under relaxed assumptions about group advantage, different ways of measuring group advantage yield different fairness results. Finally, I introduce a new problem of fair ranking by consensus among multiple decision makers. A family of algorithms is presented that solves this open problem of guaranteeing fairness for protected groups of candidates while still producing a good aggregation of the base rankings. Exact solutions are presented as well as a method which guarantees fairness with minimal approximation error. Together, this research expands the utility of ranking algorithms to support fair decision making.
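The pairwise evaluation strategy mentioned above can be illustrated with a small sketch. The function below computes one plausible parity-style statistic, the fraction of cross-group pairs in which the protected group's item holds the higher rank; it is a simplified stand-in for illustration, not the exact definitions of rank equality, rank calibration, and rank parity developed in the dissertation.

```python
from itertools import combinations

def pairwise_parity(ranking, protected="A"):
    """Fraction of cross-group pairs in which the protected-group item holds the higher rank.
    `ranking` is a list of (item_id, group) tuples ordered from best to worst."""
    wins = total = 0
    for (top_id, top_grp), (bottom_id, bottom_grp) in combinations(ranking, 2):
        if top_grp != bottom_grp:                 # only pairs that mix the two groups
            total += 1
            if top_grp == protected:              # earlier list position = better rank
                wins += 1
    return wins / total if total else None

ranking = [("x1", "A"), ("x2", "B"), ("x3", "B"), ("x4", "A"), ("x5", "B")]
print(pairwise_parity(ranking))                   # values near 0.5 suggest neither group dominates
```

An auditing tool in the spirit of the FARE framework described above would compute several such statistics side by side to expose tradeoffs between different notions of fairness.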
|