
Crowd Compositions for Bias Detection and Mitigation in Predicting Recidivism

This thesis explores an approach to predicting recidivism by leveraging crowdsourcing, contrasting it with traditional judicial discretion and algorithmic models. Instead of relying on judges or algorithms, participants predicted the likelihood of re-offending using the COMPAS dataset, which includes demographic and criminal record information. The study analyzed both quantitative and qualitative data to assess biases in human versus algorithmic predictions. Findings reveal that homogeneous crowds reflect the biases of their composition, leading to more pronounced gender and racial biases. In contrast, heterogeneous crowds, with equal and random distributions, present a more balanced view, though underlying biases still emerge. Both gender and racial biases influence how re-offending risk is perceived, significantly impacting risk evaluations. Specifically, crowds rated African American offenders as less likely to re-offend than COMPAS did, which assigned them higher risk scores, while crowds perceived Caucasian and Hispanic offenders as more likely to re-offend. Gender differences also emerged, with males rated as less likely to re-offend and females as more likely. This study highlights crowdsourcing's potential to mitigate biases and provides insights into balancing consistency and fairness in risk assessments.

Master of Science

Within the criminal justice system, predicting whether someone will re-offend has typically depended on the judgment of judges and computerized systems. This thesis investigates another avenue for predicting re-offending by using crowdsourcing, which gathers input from a group of people. In this study, participants were asked to predict the likelihood of re-offending for several offenders using demographic and criminal record information from the publicly available COMPAS dataset. Participants provided scores, and some also explained their reasoning.
Bias, defined as a systematic unfairness that leads to prejudiced outcomes, was a key focus. To understand bias, the study created different groups within the participant crowd based on age, gender, and race, and compared their predictions with COMPAS scores. The analysis revealed important insights into the biases present in both human and algorithmic predictions. A homogeneous crowd is associated with minimal differences in ratings across genders and races, suggesting a consistent but potentially biased perspective. A diverse crowd, by contrast, leads to varied ratings without a clear trend, reflecting a broader range of viewpoints but also increased variability. This suggests that while a diverse crowd may help reduce bias, it can also result in less predictable assessments.

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/121230
Date: 30 September 2024
Creators: Mhatre, Sakshi Manish
Contributors: Computer Science & Applications, Luther, Kurt, Lanus, Erin F., Lu, Chang Tien
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Language: English
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
