
Automated decision-making vs indirect discrimination: Solution or aggravation?

The use of automated decision-making systems by public institutions, for example to approve, determine, or deny individuals' benefits, is an effective way to get more work done in less time and at a lower cost than if it were done by humans. But as the technology has developed to help us in this way, so have the potential problems these systems can cause while operating. Those primarily affected are the individuals who are denied benefits, health care, or pensions. The systems can perpetuate hidden, historical stigmatization and prejudice, disproportionately disadvantaging members of historically marginalized groups through their decisions, simply because the systems have learned to do so. There is also a risk that the programmer encodes her or his own bias, or translates the applicable legislation or policies incorrectly, so that the finished system makes decisions on unknown grounds, demanding more, less, or entirely different things than the requirements set out in public, written law.

These systems operate in the language of mathematical algorithms, which most individuals, public employees, and courts will not understand. An individual who suspects that an automated decision discriminated against her or him must, to successfully claim discrimination before US, Canadian, and Swedish courts, the ECtHR, or the ECJ, show on which characteristic she or he was discriminated against, and in comparison to which other group, a group that was instead advantaged. Without any reasons or explanation for the decision available to the applicant or the court, the inability to identify such a comparator can lead to actual cases of indirect discrimination being dismissed.

A solution could be to follow Sophia Moreau's theory and focus on the actual harm the individual claims to have suffered, rather than on categorizing her or him by certain traits or on finding a suitable comparator. This resembles a ruling of the Swedish Court of Appeal in which a comparator was not necessary to establish that the applicant had been indirectly discriminated against by a public institution. Instead, the focus was on the harm the applicant claimed to have suffered, and then on whether the difference in treatment could be objectively justified. For Swedish and European legislation to meet the challenges that the use of automated decision-making systems can raise, this model of the Swedish Court of Appeal could be better suited to help individuals affected by a potentially indirectly discriminatory automated decision of a public institution.

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-161110
Date: January 2019
Creators: Lundberg, Emma
Publisher: Umeå universitet, Juridiska institutionen
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
