
A framework for counteracting algorithmic bias

The use of artificial intelligence (AI) has tripled in a single year and is considered by some to be the most important paradigm shift in the history of technology. The ongoing AI race risks undermining questions of ethics and sustainability, which can have devastating consequences. In several cases, artificial intelligence has been shown to reflect, and even reinforce, existing biases in society in the form of prejudices and values. This phenomenon is called algorithmic bias. This study aims to formulate a framework for minimising the risk of algorithmic bias arising in AI projects, and to adapt it to a medium-sized consultancy. The first part of the study is a literature review of bias, from both a cognitive and an algorithmic perspective. The second part is a survey of existing recommendations from the EU, the AI Sustainability Center, Google and Facebook. The third and final part is an empirical contribution in the form of a qualitative interview study, which has been used to adjust an initial framework in an iterative process. / In the use of third-generation artificial intelligence (AI) for the development of products and services, there are many hidden risks that can be difficult to detect at an early stage. One of the risks of using machine learning algorithms is algorithmic bias, which, in simplified terms, means that implicit prejudices and values are embedded in the implementation of AI. A well-known case is Google's image recognition algorithm, which identified black people as gorillas. The purpose of this master's thesis is to create a framework that minimises the risk of algorithmic bias in AI development projects. To this end, the project has been divided into three parts. The first part is a literature study of the phenomenon of bias, from both a human and an algorithmic perspective.
The second part is an investigation of existing frameworks and recommendations published by Facebook, Google, the AI Sustainability Center and the EU. The third part consists of an empirical contribution in the form of a qualitative interview study, which has been used to create and adapt an initial general framework. The framework was developed using an iterative methodology in which two full iterations were performed. The first version of the framework drew on insights from the literature study and the existing recommendations. To validate this first version, the framework was presented to one of Cybercom's customers in the private sector, who was also given the opportunity to ask questions and provide feedback on the framework. The second version was created using results from the qualitative interview study with machine learning experts at Cybercom. To validate the applicability of the framework to real projects and customers, a second qualitative interview study was performed together with Sida, one of Cybercom's customers in the public sector. Since the framework was formed in a circular process, the second version should not be treated as fixed or complete. The interview study at Sida is considered the beginning of a third iteration, which could be developed further in future studies.

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-385348
Date January 2019
Creators Engman, Clara, Skärdin, Linnea
Publisher Uppsala universitet, Avdelningen för visuell information och interaktion
Source Sets DiVA Archive at Upsalla University
Language Swedish
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
Relation UPTEC STS, 1650-8319 ; 19015
