1. Mitigating algorithmic bias in Artificial Intelligence systems. Fyrvald, Johanna, January 2019.
Artificial Intelligence (AI) systems are increasingly used in society to make decisions with direct implications for human lives: credit risk assessments, employment decisions and predictions about criminal suspects. As public attention has been drawn to examples of discriminatory and biased AI systems, concerns have been raised about the fairness of these systems. Face recognition systems, in particular, are often trained on non-diverse data sets in which certain groups are underrepresented. The focus of this thesis is to provide insights into the aspects that are important to consider in order to mitigate algorithmic bias, and to investigate the practical implications of bias in AI systems. To fulfil this objective, qualitative interviews are conducted with academics and practitioners in different roles in the field of AI, together with a quantitative online survey. A practical scenario covering face recognition and gender bias is also used to understand how people reason about the issue in a practical context. The main conclusion of the study is that, despite high levels of awareness and understanding of challenges and technical solutions, the academics and practitioners showed little or no awareness of the legal aspects of bias in AI systems. The implication of this finding is that AI can be seen as a disruptive technology, where organizations tend to develop their own mitigation tools and frameworks and rely on their own moral judgement and understanding of the area instead of turning to legal authorities.
2. Exploring artificial intelligence bias: a comparative study of societal bias patterns in leading AI-powered chatbots. Udała, Katarzyna Agnieszka, January 2023.
The development of artificial intelligence (AI) has revolutionised the way we interact with technology and with each other, both in society and in professional careers. Although they come with great potential for productivity and automation, AI systems have been found to exhibit biases that reflect and perpetuate existing societal inequalities. With the recent rise of AI tools built on large language model (LLM) technology, such as ChatGPT, Bing Chat and Bard AI, this research project aims to investigate the extent of AI bias in these tools and to explore its ethical implications. By reviewing and analysing the responses that three different AI chatbot tools generate to carefully crafted prompts, the author intends to determine whether the content generated by these tools exhibits patterns of bias related to various social identities, and to compare the extent to which such bias is present across the three tools. This study will contribute to the growing body of literature on AI ethics and inform efforts to develop more equitable and inclusive AI systems. By exploring the ethical dimensions of AI bias in selected LLMs, this research will shed light on the broader societal implications of AI and the role of technology in shaping our future.
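A minimal sketch of the comparison protocol described above might look as follows: the same set of crafted prompts is submitted to each chatbot and the responses are stored for later bias coding. Everything here is hypothetical; the abstract does not specify the prompts or any programmatic access to the tools, so `ask()` is a placeholder to be replaced by each tool's own interface.

```python
# Hypothetical sketch of the prompt-comparison protocol; prompts and access
# method are assumptions, not taken from the thesis.
import csv

PROMPTS = [
    "Describe a typical nurse and a typical engineer.",
    "Write a short story about a CEO and their assistant.",
]
TOOLS = ["ChatGPT", "Bing Chat", "Bard AI"]

def ask(tool: str, prompt: str) -> str:
    """Placeholder: submit the prompt via the tool's API or interface."""
    raise NotImplementedError

def collect_responses(path: str = "responses.csv") -> None:
    # Store (tool, prompt, response) triples so bias patterns can be coded
    # and compared across the three tools afterwards.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["tool", "prompt", "response"])
        for tool in TOOLS:
            for prompt in PROMPTS:
                writer.writerow([tool, prompt, ask(tool, prompt)])
```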
3. From Data to Loyalty: A quantitative study of consumers' response to AI-driven personalized marketing. Holmström, Emma; Larsson, Alma, January 2024.
Background: The increasing reliance on Artificial Intelligence (AI) in personalized marketing has reshaped consumer interactions in the digital era. With technological advancements, there is a growing need to explore how AI-driven personalization influences consumer behavior, particularly with regard to satisfaction, loyalty, and ethical considerations. Purpose: This thesis investigates the impact of AI-driven personalized marketing on consumer perceptions, attitudes, and behaviors. It aims to understand how trust and ethical considerations such as data privacy and algorithmic bias influence consumer responses to, and engagement with, personalized marketing. Method: Employing a quantitative approach, this study analyses survey data from 100 participants, providing a comprehensive view of the implications of AI-driven personalization. Statistical tools such as SPSS were used for the analysis, ensuring rigorous examination of the collected data. Conclusion: The findings reveal a nuanced response to personalized marketing. While AI-driven personalization can enhance consumer engagement and satisfaction, transparency and ethical considerations are critical to securing consumer trust and loyalty. The study underscores the importance of ethical marketing practices and the need for continuous adaptation to technological advancements in order to align with consumer expectations and ethical standards. This research contributes to academic discussions on personalized marketing and offers strategic insights for integrating technological advancements with consumer-centric approaches in marketing practice.
4. A framework for counteracting algorithmic bias (Ramverk för att motverka algoritmisk snedvridning). Engman, Clara; Skärdin, Linnea, January 2019.
The use of artificial intelligence (AI) has tripled in a year and is considered by some to be the most important paradigm shift in the history of technology. The ongoing AI race risks undermining questions of ethics and sustainability, which can have devastating consequences. In several cases, artificial intelligence has been shown to reproduce, and even reinforce, existing distortions in society in the form of prejudices and values. This phenomenon is called algorithmic bias. This study aims to formulate a framework for minimising the risk of algorithmic bias arising in AI projects and to adapt it to a medium-sized consultancy. The first part of the study is a literature review on bias, from both a cognitive and an algorithmic perspective. The second part is a review of existing recommendations from the EU, the AI Sustainability Center, Google and Facebook. The third and final part is an empirical contribution in the form of a qualitative interview study, which was used to adjust an initial framework in an iterative process. / In the use of third-generation Artificial Intelligence (AI) for the development of products and services, there are many hidden risks that may be difficult to detect at an early stage. One of the risks in the use of machine learning algorithms is algorithmic bias which, in simplified terms, means that implicit prejudices and values are embedded in the implementation of AI. A well-known case is Google's image recognition algorithm, which identified black people as gorillas. The purpose of this master thesis is to create a framework aimed at minimising the risk of algorithmic bias in AI development projects. To succeed with this task, the project has been divided into three parts. The first part is a literature study of the phenomenon of bias, both from a human perspective and from an algorithmic bias perspective. The second part is an investigation of existing frameworks and recommendations published by Facebook, Google, the AI Sustainability Center and the EU. The third part consists of an empirical contribution in the form of a qualitative interview study which has been used to create and adapt an initial general framework. The framework was created using an iterative methodology in which two full iterations were performed. The first version of the framework was created using insights from the literature study as well as from existing recommendations. To validate the first version, the framework was presented to one of Cybercom's customers in the private sector, who also had the opportunity to ask questions and give feedback on the framework. The second version of the framework was created using results from the qualitative interview study with machine learning experts at Cybercom. As a validation of the applicability of the framework to real projects and customers, a second qualitative interview study was performed together with Sida, one of Cybercom's customers in the public sector. Since the framework was formed in a circular process, the second version should not be treated as constant or complete. The interview study at Sida is considered the beginning of a third iteration, which could be developed further in future studies.
5. Algorithmic performativity: identity construction and interference (La performativité algorithmique : construction identitaire et interférence). Beaupré-Daignault, Alexis, 08 1900.
In this thesis, we attempt to determine whether artificial intelligence systems contribute to the deterioration of social justice. Our hypothesis is that a performative force emerges from the repetition of algorithmic decisions, and that this force interferes with the identity construction of individuals. Within the adopted analytical framework, which draws on Iris Marion Young's conception of justice, we argue that identity interference is unjust because it contributes to the oppression of cultural imperialism. If confirmed, this phenomenon of algorithmic performativity would, on the one hand, explain the interaction between identity and algorithms and, on the other, make its exploitation possible. Ultimately, we argue that the performativity of algorithms could be harnessed and put at the service of social justice.
6. Artificial intelligence and gender bias: a study of the relationship between artificial intelligence, gender bias and gender discrimination / Addressing Gender Bias in Artificial Intelligence. Lycken, Hanna, January 2019.
AI is predicted to have as great an impact on society as electricity has had, and advances in, for example, machine learning and neural networks have taken AI into sectors such as the justice system, recruitment and health care. But AI systems are, just like humans, susceptible to various types of distortion, which can lead to unfair decisions. An alarming number of studies and reports show that AI in many cases reflects, spreads and reinforces existing distortions in society in the form of prejudices and values concerning gender stereotypes and gender discrimination. Algorithms used in image recognition base their decisions on stereotypes about what is male and female, voice recognition is more likely to correctly recognise male voices than female voices, and voice assistants such as Microsoft's Cortana or Apple's Siri reinforce existing gender discrimination in society. The purpose of this study is to investigate how gender discrimination can arise in AI systems in general, what the relationship between gender bias and AI systems looks like, and how a company working with AI development reasons about the relationship between gender bias and AI development. The purpose is fulfilled through a literature review and in-depth interviews with key persons who work in various ways with AI development at KPMG. The results show that bias in general, and gender bias in particular, is present at every stage of AI development and can arise from a variety of factors, including but not limited to a lack of diversity in the development teams, the design of algorithms, and decisions about how data is collected, encoded or used to train algorithms. The proposed solutions are partly about addressing each identified causal factor, but also about viewing the problem of gender bias and gender discrimination in AI systems from a holistic perspective. The essence of the results is that it is not enough to change any one of the parameters unless the structure of the system is changed at the same time. / Recent advances in, for example, machine learning and neural networks have taken artificial intelligence into disciplines such as justice, recruitment and health care. As in all fields subject to AI, correct decisions are crucial and there is no room for discriminatory conclusions. However, AI systems are, just like humans, subject to various types of distortion, which can lead to unfair decisions. An alarming number of studies and reports show that AI in many cases reflects and reinforces existing gender bias in society. Algorithms used in image recognition base their decisions on stereotypes of what is male and female, voice recognition is more likely to correctly recognize male voices than female voices, and earlier in 2019 the United Nations released a study showing that voice assistants, such as Microsoft's Cortana or Apple's Siri, reinforce existing gender bias. The purpose of this study is to investigate how gender discrimination can appear in AI systems and what constitutes the relationship between gender bias, gender discrimination and AI systems. Furthermore, it addresses how a company that works with the development of AI reasons about the relationship between gender bias, gender discrimination and AI development. The study contains a thorough literature review, as well as in-depth interviews with key persons working with various aspects of AI development at KPMG. The results show that bias in general, and gender bias in particular, is present at all stages of AI development. It can occur due to a variety of factors, including but not limited to the lack of diversity in the workforce, the design of algorithms and the decisions related to how data is collected, encoded and used to train algorithms. The solutions proposed are partly about addressing the identified factors, but also about looking at the problem from a holistic perspective. The significance of seeing and understanding the links between gender bias in society and gender bias in AI systems, as well as reconsidering how each factor depends on and correlates with the others, is emphasized. The essence of the results is that it is not enough to alter any of the parameters unless the structure of the system is changed as well.
7. Enhancing Fairness in Facial Recognition: Balancing Datasets and Leveraging AI-Generated Imagery for Bias Mitigation. A Study on Mitigating Ethnic and Gender Bias in Public Surveillance Systems. Abbas, Rashad; Tesfagiorgish, William Issac, January 2024.
Facial recognition technology has become a ubiquitous tool in security and personal identification. However, the rise of this technology has been accompanied by concerns over inherent biases, particularly regarding ethnicity and gender. This thesis examines the extent of these biases by focusing on the influence of dataset imbalances in facial recognition algorithms. We employ a structured methodological approach that integrates AI-generated images to enhance dataset diversity, with the intent of balancing representation across ethnicities and genders. Using ResNet and VGG models, we conducted a series of controlled experiments comparing the performance impacts of balanced versus imbalanced datasets. Our analysis includes the use of confusion matrices and accuracy, precision, recall and F1-score metrics to critically assess the models' performance. The results demonstrate how tailored augmentation of training datasets can mitigate bias, leading to more equitable outcomes in facial recognition technology. We present our findings with the aim of contributing to the ongoing dialogue on AI fairness and propose a framework for future research in the field.
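As an illustration of the kind of evaluation this abstract describes, the sketch below computes accuracy, precision, recall and F1-score separately for each demographic group so that gaps between groups become visible. It is not taken from the thesis: the use of scikit-learn, the function name `per_group_metrics` and the toy labels are assumptions made for the example.

```python
# Illustrative sketch (assumed, not from the thesis): comparing recognition
# metrics across demographic groups to surface dataset-imbalance effects.
from collections import defaultdict
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def per_group_metrics(y_true, y_pred, groups):
    """Return accuracy/precision/recall/F1 computed separately per group."""
    buckets = defaultdict(lambda: ([], []))
    for t, p, g in zip(y_true, y_pred, groups):
        buckets[g][0].append(t)
        buckets[g][1].append(p)

    report = {}
    for g, (t, p) in buckets.items():
        prec, rec, f1, _ = precision_recall_fscore_support(
            t, p, average="macro", zero_division=0
        )
        report[g] = {
            "accuracy": accuracy_score(t, p),
            "precision": prec,
            "recall": rec,
            "f1": f1,
        }
    return report

# Example with toy data: a large gap between groups is the kind of bias that
# balanced or AI-augmented training sets are meant to reduce.
metrics = per_group_metrics(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 1],
    groups=["group_a", "group_a", "group_b", "group_b", "group_a", "group_b"],
)
for group, m in metrics.items():
    print(group, m)
```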
8. Malicious Intent Detection Framework for Social Networks. Fausak, Andrew Raymond, 05 1900.
Many, if not all, people have online social accounts (OSAs) in an online community (OC) such as Facebook (Meta), Twitter (X), Instagram (Meta), Mastodon or Nostr. OCs enable quick and easy interaction with friends, family, and even wider online communities, and make it easy to share information. There is also a dark side to OCs, where users with malicious intent join OC platforms for the purpose of criminal activities such as spreading fake news and misinformation, cyberbullying, propaganda, phishing, stealing, and unjust enrichment. These criminal activities are especially concerning when they harm minors. Detection and mitigation are needed to protect OCs and stop these criminals from harming others. Many solutions exist; however, they typically focus on a single category of malicious intent detection rather than offering an all-encompassing solution. To answer this challenge, we propose the first steps of a framework for analyzing and identifying malicious intent in OCs that we refer to as the malicious intent detection framework (MIDF). MIDF is an extensible proof of concept that uses machine learning techniques to enable detection and mitigation. The framework will first be used to detect malicious users using relationships alone, and can then be leveraged to create a suite of malicious intent vector detection models, including phishing, propaganda, scams, cyberbullying, racism, spam, and bots, for open-source online social networks such as Mastodon and Nostr.
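As a rough illustration of detecting malicious users from relationships alone, the sketch below derives simple structural features from a follower graph and trains a standard classifier on labelled accounts. This is not the thesis's implementation: the use of networkx and scikit-learn, the feature set and the function names are assumptions made for the example.

```python
# Illustrative sketch (assumed, not MIDF itself): classifying accounts as
# malicious from relationship structure alone, given a labelled follower graph.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def relationship_features(graph: nx.DiGraph, node) -> list:
    """Simple structural features of one account's position in the graph."""
    followers = graph.in_degree(node)
    following = graph.out_degree(node)
    # Spam-like accounts often follow far more accounts than follow them back.
    ratio = following / (followers + 1)
    # How interconnected the account's neighbourhood is.
    clustering = nx.clustering(graph.to_undirected(), node)
    return [followers, following, ratio, clustering]

def train_detector(graph: nx.DiGraph, labelled_nodes: dict) -> RandomForestClassifier:
    """labelled_nodes maps node id -> 1 (malicious) or 0 (benign)."""
    X = np.array([relationship_features(graph, n) for n in labelled_nodes])
    y = np.array(list(labelled_nodes.values()))
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

# Usage: score the remaining accounts and flag likely malicious ones for review.
# scores = clf.predict_proba([relationship_features(G, n) for n in unlabelled])[:, 1]
```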