1

Incorporating Ethics in Delegation To and From Artificial Intelligence-Enabled Information Systems

Saeed, Kashif 07 1900
AI-enabled information systems (AI-enabled IS) offer enhanced utility and efficiency due to their knowledge-based endowments, enabling human agents to assign tasks to and receive tasks from AI-enabled IS. This delegation improves decision-making, helps manage laborious jobs, and reduces human error. Despite these performance-based endowments and efficiencies, there are significant ethical concerns regarding the use of and delegation to AI-enabled IS, which have been extensively addressed in the literature on the dark side of artificial intelligence (AI). Notable concerns include bias and discrimination, fairness, transparency, privacy, accountability, and autonomy. However, the Information Systems (IS) literature lacks a delegation framework that incorporates ethics into the delegation mechanism. This work seeks to integrate a mixed deontological-teleological ethical system into the mechanism of delegation to (and from) AI-enabled IS. To that end, I present a testable model to ethically appraise various AI-enabled IS and to ethically evaluate delegation to (and from) AI-enabled IS in various settings and situations.
2

Why you should care: Ethical AI principles in a business setting : A study investigating the relevancy of the Ethical framework for AI in the context of the IT and telecom industry in Sweden

Hugosson, Beatrice, Dinh, Donna, Esmerson, Gabriella January 2019
Background: The development of artificial intelligence (AI) is ever increasing, especially in the telecom and IT industry, due to its great potential for competitive advantage. However, AI is being implemented in society at a fast pace with insufficient consideration of the ethical implications. Different initiatives and organizations are now launching ethical principles to prevent possible negative effects stemming from AI usage. One example is the Ethical Framework for AI by Floridi et al. (2018), who established five ethical principles for sustainable AI inspired by bioethics. Moreover, Sweden is taking AI ethics seriously, since the government is on a mission to make the country a world leader in harnessing artificial intelligence. Problem: Research in the field of ethical artificial intelligence is increasing but still in its infancy, and the majority of academic articles are conceptual papers. Moreover, the few frameworks that exist for responsible AI are not always action-guiding or applicable to all AI applications and contexts. Purpose: This study aims to contribute empirical evidence on artificial intelligence ethics and to investigate the relevance of an existing framework, namely the Ethical Framework for AI by Floridi et al. (2018), in the IT and telecom industry in Sweden. Method: A qualitative multiple-case study of ten semi-structured interviews with participants from the companies EVRY and Ericsson. The findings were then connected to the literature on artificial intelligence and ethics. Results: The most reasonable interpretation of the findings and analysis is that some parts of the framework are relevant, while others are not. Specifically, the principles of autonomy and non-maleficence appear applicable, justice and explicability appear to be only partially supported by the participants, and beneficence is suggested not to be relevant for several reasons.
3

The Governance of AI-based Information Technologies within Corporate Environments

Lobana, Jodie January 2021
Artificial Intelligence (AI) has made significant progress in recent years and is gaining a strong foothold in business. Currently, there is no generally accepted scholarly framework for the governance of AI-based information technologies within corporate environments. Boards of directors, who have the responsibility of overseeing corporate operations, need to know how best to govern AI technologies within their companies. In response, this dissertation aims to identify the key elements that can assist boards in the governance of AI-based information technologies, and to understand how those governance elements dynamically interact within a holistic system. As AI governance is a novel phenomenon, an exploratory investigation was conducted via a qualitative approach. Specifically, the study adopted a grounded theory methodology, within the constructivist paradigm, with the intent of generating theory rather than validating existing theory. Data collection included in-depth interviews with key experts in AI research, development, management, and governance in corporate and academic settings, supplemented with material from conference presentations given by AI experts. Findings from this dissertation yielded a theoretical model of AI governance that shows the various AI governance areas and their constituent elements, their dynamic interaction, and the impact of these elements in enhancing the organizational performance of AI-based projects and reducing the risks associated with those projects. The dissertation makes a scholarly contribution by comparing governance elements in the established IT governance domain with those in the new AI governance domain. In addition to these theoretical contributions, the study offers practical contributions for boards of directors.
These include a holistic AI governance framework that pictorially represents twenty-two AI governance elements, which boards can use to build their own custom AI governance frameworks. In addition, recommendations are provided to assist boards in starting or enhancing their AI governance journeys. / Thesis / Doctor of Philosophy (PhD) / Artificial Intelligence (AI) refers to a set of technologies that seek to perform cognitive functions associated with human minds, such as learning, planning, and problem-solving. AI brings abundant opportunities as well as substantial risks. Major companies are trying to figure out how best to benefit from AI technologies. Boards of directors, with the responsibility of overseeing company operations, need to know how best to govern such technologies. In response, this study was conducted to uncover key AI governance elements that can assist boards in the governance of AI. Data were collected through in-depth interviews with AI experts and by attending AI conference presentations. Findings yield a theoretical model of AI governance that can assist scholars in enhancing their understanding of this emerging governance area. Findings also provide a holistic framework of AI governance that boards can use as a practical tool to enhance the effectiveness of their AI governance process.
4

A Jagged Little Pill: Ethics, Behavior, and the AI-Data Nexus

Kormylo, Cameron Fredric 21 December 2023
The proliferation of big data and the algorithms that utilize it have revolutionized the way in which individuals make decisions, interact, and live. This dissertation presents a structured analysis of behavioral ramifications of artificial intelligence (AI) and big data in contemporary society. It offers three distinct but interrelated explorations. The first chapter investigates consumer reactions to digital privacy risks under the General Data Protection Regulation (GDPR), an encompassing regulatory act in the European Union aimed at enhancing consumer privacy controls. This work highlights how consumer behavior varies substantially between high- and low-risk privacy settings. These findings challenge existing notions surrounding privacy control efficacy and suggest a more complex consumer risk assessment process. The second study shifts to an investigation of historical obstacles to consumer adherence to expert advice, specifically betrayal aversion, in financial contexts. Betrayal aversion, a well-studied phenomenon in economics literature, is defined as the strong dislike for the violation of trust norms implicit in a relationship between two parties. Through a complex simulation, it contrasts human and algorithmic financial advisors, revealing a significant decrease in betrayal aversion when human experts are replaced by algorithms. This shift indicates a transformative change in the dynamics of AI-mediated environments. The third chapter addresses nomophobia – the fear of being without one's mobile device – in the workplace, quantifying its stress-related effects and impacts on productivity. This investigation not only provides empirical evidence of nomophobia's real-world implications but also underscores the growing interdependence between technology and mental health. 
Overall, the dissertation integrates interdisciplinary theoretical frameworks and robust empirical methods to delineate the profound and often nuanced implications of the AI-data nexus on human behavior, underscoring the need for a deeper understanding of our relationship with evolving technological landscapes. / Doctor of Philosophy / The massive amounts of data collected online and the smart technologies that use these data often affect the way we make decisions, interact with others, and go about our daily lives. This dissertation explores that relationship, investigating how artificial intelligence (AI) and big data are changing behavior in today's society. In my first study, I examine how individuals respond to high and low risks of sharing their personal information online, specifically under the General Data Protection Regulation (GDPR), a regulation meant to protect online privacy in the European Union. Surprisingly, the results show that changes enacted by the GDPR, such as defaults that automatically select the more privacy-preserving option, are more effective in settings where the risk to one's privacy is low. This implies that the process by which people decide when and with whom to share information online is more complex than previously thought. In my second study, I shift focus to how people follow advice from experts, especially in financial decision contexts. I look specifically at betrayal aversion, a phenomenon studied in economics that captures individuals' unwillingness to trust someone when they fear they might be betrayed. I examine whether betrayal aversion changes when human experts are replaced by algorithms. Interestingly, individuals displayed no betrayal aversion toward a financial investment algorithm, showing that non-human experts may offer certain benefits to consumers over their human counterparts.
Finally, I study a modern phenomenon called 'nomophobia' – the fear of being without your mobile phone – and how it affects people at work. I find that this fear can significantly increase stress, especially as phone-battery levels decrease. This leads to a reduction in productivity, highlighting how deeply technology is intertwined with our mental health. Overall, this work utilizes a mix of theories and detailed analyses to show the complex and often subtle ways AI and big data are influencing our actions and thoughts. It emphasizes the importance of understanding our relationship with technology as it continues to evolve rapidly.
5

Ethical Questions Raised by AI-Supported Mentoring in Higher Education

Köbis, Laura, Mehner, Caroline 30 March 2023
Mentoring is a highly personal and individual process, in which mentees draw on expertise and experience to expand their knowledge and achieve individual goals. The emerging use of AI in mentoring processes in higher education not only necessitates adherence to applicable laws and regulations (e.g., relating to data protection and nondiscrimination) but also requires a thorough understanding of ethical norms, guidelines, and unresolved issues (e.g., integrity of data, safety and security of systems, confidentiality, avoiding bias, and ensuring trust in and transparency of algorithms). Mentoring in higher education requires one of the highest degrees of trust, openness, and social-emotional support, as much is at stake for mentees, especially their academic attainment, career options, and future life choices. However, ethical compromises seem to be common when digital systems are introduced, and the underlying ethical questions in AI-supported mentoring are still insufficiently addressed in research, development, and application. One of the challenges is to strive for privacy and data economy on the one hand, while Big Data is the prerequisite of AI-supported environments on the other. How can ethical norms and general guidelines of AIED be respected in complex digital mentoring processes? This article strives to start a discourse on the relevant ethical questions and thereby raise awareness for the ethical development and use of future data-driven, AI-supported mentoring environments in higher education.
6

Ethical Risk Analysis of the Use of AI in Music Production

Reje, Alexandra January 2022
With the growing use of AI in new fields, the ethical problems that arise with AI have become a more prominent topic. Multiple ethical guidelines and frameworks have been proposed to help companies and researchers develop ethical products, but execution still lags behind. This project studies the ethical risks that arise when AI is used in music production tools. The study was conducted by interviewing five start-up companies about aspects of their company policies and products from an ethics point of view. The interviews were analysed, and six areas of interest were identified. A final risk analysis was then performed based on an existing ethical guideline, the Ethics Guidelines for Trustworthy AI by the AI HLEG. Multiple risk areas were discovered, the largest relating to Diversity, Bias, Explainability, and Privacy. Another finding was that multiple companies do not currently have an ethical framework, but that they were positive about implementing one in the future.
7

How do Communicators in Social Change Organisations Navigate the Use of Artificial Intelligence? : A Thematic Analysis Through the Lens of Ethical Storytelling

Svensson, Anna January 2023
The increasing accessibility and sophistication of Artificial Intelligence (AI) in generating high-quality images and texts presents opportunities, risks, and ethical dilemmas in the Communication about Development (ComDev) sector. This dissertation asks: how do ComDev professionals navigate the ethics of using AI to create visual and written content to raise funds and motivate action on global social issues? Drawing on the idea that AI ethics cannot be understood or achieved independent of a broader ethical structure, the project develops a theoretical framework of ethical storytelling, suggesting this can be successfully applied to AI and non-AI-generated content. Based on semi-structured interviews and thematic analysis of the resulting transcript, findings can broadly be categorised into concerns regarding intentional manipulation and misleading through the use of AI tools, as well as unintentional harm caused by biased models and outputs. The participants' reflections revealed an interest in and concern with ethical storytelling. The way in which these ethical concerns and proposed strategies for mitigation are navigated supports the thesis that the same ethical storytelling framework can be applied to content creation and outputs regardless of the techniques used to generate it. The findings illustrate a tension between ethical storytelling and practical considerations related to fundraising experienced by ComDev professionals as they consider the risks and opportunities of this emerging technology. This conclusion contributes to, and supports findings within, an existing body of research on how people working in this sector navigate the complexities and ethical dilemmas of their work.
8

From Support to Disruption : Highlighting the discrepancy between user needs, current AI offerings and research focus for professional video creators, a situated user study. / Från stöd till störning : Utforska nuvarande AI-tillgänglighet för professionella innehållsskapare och vilka framtida verktyg som kan störa branschen.

van den Nieuwenhuijzen, Sietse January 2023
Artificial intelligence (AI) has the potential to transform various industries, including the creative industry [1]. This potential disruption is driven by technology-centred products, which makes it increasingly important to keep user goals, behaviour, and needs at the centre of the development and design phases. Through a situated user study using online content analysis, a survey, and semi-structured interviews, I describe the current role of AI tools in the video production industry and identify potential future tools that could benefit professional content creators. According to these professionals, AI tools currently serve a primarily supportive role, enhancing efficiency and video quality while still requiring human input for storytelling and quality assurance. The professionals are increasingly integrating AI into their workflows to increase their value as creators. Drawing on a User-Centered Design (UCD) approach, I highlight a discrepancy between user needs and research focus, suggesting that future AI tools should prioritize improving client communication, automating non-creative tasks in post-production, and making animation more accessible. As generative video AI continues to develop, ethical concerns surrounding gender and ethnicity bias remain an important aspect for future research to ensure equal representation in generated video content.
9

IS THE FUTURE OF BEAUTY PERSONALIZED? : CASE STUDY FOR MICROBIOME SKINCARE BRAND SKINOME

Kanaska, Santa Daniela January 2022
The researcher takes a user-centric empirical approach to assess the views of participants from different consumer groups on the adoption of personalization technology within the skincare industry. In addition, the study highlights the main opportunities and concerns that users associate with personalized technology solutions within the industry, such as skincare and product quizzes, in-depth questionnaires, smart skin analysis tools, and others. The empirical sample consists of 17 subjects representing three generation groups (Generations X, Y, and Z). For data analysis, the author performed content and discourse analysis, sentiment assessment, and word cloud visualizations using the Python word cloud library. The sentiment analysis shows that Gen X participants overall have a negative attitude towards the adoption of personalization technology for skincare (average sentiment: 0.294), compared with Gen Y and Gen Z consumers, whose results showed neutral and positive tendencies. The content analysis showed that Gen Y and Gen Z consumers are more concerned about data governance and its associated risks than Gen X consumers, for whom results and skin-health improvements were of higher importance. According to the gathered data, the majority of Gen Y and Gen Z participants see personalization technology as the future of the skincare industry; Gen X consumers, however, believe that personalization in skincare will not be attached to a single brand, will focus more on addressing specific skin conditions and concerns, and will be more evidence-based.
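The pipeline this abstract describes (sentiment scoring of interview responses plus word-frequency counts for word cloud visualization) can be sketched with the standard library alone. The lexicon, scores, and responses below are illustrative assumptions, not the study's actual instrument or data:

```python
from collections import Counter

# Illustrative sentiment lexicon -- a stand-in, not the study's scoring instrument.
LEXICON = {"great": 0.8, "helpful": 0.7, "concern": -0.6, "risk": -0.5}

STOPWORDS = {"the", "a", "is", "and", "about", "some", "of", "to"}

def average_sentiment(responses):
    """Mean lexicon score across all scored words in a list of responses."""
    scores = [LEXICON[w] for r in responses for w in r.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def word_frequencies(responses):
    """Word counts of the kind that feed a word cloud (size ~ frequency)."""
    words = [w for r in responses for w in r.lower().split() if w not in STOPWORDS]
    return Counter(words)

# Hypothetical interview answers from one generation group.
gen_group = ["Personalization is helpful and great", "Some concern about data risk"]
print(round(average_sentiment(gen_group), 3))
print(word_frequencies(gen_group).most_common(3))
```

In a setup like the study's, these frequency counts would be passed to the `wordcloud` package (e.g., `WordCloud().generate_from_frequencies(...)`) for the actual visualization.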
10

Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems / Verktyg och metoder för företag att utveckla transparenta och rättvisa maskininlärningssystem

Schildt, Alexandra, Luo, Jenny January 2020
AI has quickly grown from a vast concept into an emerging technology that many companies are looking to integrate into their businesses, and it is widely considered an ongoing “revolution” transforming science and society. Researchers and organizations agree that AI and the recent rapid developments in machine learning carry huge potential benefits. At the same time, there is an increasing worry that ethical challenges are not being addressed in the design and implementation of AI systems. As a result, AI has sparked a debate about what principles and values should guide its development and use. However, there is a lack of consensus about what values and principles should guide the development, as well as what practical tools should be used to translate such principles into practice. Although researchers, organizations, and authorities have proposed tools and strategies for working with ethical AI within organizations, a holistic perspective tying together the tools and strategies proposed in ethical, technical, and organizational discourses is lacking. This thesis aims to bridge that gap by addressing the following purpose: to explore and present the tools and methods companies and organizations should have in order to build machine learning applications in a fair and transparent manner. The study is qualitative in nature, and data were collected through a literature review and interviews with subject matter experts. In our findings, we present a number of tools and methods to increase fairness and transparency. Our findings also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as in different stages of the machine learning development process.
Tools used outside the development process, such as ethical guidelines, appointed roles, workshops, and training, have positive effects on alignment, engagement, and knowledge while providing valuable opportunities for improvement. Furthermore, the findings suggest that it is crucial to translate high-level values into low-level requirements that are measurable and can be evaluated against. We propose a number of pre-model, in-model, and post-model techniques that companies can and should implement to increase fairness and transparency in their machine learning systems.
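As one concrete illustration of the point above about turning high-level values into measurable low-level requirements, a standard post-model technique is to audit the gap in positive-prediction rates across groups (demographic parity). The metric is well established, but the data, group labels, and tolerance below are illustrative assumptions, not taken from the thesis:

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between groups.

    predictions: 0/1 model outputs; groups: parallel group labels.
    Returns max rate minus min rate, so it also handles more than two groups.
    """
    rates = {}
    for g in set(groups):
        member_preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(member_preds) / len(member_preds)
    return max(rates.values()) - min(rates.values())

# Illustrative post-model audit: the 0.2 tolerance stands in for an
# organization-specific requirement derived from a high-level fairness principle.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
gap = demographic_parity_difference(preds, groups)
print(f"rate gap = {gap:.2f}; requirement met: {gap <= 0.2}")
```

A check like this is measurable and evaluable in the sense the findings call for: it can gate a model release in the same way a test suite does.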
