1

[pt] A SUPERVISÃO HUMANA DAS DECISÕES AUTÔNOMAS DE IA COMO INSTRUMENTO DE TUTELA DA AUTONOMIA EXISTENCIAL / [en] HUMAN SUPERVISION OF AUTOMATED AI DECISIONS AS AN INSTRUMENT FOR THE PROTECTION OF HUMAN AUTONOMY

RUBIA LUANA CARVALHO VIEGAS SCHMALL 27 September 2023 (has links)
[pt] A sociedade passa por um momento de intensas transformações pautadas no uso de novas tecnologias. Nesse contexto, a inteligência artificial (IA) assume o papel de protagonista gerando debates acerca dos limites da sua aplicação. Cada vez mais presente na vida dos indivíduos e instituições, a IA participa de processos de tomada de decisão que, não raro, têm reflexos em direitos fundamentais e questões existenciais dos indivíduos. Tendo em vista a busca de soluções pautadas na centralidade humana no uso da tecnologia, o presente estudo se propõe a investigar a supervisão humana como instrumento de tutela da autonomia existencial no contexto de tomada de decisões automatizadas por sistemas de IA. Por meio da análise da relação entre a autonomia da máquina e a autonomia existencial, o texto traça o caminho para a investigação dos princípios e dispositivos legais vigentes que legitimam a necessidade da supervisão humana sobre os resultados decisórios gerados por sistemas de IA de alto risco, bem como o tratamento do tema nas propostas legislativas em trâmite no Brasil e União Europeia que visam regular os usos e aplicações da IA. / [en] Society is going through a moment of intense transformation based on the use of new technologies. In this context, artificial intelligence (AI) takes on the role of protagonist, generating debates about the limits of its application. Increasingly present in the lives of individuals and institutions, AI takes part in decision-making processes that often have repercussions on fundamental rights and existential matters of individuals.
In view of the search for solutions based on human centrality in the use of technology, the present study proposes to investigate human supervision as an instrument to protect existential autonomy in the context of automated decision-making by AI systems. Through the analysis of the relationship between machine autonomy and existential autonomy, the text outlines the path for an investigation of the current legal principles and provisions that legitimize the need for human supervision over the decision-making results generated by high-risk AI systems, as well as the treatment of the subject in the legislative proposals in progress in Brazil and the European Union that aim to regulate the uses and applications of AI.
2

Explaining Automated Decisions in Practice : Insights from the Swedish Credit Scoring Industry / Att förklara utfall av AI system för konsumenter : Insikter från den svenska kreditupplysningsindustrin

Matz, Filip, Luo, Yuxiang January 2021 (has links)
The field of explainable artificial intelligence (XAI) has gained momentum in recent years following the increased use of AI systems across industries, leading to bias, discrimination, and data security concerns. Several conceptual frameworks for how to reach AI systems that are fair, transparent, and understandable have been proposed, as well as a number of technical solutions improving some of these aspects in a research context. However, there is still a lack of studies examining the implementation of these concepts and techniques in practice. This research aims to bridge the gap between prominent theory within the area and practical implementation, exploring the implementation and evaluation of XAI models in the Swedish credit scoring industry, and proposes a three-step framework for the implementation of local explanations in practice. The research methods used consisted of a case study with the model development at UC AB as a subject and an experiment evaluating the consumers' levels of trust and system understanding as well as the usefulness, persuasive power, and usability of the explanations for three different explanation prototypes developed. The framework proposed was validated by the case study and highlighted a number of key challenges and trade-offs present when implementing XAI in practice. Moreover, the evaluation of the XAI prototypes showed that the majority of consumers prefer rule-based explanations, but that preferences for explanations are still dependent on the individual consumer. Recommended future research endeavors include studying a long-term XAI project in which the models can be evaluated by the open market and the combination of different XAI methods in reaching a more personalized explanation for the consumer. / Under senare år har antalet AI-implementationer stadigt ökat i flera industrier.
Dessa implementationer har visat flera utmaningar kring nuvarande AI system, specifikt gällande diskriminering, otydlighet och datasäkerhet vilket lett till ett intresse för förklarbar artificiell intelligens (XAI). XAI syftar till att utveckla AI system som är rättvisa, transparenta och begripliga. Flera konceptuella ramverk har introducerats för XAI som presenterar etiska såväl som politiska perspektiv och målbilder. Dessutom har tekniska metoder utvecklats som gjort framsteg mot förklarbarhet i forskningskontext. Däremot saknas det fortfarande studier som undersöker implementationer av dessa koncept och tekniker i praktiken. Denna studie syftar till att överbrygga klyftan mellan den senaste teorin inom området och praktiken genom en fallstudie av ett företag i den svenska kreditupplysningsindustrin. Detta genom att föreslå ett ramverk för implementation av lokala förklaringar i praktiken och genom att utveckla tre förklaringsprototyper. Rapporten utvärderar även prototyperna med konsumenter på följande dimensioner: tillit, systemförståelse, användbarhet och övertalningsstyrka. Det föreslagna ramverket validerades genom fallstudien och belyste ett antal utmaningar och avvägningar som förekommer när XAI system utvecklas för användning i praktiken. Utöver detta visar utvärderingen av prototyperna att majoriteten av konsumenter föredrar regelbaserade förklaringar men indikerar även att preferenser mellan konsumenter varierar. Rekommendationer för framtida forskning är dels en längre studie, vari en XAI modell introduceras på och utvärderas av den fria marknaden, dels forskning som kombinerar olika XAI metoder för att generera mer personliga förklaringar för konsumenter.
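The rule-based local explanations that consumers preferred in the study above can be illustrated with a minimal sketch. Everything below is hypothetical: the feature names, thresholds, weights, and decision cutoff are invented for illustration and are not taken from the thesis or from UC AB's actual scoring model.

```python
# A toy rule-based local explanation for a credit decision.
# All rules and numbers here are illustrative assumptions, not a real scorecard.

def score(applicant):
    """Return a hypothetical credit score in [0, 100]."""
    s = 50
    s += min(applicant["income"] / 1000, 30)         # income adds up to +30 points
    s -= 10 * applicant["missed_payments"]           # each missed payment costs 10 points
    if applicant["debt_ratio"] > 0.5:                # high debt-to-income penalty
        s -= 15
    return max(0, min(100, s))

def explain(applicant, threshold=60):
    """Local explanation: report the decision and which rules fired for THIS applicant."""
    reasons = []
    if applicant["missed_payments"] > 0:
        reasons.append(f"{applicant['missed_payments']} missed payment(s) lowered the score")
    if applicant["debt_ratio"] > 0.5:
        reasons.append("debt-to-income ratio above 50% lowered the score")
    if applicant["income"] / 1000 >= 30:
        reasons.append("income at the model's cap raised the score")
    decision = "approved" if score(applicant) >= threshold else "denied"
    return decision, reasons

applicant = {"income": 25000, "missed_payments": 2, "debt_ratio": 0.6}
decision, reasons = explain(applicant)
print(decision)       # → denied
for r in reasons:
    print("-", r)
```

Because the explanation lists the exact rules that applied to the individual case, it is local in the XAI sense: two applicants with the same decision can receive different reasons.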
