11

Explaining Automated Decisions in Practice : Insights from the Swedish Credit Scoring Industry / Att förklara utfall av AI system för konsumenter : Insikter från den svenska kreditupplysningsindustrin

Matz, Filip, Luo, Yuxiang January 2021 (has links)
The field of explainable artificial intelligence (XAI) has gained momentum in recent years following the increased use of AI systems across industries, leading to concerns about bias, discrimination, and data security. Several conceptual frameworks for how to reach AI systems that are fair, transparent, and understandable have been proposed, as well as a number of technical solutions improving some of these aspects in a research context. However, there is still a lack of studies examining the implementation of these concepts and techniques in practice. This research aims to bridge the gap between prominent theory within the area and practical implementation by exploring the implementation and evaluation of XAI models in the Swedish credit scoring industry, and proposes a three-step framework for the implementation of local explanations in practice. The research methods consisted of a case study, with the model development at UC AB as its subject, and an experiment evaluating consumers' levels of trust and system understanding as well as the usefulness, persuasive power, and usability of three different explanation prototypes developed. The proposed framework was validated by the case study and highlighted a number of key challenges and trade-offs present when implementing XAI in practice. Moreover, the evaluation of the XAI prototypes showed that the majority of consumers prefer rule-based explanations, but that preferences for explanations still depend on the individual consumer. Recommended future research includes studying a long-term XAI project in which the models can be evaluated by the open market, and combining different XAI methods to reach a more personalized explanation for the consumer.
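The prototypes themselves are not published in this listing; purely as an illustration of the kind of rule-based local explanation the abstract reports consumers preferring, here is a minimal sketch in which every feature name, threshold, and reason text is a hypothetical stand-in.

```python
# Minimal sketch of a rule-based local explanation for a credit decision.
# All feature names, thresholds, and wording are hypothetical illustrations,
# not the prototypes evaluated in the thesis.

RULES = [
    # (feature, threshold, comparison, reason shown to the consumer)
    ("payment_remarks", 0, "gt", "You have registered payment remarks."),
    ("debt_to_income", 0.45, "gt", "Your debt is high relative to your income."),
    ("credit_history_years", 3, "lt", "Your credit history is shorter than three years."),
    ("recent_credit_checks", 5, "gt", "Many credit checks were made on you recently."),
]

def explain_decision(applicant: dict) -> list[str]:
    """Return the human-readable reasons that apply to one applicant."""
    reasons = []
    for feature, threshold, comparison, reason in RULES:
        value = applicant.get(feature)
        if value is None:
            continue
        if comparison == "gt" and value > threshold:
            reasons.append(reason)
        elif comparison == "lt" and value < threshold:
            reasons.append(reason)
    return reasons or ["No negative factors were found in your record."]

if __name__ == "__main__":
    applicant = {"payment_remarks": 1, "debt_to_income": 0.52,
                 "credit_history_years": 7, "recent_credit_checks": 2}
    for line in explain_decision(applicant):
        print("-", line)
```

A design like this trades the fidelity of model-based attributions for wording a consumer can act on, which is the trade-off the thesis evaluates.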
12

Hur textbaserade förklaringar bör designas för förbättrad förståelse av AI-baserade finanssystem / How text based explanations for increased understanding should be designed for financial AI-systems

Svensson, Casper, Kristiansson, Christoffer January 2021 (has links)
The financial sector has benefited greatly from advanced AI systems that can predict trends and make their own decisions based on previous processes. Systems that use AI with hundreds of parameters can thus perform more work in fewer working hours than a human. For humans to trust such thinking machines, they must be able to understand the programs well enough to check that the system follows people's values and needs. This communication between human and system takes place through the machine explaining why it made a certain decision, also called "Explainable AI" (XAI). The study shows that XAI lacks clear guidelines on how such explanations should be designed to create understanding for those who work with these powerful systems. This thesis examines the role of credit managers at banks, what they base their decisions on, and how user-centered explanations can contribute to a greater understanding of AI-produced decisions on loan applications. To achieve this, UX methods that put real users in focus were used. The data collection consisted of qualitative interviews with credit managers at banks in Sweden, which were then analyzed and compared with current research in AI and XAI. About 30 domain-specific parameters were identified and formed the basis for six designed explanations, whose understandability was evaluated by comparing the credit managers' responses to each explanation. Five recommendations are presented on how AI systems should present explanations to credit managers at smaller banks in Sweden.
13

Detektera mera! : Maskininlärningsmetoder mot kreditkortsbedrägerier / Detect more! : Machine learning methods against credit card fraud

Jönsson, Elin January 2022 (has links)
This bachelor's thesis examines and evaluates machine learning methods for detecting credit card fraud, with the aim of identifying problem areas and suggesting improvements. Despite the development and advance of artificial intelligence (AI), successfully classifying credit card fraud remains a problem. The work includes a literature study to identify current machine learning methods and challenges, followed by an experiment to evaluate these methods and propose improvements. The results show that the current machine learning methods are a mix of newer and older approaches such as deep neural networks, logistic regression, Naive Bayes, random forests, decision trees, and multi-layer perceptrons. These are most often evaluated with performance measures such as accuracy, F1-score, confusion matrix, and area under the curve (AUC). Today's fraud detection primarily faces classification problems due to complex, changing, and manipulated data. By evaluating the fraud detector with XAI models such as SHAP, the problem area behind a misclassification can be located and addressed more easily.
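As a hedged illustration of the evaluation described above, the sketch below trains one of the listed models (a random forest) on synthetic, imbalanced stand-in data and reports the listed metrics; the SHAP step is indicated only as a commented-out hint, since it requires the optional shap package.

```python
# Hedged sketch: train one of the fraud-detection models named in the abstract
# (a random forest) and evaluate it with the listed metrics. The data here is
# synthetic and imbalanced purely to stand in for real credit card transactions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02],
                           random_state=0)  # roughly 2% "fraud" class
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("Accuracy:", accuracy_score(y_test, pred))
print("F1-score:", f1_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, proba))
print("Confusion matrix:\n", confusion_matrix(y_test, pred))

# Optional: inspect misclassifications with SHAP, as the abstract suggests.
# import shap
# shap_values = shap.TreeExplainer(model).shap_values(X_test)
```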
14

Comparing Human Reasoning and Explainable AI

Helgstrand, Carl Johan, Hultin, Niklas January 2022 (has links)
Explainable AI (XAI) is a research field dedicated to finding ways to breach the black-box nature of many of today's machine learning models. As society finds new ways of applying these models in everyday life, certain risk thresholds are crossed when society replaces human decision making with autonomous systems. How can we trust the algorithms to make sound judgements when all we provide is input and all they provide is output? XAI methods examine different data points in the machine learning process to determine what factors influenced the decision making. While these methods of post-hoc explanation may provide certain insights, previous studies into XAI have found the designs to often be biased towards the designers and to not incorporate the interdisciplinary fields necessary to improve user understanding. In this thesis, we look at animal classification and which features in animal images were found to be important by humans. We use a novel approach of first letting the participants create their own post-hoc explanations before asking them to evaluate real XAI explanations as well as a pre-made human explanation generated from a test group. The results show strong cohesion in the participants' answers and can provide guidelines for designing XAI explanations more closely related to human reasoning. The data also indicates a preference for human-like explanations within the context of this study. Additionally, a potential bias was identified, as participants preferred explanations marking large portions of an image as important, even if many of the marked areas coincided with what the participants themselves considered to be unimportant. While the sample pool and data-gathering tools are limiting, the results point toward a need for additional research into comparing human reasoning with XAI explanations and into how this may affect the evaluation of, and bias towards, explanation methods.
15

What do you mean? : The consequences of different stakeholders’ logics in machine learning and how disciplinary differences should be managed within an organization

Eliasson, Nina January 2022 (has links)
This research paper identifies the disciplinary differences of stakeholders and their effects on cross-functional work in the context of machine learning. The study specifically focused on 1) how stakeholders with disciplinary differences interpret a search system, and 2) how the multiple disciplines should be managed in an organization. This was studied through 12 interviews with stakeholders from design disciplines, product management, data science, and machine learning engineering, followed by a focus group with a participant from each of the different disciplines. The findings were analyzed through thematic analysis and the lens of institutional logics, and the study concluded that the different logics had a high impact on the stakeholders' understanding of the search system. The research also concluded that bridging the gap between the multi-disciplinary stakeholders is of high importance in the context of machine learning.
16

Developing a highly accurate, locally interpretable neural network for medical image analysis

Ventura Caballero, Rony David January 2023 (has links)
Background: Machine learning techniques, such as convolutional networks, have shown promise in medical image analysis, including the detection of pediatric pneumonia. However, the interpretability of these models is often lacking, compromising their trustworthiness and acceptance in medical applications. The interpretability of machine learning models in medical applications is crucial for trust and bias identification. Aim: The aim is to create a locally interpretable neural network that performs comparably to black-box models while being inherently interpretable, enhancing trust in medical machine learning models. Method: An MLP ReLU network is trained on the Guangzhou Women and Children's Medical Center pediatric chest X-ray image dataset, and the Aletheia unwrapper is used for interpretability. A 5-fold cross-validation assesses the network's performance, measuring accuracy and F1 score; the average accuracy and F1 score are 0.90 and 0.91, respectively. To assess interpretability, the results are compared against a CNN aided by LIME and SHAP to generate explanations. Results: Despite lacking convolutional layers, the MLP network satisfactorily categorizes pneumonia images, and its explanations align with relevant areas of interest from previous studies. Moreover, compared with a state-of-the-art network aided by LIME and SHAP explanations, the local explanations prove consistent within areas of the lungs, while the post-hoc alternatives often highlight areas not relevant to the specific task. Conclusion: The developed locally interpretable neural network demonstrates promising performance and interpretability. However, additional research and implementation are required for it to outperform the so-called black-box models. In a medical setting, a more accurate model could be crucial despite the score, as it could potentially save more lives, which is the ultimate goal of healthcare.
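The Aletheia unwrapper and the X-ray data cannot be reproduced from the abstract alone; the sketch below only illustrates the stated evaluation protocol, a ReLU MLP assessed with 5-fold cross-validation on accuracy and F1, using synthetic stand-in features.

```python
# Hedged sketch of the evaluation protocol described in the abstract: a ReLU MLP
# scored with 5-fold cross-validation on accuracy and F1. Synthetic data stands
# in for the chest X-ray images (flattened to feature vectors).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=64, random_state=0)

accs, f1s = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(128, 64), activation="relu",
                        max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred))

print(f"Mean accuracy: {np.mean(accs):.2f}, mean F1: {np.mean(f1s):.2f}")
```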
17

Using XAI Tools to Detect Harmful Bias in ML Models

Virtanen, Klaus January 2022 (has links)
In the past decade, machine learning (ML) models have become far more powerful and are increasingly being used in many important contexts. At the same time, ML models have become more complex and harder to understand on their own, which has driven interest in explainable AI (XAI), a field concerned with ensuring that ML and other AI systems can be understood by human users and practitioners. One aspect of XAI is the development of "explainers", tools that take a more complex system (here: an ML model) and generate a simpler but sufficiently accurate model of this system, either globally or locally, to yield insight into the behaviour of the original system. As ML models have become more complex and prevalent, concerns that they may embody and perpetuate harmful social biases have also risen, with XAI being one proposed tool for bias detection. This paper investigates the ability of two explainers, LIME and SHAP, which explain the predictions of potentially more complex models by way of locally faithful linear models, to detect harmful social bias (here in the form of the influence of the racial makeup of a neighbourhood on property values), in a simple experiment involving two kinds of ML models, linear regression and an ensemble method, trained on the well-known Boston housing dataset. The results show that LIME and SHAP appear to be helpful in bias detection, while also revealing an instance where the explanations do not quite reflect the inner workings of the model yet still yield accurate insight into the predictions the model makes.
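As a hedged sketch of the experimental mechanics (not the thesis code), the example below explains a single prediction of an ensemble regressor with LIME and SHAP. The Boston housing dataset has been removed from recent scikit-learn releases, so the California housing data stands in purely to show the API usage; it lacks the racial feature the thesis studies, and the lime and shap packages are assumed to be installed.

```python
# Hedged sketch: explain one prediction of a tree ensemble with LIME and SHAP.
# California housing stands in for the Boston data, which recent scikit-learn
# releases no longer ship.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

data = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

linear = LinearRegression().fit(X_train, y_train)          # first model type in the thesis
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME: fit a locally faithful linear surrogate around one test instance.
lime_explainer = LimeTabularExplainer(X_train, feature_names=data.feature_names,
                                      mode="regression")
lime_exp = lime_explainer.explain_instance(X_test[0], forest.predict, num_features=5)
print(lime_exp.as_list())
# The linear model can be explained the same way by passing linear.predict instead.

# SHAP: per-feature contributions for the same instance from the tree ensemble.
shap_values = shap.TreeExplainer(forest).shap_values(X_test[:1])
print(dict(zip(data.feature_names, shap_values[0])))
```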
18

Explaining Neural Networks used for PIM Cancellation / Förklarandet av Neurala Nätverk menade för PIM-elimination

Diffner, Fredrik January 2022 (has links)
Passive Intermodulation is a type of distortion affecting the sensitive receiving signals in a cellular network, and a growing problem in the telecommunication field. One way to mitigate this problem is Passive Intermodulation Cancellation, where the predicted noise in a signal is modeled with polynomials. Recent experiments using neural networks instead of polynomials to model this noise have shown promising results. However, one drawback of neural networks is their lack of explainability. In this work, we identify a suitable method that provides explanations for this use case. We apply this technique to explain the neural networks used for Passive Intermodulation Cancellation and discuss the results with domain experts. We show that the input space as well as the architecture could be altered, and propose an alternative architecture for the neural network used for Passive Intermodulation Cancellation. This alternative architecture leads to a significant reduction in trainable parameters, a finding that is valuable in a cellular network where resources are heavily constrained. When performing an explainability analysis of the alternative model, the explanations are also more in line with domain expertise.
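The abstract names neither the network nor the explanation method; as a toy, hedged illustration of the underlying modeling idea only, the sketch below fits both a polynomial model and a small neural network to a synthetic odd-order distortion standing in for PIM noise.

```python
# Hedged toy illustration of the modeling idea in the abstract: PIM-like distortion
# is traditionally modeled with polynomial terms of the transmitted signal; a small
# neural network can learn the same nonlinearity. The signal and distortion below
# are synthetic stand-ins, not actual cellular data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
tx = rng.normal(size=(5000, 1))                   # transmitted signal samples
pim = 0.3 * tx[:, 0] ** 3 + 0.1 * tx[:, 0] ** 5   # odd-order "PIM" distortion
pim += 0.01 * rng.normal(size=5000)               # receiver noise

poly_model = make_pipeline(PolynomialFeatures(degree=5), LinearRegression()).fit(tx, pim)
nn_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(tx, pim)

print("Polynomial R^2:", poly_model.score(tx, pim))
print("Neural net R^2:", nn_model.score(tx, pim))
```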
19

Towards Explainable AI Using Attribution Methods and Image Segmentation

Rocks, Garrett J 01 January 2023 (has links) (PDF)
With artificial intelligence (AI) becoming ubiquitous in a broad range of application domains, the opacity of deep learning models remains an obstacle to adoption within safety-critical systems. Explainable AI (XAI) aims to build trust in AI systems by revealing important inner mechanisms of what human users have treated as a black box. This thesis specifically aims to improve the transparency and trustworthiness of deep learning algorithms by combining attribution methods with image segmentation methods, and has the potential to improve the trust and acceptance of AI systems, leading to more responsible and ethical AI applications. An exploratory algorithm called ESAX is introduced and shown, in some cases, to outperform other top attribution methods in PIC testing. These results lay a foundation for future work in segmentation attribution.
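ESAX itself is not described in the abstract; as a hedged sketch of the general idea of combining attribution with segmentation, the example below averages a (random, stand-in) pixel attribution map over SLIC superpixels so that whole regions, rather than scattered pixels, are ranked as important.

```python
# Hedged sketch of attribution-plus-segmentation: pixel-level attributions are
# averaged over superpixels. The image and the attribution map are random
# stand-ins; this is not the ESAX algorithm itself.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))        # stand-in RGB image
attribution = rng.random((64, 64))     # stand-in pixel-level attribution map

# Partition the image into superpixels, then average the attribution per segment.
segments = slic(image, n_segments=50, compactness=10, start_label=0)
segment_scores = {s: attribution[segments == s].mean() for s in np.unique(segments)}

# Region-level importance map: every pixel takes its segment's mean attribution.
region_importance = np.vectorize(segment_scores.get)(segments)

top = sorted(segment_scores, key=segment_scores.get, reverse=True)[:5]
print("Most important segments:", top)
print("Region importance map shape:", region_importance.shape)
```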
20

Interpretable Outlier Detection in Financial Data : Implementation of Isolation Forest and Model-Specific Feature Importance

Söderström, Vilhelm, Knudsen, Kasper January 2022 (has links)
Market manipulation has increased in line with the number of active players in the financial markets. The most common methods for monitoring financial markets are rule-based systems, which are limited to previous knowledge of market manipulation. This work was carried out in collaboration with the company Scila, which provides surveillance solutions for the financial markets. In this thesis, we try to implement a complementary method to Scila's pre-existing rule-based systems to objectively detect outliers in all available data and present the results on suspect transactions and customer behavior to an operator. Thus, the method needs to detect outliers and show the operator why a particular market participant is considered an outlier; the outlier detection method needs to implement interpretability. This led us to formulate our research question as: How can an outlier detection method be implemented as a tool for a market surveillance operator to identify potential market manipulation outside Scila's rule-based systems? Two models, an outlier detection model, Isolation Forest, and a feature importance model (MI-Local-DIFFI and its subset Path Length Indicator), were chosen to fulfill the purpose of the study. The study used three datasets: two synthetic datasets, one scattered and one clustered, and one dataset from Scila. The results show that Isolation Forest has an excellent ability to find outliers in the various data distributions we investigated. We used a feature importance model to make Isolation Forest's scoring of outliers interpretable. Our intention was that the feature importance model would specify how important different features were in the process of an observation being defined as an outlier. Our results have a relatively high degree of interpretability for the scattered dataset but worse for the clustered dataset. The Path Length Indicator achieved better performance than MI-Local-DIFFI for both datasets. We noticed that the chosen feature importance model is limited by the process of how Isolation Forest isolates an outlier.
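MI-Local-DIFFI and the Scila data are specific to the thesis and not reproduced here; as a hedged sketch of the detection step only, the example below fits scikit-learn's IsolationForest to synthetic trade-like data and flags the lowest-scoring observations for an operator to review.

```python
# Hedged sketch of the detection step described in the abstract: an Isolation
# Forest scores observations by how easily they are isolated, and the lowest
# scores are flagged as outliers. Synthetic trade-like data stands in for the
# Scila dataset; the MI-Local-DIFFI feature-importance step is not reproduced.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_trades = rng.normal(loc=[100.0, 10.0], scale=[5.0, 2.0], size=(2000, 2))
manipulated = rng.normal(loc=[160.0, 40.0], scale=[5.0, 2.0], size=(10, 2))
X = np.vstack([normal_trades, manipulated])   # columns: price, volume (hypothetical)

forest = IsolationForest(n_estimators=200, contamination=0.01, random_state=0).fit(X)
scores = forest.score_samples(X)              # lower score = more anomalous
flagged = np.argsort(scores)[:10]             # indices an operator would review
print("Flagged observations:", flagged)
```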
