About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Generating an Interpretable Ranking Model: Exploring the Power of Local Model-Agnostic Interpretability for Ranking Analysis

Galera Alfaro, Laura January 2023 (has links)
Machine learning has revolutionized recommendation systems by employing ranking models for personalized item suggestions. However, the complexity of learning-to-rank (LTR) models poses challenges in understanding the underlying reasons contributing to the ranking outcomes. This lack of transparency raises concerns about potential errors, biases, and ethical implications. To address these issues, interpretable LTR models have emerged as a solution. Currently, the state of the art for interpretable LTR models is led by generalized additive models (GAMs). However, ranking GAMs face limitations in terms of computational intensity and handling high-dimensional data. To overcome these drawbacks, post-hoc methods, including local interpretable model-agnostic explanations (LIME), have been proposed as potential alternatives. Nevertheless, a quantitative evaluation comparing the efficacy of post-hoc methods to state-of-the-art ranking GAMs remains largely unexplored. This study investigates the capabilities and limitations of LIME in approximating a complex ranking model with a surrogate model. The proposed methodology is an experimental approach: a neural ranking GAM, trained on two benchmark information retrieval datasets, serves as the ground truth for evaluating LIME’s performance. The study adapts LIME to the ranking context by translating the problem into a classification task and assesses three different sampling strategies with respect to the prevalence of imbalanced data and their influence on the correctness of LIME’s explanations. The findings contribute to understanding the limitations of LIME in the context of ranking: the study analyzes the low similarity between LIME’s explanations and those generated by the ranking model, highlighting the need to develop more robust sampling strategies specific to ranking. Additionally, the study emphasizes the importance of developing appropriate evaluation metrics for assessing the quality of explanations in ranking tasks.
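A minimal sketch of the kind of setup this abstract describes: a black-box ranking score is recast as a binary classification problem ("relevant enough" vs. not) so that LIME's tabular explainer can be applied as a local surrogate. The ranking function, feature names, and relevance threshold below are illustrative assumptions, not the thesis's actual models or datasets.

```python
# Sketch: explaining a black-box ranker with LIME by recasting ranking as
# binary classification. The ranking function and data are synthetic stand-ins.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 5))                   # item feature vectors
feature_names = [f"f{i}" for i in range(5)]           # assumed feature names

def black_box_rank_score(X):
    """Stand-in for a learned ranking model's relevance score."""
    return X @ np.array([0.8, -0.3, 0.5, 0.0, 0.2])

threshold = np.median(black_box_rank_score(X_train))  # "top half" = relevant

def predict_proba(X):
    """Map ranking scores to pseudo-probabilities of the 'relevant' class."""
    p = 1.0 / (1.0 + np.exp(-(black_box_rank_score(X) - threshold)))
    return np.column_stack([1.0 - p, p])

explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["not relevant", "relevant"], mode="classification",
)
exp = explainer.explain_instance(X_train[0], predict_proba, num_features=5)
print(exp.as_list())   # local feature attributions for this item's ranking
```

The quality of such a surrogate depends heavily on how the perturbed samples are drawn, which is exactly the sampling-strategy issue the study highlights.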
12

Interactive or textual explanations? Evaluation of two explanatory approaches within in-vehicle adaptive systems

Carollo, Gabriele January 2023 (has links)
Adaptive systems have become an essential resource for accomplishing everyday tasks, meeting users’ needs through personalized adaptations. Such systems are usually based on models that rely on complex artificial intelligence algorithms, which makes the system’s behavior appear opaque to users. Explanations of the system’s behavior are therefore needed to improve transparency. In this study, two explanatory approaches, text-based and interactive, are evaluated in the context of three in-vehicle adaptive systems. The research assesses the potential of interactive explanations compared with text-based ones in terms of acceptance of the system, trust in the system, and user experience (UX). To this end, a real-world driving study with 38 participants was conducted. The results indicate no significant differences in the three variables between the two explanatory approaches, although a slight advantage for the text-based concept is recognizable. The UX and trust results nevertheless encourage further exploration of interactive explanatory approaches. The research also revealed drivers’ need for control over the adaptations and the necessity of tailoring the explanatory approach to the explanation design pattern used and the adaptive system targeted. These findings encourage future research on the design of user-centered in-vehicle adaptive systems.
13

Comparison of Logistic Regression and an Explained Random Forest in the Domain of Creditworthiness Assessment

Ankaräng, Marcus, Kristiansson, Jakob January 2021 (has links)
As the use of AI in society develops, the requirement for explainable algorithms has increased. A challenge with many modern machine learning algorithms is that, due to their often complex structures, they lack the ability to produce human-interpretable explanations. Research within explainable AI has resulted in methods that can be applied on top of non-interpretable models to explain their decision bases. The aim of this thesis is to compare an unexplained machine learning model used in combination with an explanatory method, and a model that is explainable through its inherent structure. Random forest was the unexplained model in question and the explanatory method was SHAP. The explainable model was logistic regression, which is explanatory through its feature weights. The comparison was conducted within the area of creditworthiness and was based on predictive performance and explainability. Furthermore, the thesis uses these models to investigate what characterizes loan applicants who are likely to default. The comparison showed that neither model performed significantly better than the other in terms of predictive performance. Characteristics of bad loan applicants differed between the two algorithms. Three important aspects were the applicant’s age, where they lived, and whether they had a residential phone. Regarding explainability, several advantages of SHAP were observed. With SHAP, explanations can be produced on both a local and a global level. SHAP also offers a way to take advantage of the high performance of many modern machine learning algorithms while fulfilling today’s increased requirements for transparency.
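As a hedged illustration of the comparison described above — an inherently interpretable logistic regression versus a random forest explained post hoc with SHAP — the following sketch uses synthetic data; the features and dataset are placeholders, not the thesis's credit data.

```python
# Sketch: interpretable-by-design model (logistic regression weights) vs.
# post-hoc SHAP explanations of a random forest, on synthetic data.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: the coefficients double as global explanations.
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("logistic regression coefficients:", logreg.coef_[0])

# Black-box model + post-hoc explainer: SHAP gives local and global attributions.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
raw = shap.TreeExplainer(forest).shap_values(X_test)
# Depending on the shap version, the result is a list with one array per class
# or a single 3-D array; either way, keep the positive-class attributions.
vals = raw[1] if isinstance(raw, list) else np.asarray(raw)[..., 1]

print("local SHAP values (first test applicant):", vals[0])
print("global mean |SHAP| per feature:", np.abs(vals).mean(axis=0))
```

The contrast the thesis draws shows up directly here: the coefficients give one fixed global ranking, while SHAP yields a per-applicant attribution that can also be aggregated into a global view.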
14

Explainable AI for supporting operators in manufacturing machines maintenance: Evaluating different techniques of explainable AI for a machine learning model that can be used in a manufacturing environment

Di Flumeri, Francesco January 2022 (has links)
Monitoring and predicting machine breakdowns is of vital importance in the manufacturing industry. Machine learning models could be used to improve these breakdown predictions. However, the operators responsible for the machines need to trust and understand the predictions in order to base their decisions on the information. For this reason, Explainable Artificial Intelligence (XAI) was introduced; it refers to artificial intelligence systems that can provide predictions in an intelligible and trustworthy form. Hence, the purpose of this research is to study different XAI techniques in order to discover the most suitable approach for allowing people without a machine learning background, employed in a manufacturing environment, to understand and trust predictions. Four XAI interfaces were tested: three based on integrated XAI techniques identified through a literature review, and one presenting an experimental XAI facility based on a machine learning model for outlier identification. Classifiers based on Random Forest were built to predict future machine states, while a model based on Isolation Forest was built to identify anomalies. In addition, a user study was carried out to discern end-users’ perspectives on the four XAI interfaces. The final results showed that the XAI interface based on anomalous production values gained high approval among users with no or only basic machine learning knowledge.
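A minimal sketch of the two model types mentioned in this abstract — a Random Forest classifier for predicting machine state and an Isolation Forest for flagging anomalous readings — using synthetic sensor data as a stand-in for the thesis's manufacturing data.

```python
# Sketch: Random Forest for machine-state prediction plus Isolation Forest for
# anomaly flagging, on synthetic "sensor" data standing in for real machine logs.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                      # e.g. temperature, vibration, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)     # 1 = "breakdown likely"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised model: predicts the future machine state from sensor features.
state_clf = RandomForestClassifier(n_estimators=200, random_state=0)
state_clf.fit(X_train, y_train)
print("breakdown probability, first test sample:",
      state_clf.predict_proba(X_test[:1])[0, 1])

# Unsupervised model: scores how anomalous a production reading is.
anomaly_model = IsolationForest(contamination=0.02, random_state=0).fit(X_train)
flags = anomaly_model.predict(X_test)               # -1 = anomaly, 1 = normal
print("anomalous readings in test set:", int((flags == -1).sum()))
```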
15

Evolving Rule Based Explainable Artificial Intelligence for Decision Support System of Unmanned Aerial Vehicles

Keneni, Blen M. 14 December 2018 (has links)
No description available.
16

Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection

Hammarström, Tobias January 2020 (has links)
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes the CAV sensitivities to introduce and perform a sensitivity-magnitude analysis. Both methods proved useful in discerning between the model’s two decision-making strategies, based on either the animal or the noise. However, greater insight into the intricacies of said strategies is desired. Additionally, the methods provided a deeper understanding of the model’s learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise regarding the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods to more complex areas with specialized models and tasks, e.g. oral cancer.
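A rough, self-contained sketch of the core TCAV computation referred to above: fit a linear classifier separating "concept" activations from random activations, take its weight vector as the concept activation vector (CAV), and score how often the model's gradient at that layer points in the concept's direction. The activations and gradients below are synthetic placeholders rather than a real network's.

```python
# Sketch of TCAV's core step: learn a concept activation vector (CAV) from
# layer activations, then compute a TCAV score from directional derivatives.
# Activations and gradients are random placeholders for a real CNN's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64                                                # layer width (assumed)

concept_acts = rng.normal(loc=0.5, size=(200, d))     # activations for concept images
random_acts = rng.normal(loc=0.0, size=(200, d))      # activations for random images

# Linear classifier separating concept vs. random; its weight vector is the CAV.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# Gradients of the class logit w.r.t. the layer activations for test inputs
# (obtained by backprop in practice; random here for illustration).
grads = rng.normal(size=(100, d))

# CAV sensitivity = directional derivative along the CAV; TCAV score = fraction
# of inputs with positive sensitivity. The sensitivity magnitudes are what the
# study's additional magnitude analysis would examine.
sensitivities = grads @ cav
print("TCAV score:", float((sensitivities > 0).mean()))
print("mean |sensitivity|:", float(np.abs(sensitivities).mean()))
```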
17

Interpretation of Dimensionality Reduction with Supervised Proxies of User-defined Labels

Leoni, Cristian January 2021 (has links)
Research on machine learning (ML) explainability has received considerable attention in recent years. This interest, however, has mostly focused on supervised models, while other ML fields have not received the same level of attention. Despite its usefulness in a variety of fields, explainability for unsupervised learning is still an open issue. In this paper, we present a Visual Analytics framework based on eXplainable AI (XAI) methods to support the interpretation of dimensionality reduction (DR) methods. The framework provides the user with an interactive and iterative process to investigate and explain user-perceived patterns for a variety of DR methods, using XAI methods to explain a supervised model trained on the selected data. To evaluate the effectiveness of the proposed solution, we focus on two main aspects: the quality of the visualization and the quality of the explanation. This challenge is tackled using both quantitative and qualitative methods, and due to the lack of pre-existing test data, a new benchmark has been created. The quality of the visualization is established using a well-known survey-based methodology, while the quality of the explanation is evaluated using both case studies and a controlled experiment, where the accuracy of the generated explanations is evaluated on the proposed benchmark. The results show a strong capacity of our framework to generate accurate explanations, with an accuracy of 89% in the controlled experiment. The explanations generated for the two case studies agreed closely with ground truths established in pre-existing, well-known literature. Finally, the user experiment yielded high overall scores for all assessed aspects of the visualization.
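A hedged sketch of the general idea — explaining a user-perceived pattern in a DR projection by training a supervised proxy on the user's labels over the original features and reading that proxy's feature attributions. The PCA projection, the stand-in "user labels", and the permutation-importance explanation below are illustrative choices, not the framework's actual components.

```python
# Sketch: interpret a dimensionality-reduction projection by training a
# supervised proxy on user-defined labels and explaining that proxy.
# KMeans clusters stand in for labels a user would assign interactively.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_iris()
X, feature_names = data.data, data.feature_names

# 1) Project the data (the user inspects this 2-D view for patterns).
embedding = PCA(n_components=2, random_state=0).fit_transform(X)

# 2) Stand-in for user-defined labels of the patterns seen in the projection.
user_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

# 3) Supervised proxy: predict the user's labels from the ORIGINAL features.
proxy = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, user_labels)

# 4) Explain the proxy; the attributions describe which original features
#    drive the pattern the user saw in the projection.
imp = permutation_importance(proxy, X, user_labels, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```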
18

Improving nuclear medicine with deep learning and explainability: two real-world use cases in parkinsonian syndrome and safety dosimetry

Nazari, Mahmood 17 March 2022 (has links)
Computer vision in the area of medical imaging has rapidly improved during recent years as a consequence of developments in deep learning and explainability algorithms. In addition, imaging in nuclear medicine is becoming increasingly sophisticated, with the emergence of targeted radiotherapies that enable treatment and imaging on a molecular level (“theranostics”), where radiolabeled targeted molecules are directly injected into the bloodstream. Based on our recent work, we present two use cases in nuclear medicine: first, the impact of automated organ segmentation required for personalized dosimetry in patients with neuroendocrine tumors, and second, purely data-driven identification and verification of brain regions for diagnosis of Parkinson’s disease. A convolutional neural network was used for automated organ segmentation on computed tomography images. The segmented organs were used to calculate the energy deposited in the organs at risk for patients treated with a radiopharmaceutical. Our method resulted in faster and cheaper dosimetry and differed by only 7% from dosimetry performed by two medical physicists. The identification of brain regions, in turn, was analyzed on dopamine-transporter single-photon emission computed tomography images using a convolutional neural network and an explainability method, the layer-wise relevance propagation algorithm. Our findings confirm that the extra-striatal brain regions, i.e., insula, amygdala, ventromedial prefrontal cortex, thalamus, anterior temporal cortex, superior frontal lobe, and pons, contribute to the interpretation of images beyond the striatal regions. In current common diagnostic practice, however, only the striatum is the reference region, while extra-striatal regions are neglected. We further demonstrate that deep learning-based diagnosis combined with an explainability algorithm can be recommended to support interpretation of this image modality in clinical routine for parkinsonian syndromes, with a total computation time of three seconds, which is compatible with a busy clinical workflow. Overall, this thesis shows for the first time that deep learning with explainability can achieve results competitive with human performance and generate novel hypotheses, thus paving the way towards improved diagnosis and treatment in nuclear medicine.
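A compact, self-contained sketch of the layer-wise relevance propagation (LRP) idea mentioned above, using the ε-rule on a tiny fully connected ReLU network with random weights. Real applications propagate relevance through a trained CNN; this toy example only illustrates the redistribution rule and its conservation property.

```python
# Toy sketch of layer-wise relevance propagation (epsilon rule) on a small
# fully connected ReLU network with random weights; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)      # input (8) -> hidden (16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)       # hidden (16) -> output (1)

def forward(x):
    a1 = np.maximum(0.0, x @ W1 + b1)                # ReLU hidden activations
    out = a1 @ W2 + b2                               # scalar "evidence" score
    return a1, out

def lrp_epsilon(x, eps=1e-6):
    """Redistribute the output score back onto the input features."""
    a1, out = forward(x)
    # Output layer -> hidden layer
    z2 = a1[:, None] * W2                            # hidden-unit contributions
    r1 = (z2 / (z2.sum(axis=0, keepdims=True) + eps)) @ out
    # Hidden layer -> input layer
    z1 = x[:, None] * W1                             # input-feature contributions
    r0 = (z1 / (z1.sum(axis=0, keepdims=True) + eps)) @ r1
    return r0                                        # relevance per input feature

x = rng.normal(size=8)
relevance = lrp_epsilon(x)
_, score = forward(x)
print("output score:", float(score[0]))
print("sum of input relevances:", float(relevance.sum()))  # ~ output (conservation)
```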
19

Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data

Hazarika, Subhashis January 2019 (has links)
No description available.
20

Counterfactual and Causal Analysis for AI-based Modulation and Coding Scheme Selection

Hao, Kun January 2023 (has links)
Artificial Intelligence (AI) has emerged as a transformative force in wireless communications, driving innovation to address the complex challenges faced by communication systems. In this context, the optimization of limited radio resources plays a crucial role, and one important aspect is Modulation and Coding Scheme (MCS) selection. AI solutions for MCS selection have predominantly been black-box models, which suffer from limited explainability and consequently hinder trust in these algorithms. Moreover, the majority of existing research primarily emphasizes enhancing explainability without concurrently improving the model’s performance, which turns performance and explainability into a trade-off. This work aims to address these issues by employing eXplainable AI (XAI), particularly counterfactual and causal analysis, to increase the explainability and trustworthiness of black-box models. We propose CounterFactual Retrain (CF-Retrain), the first method that utilizes counterfactual explanations to improve model performance and make the process of performance enhancement more explainable. Additionally, we conduct a causal analysis and compare the results with those obtained from an analysis based on SHapley Additive exPlanations (SHAP) feature importance. This comparison leads to novel hypotheses and insights for model optimization in future research. Our results show that employing CF-Retrain can reduce the Mean Absolute Error (MAE) of the black-box model by 4% while utilizing only 14% of the training data. Moreover, increasing the amount of training data yields even more pronounced improvements in MAE, while providing a certain level of explainability. This performance enhancement is comparable or even superior to using a more complex model. Furthermore, by introducing causal analysis alongside the mainstream SHAP feature importance, we provide a novel hypothesis and explanation of feature importance based on causal analysis. This approach can serve as an evaluation criterion for assessing the model’s performance.
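The abstract does not spell out CF-Retrain's algorithm. As a loose, hedged illustration of the general idea of using counterfactual examples to augment training data, the sketch below runs a simple greedy counterfactual search against a regression model on synthetic data and retrains on the augmented set; it is a simplified stand-in, not the thesis's method.

```python
# Loose sketch of "retrain with counterfactuals": find small feature changes
# that move high-error predictions toward their targets, add those synthetic
# points to the training set, and refit. Synthetic data; not the thesis's method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=400)   # toy link-quality target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def greedy_counterfactual(x, target, steps=10, step_size=0.1):
    """Greedily perturb features so the model's prediction approaches `target`."""
    cf = x.copy()
    for _ in range(steps):
        best_gap, best_cf = abs(model.predict(cf[None])[0] - target), cf
        for j in range(len(cf)):
            for delta in (-step_size, step_size):
                cand = cf.copy()
                cand[j] += delta
                gap = abs(model.predict(cand[None])[0] - target)
                if gap < best_gap:
                    best_gap, best_cf = gap, cand
        cf = best_cf
    return cf

# Build counterfactuals for the worst-predicted training points...
errors = np.abs(model.predict(X) - y)
worst = np.argsort(errors)[-10:]
cf_X = np.array([greedy_counterfactual(X[i], y[i]) for i in worst])
cf_y = y[worst]                       # assumption: counterfactuals keep the target label

# ...and retrain on the augmented data.
model_retrained = RandomForestRegressor(n_estimators=100, random_state=0)
model_retrained.fit(np.vstack([X, cf_X]), np.concatenate([y, cf_y]))
print("MAE before:", float(np.mean(np.abs(model.predict(X) - y))))
print("MAE after :", float(np.mean(np.abs(model_retrained.predict(X) - y))))
```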
