About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Generating an Interpretable Ranking Model: Exploring the Power of Local Model-Agnostic Interpretability for Ranking Analysis

Galera Alfaro, Laura January 2023
Machine learning has revolutionized recommendation systems by employing ranking models for personalized item suggestions. However, the complexity of learning-to-rank (LTR) models poses challenges in understanding the underlying reasons contributing to the ranking outcomes. This lack of transparency raises concerns about potential errors, biases, and ethical implications. To address these issues, interpretable LTR models have emerged as a solution. Currently, the state of the art for interpretable LTR models is led by generalized additive models (GAMs). However, ranking GAMs face limitations in terms of computational intensity and handling high-dimensional data. To overcome these drawbacks, post-hoc methods, including local interpretable model-agnostic explanations (LIME), have been proposed as potential alternatives. Nevertheless, a quantitative evaluation comparing the efficacy of post-hoc methods to state-of-the-art ranking GAMs remains largely unexplored. This study investigates the capabilities and limitations of LIME in approximating a complex ranking model with a surrogate model. The proposed methodology is an experimental approach: a neural ranking GAM, trained on two benchmark information retrieval datasets, serves as the ground truth for evaluating LIME's performance. The study adapts LIME to the ranking context by translating the problem into a classification task and assesses three different sampling strategies against the prevalence of imbalanced data and their influence on the correctness of LIME's explanations. The findings contribute to understanding the limitations of LIME in the context of ranking: the study analyzes the low similarity between LIME's explanations and those generated by the ranking model, highlighting the need for more robust sampling strategies specific to ranking. Additionally, the study emphasizes the importance of developing appropriate evaluation metrics for assessing the quality of explanations in ranking tasks.
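To make the adaptation concrete, the following is a minimal LIME-style sketch of the idea described above: a black-box ranker's scores are translated into a binary classification target, and a locally weighted linear surrogate is fitted around one instance. The gradient boosted regressor standing in for the neural ranking GAM, the median-score threshold, and the Gaussian perturbation and kernel choices are all illustrative assumptions, not the thesis's actual setup.

```python
# Minimal LIME-style sketch: explain a black-box ranking scorer locally after
# recasting ranking as binary classification ("scores above the median cutoff").
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                              # stand-in query-document features
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)    # stand-in relevance scores

ranker = GradientBoostingRegressor().fit(X, y)             # proxy for the neural ranking GAM
threshold = np.median(ranker.predict(X))                   # rank-to-class cutoff (assumption)

def lime_explain(x0, n_samples=1000, kernel_width=0.75):
    """Fit a locality-weighted linear surrogate around x0 (core LIME idea)."""
    Z = x0 + rng.normal(scale=0.5, size=(n_samples, x0.size))       # perturb around x0
    labels = (ranker.predict(Z) >= threshold).astype(float)          # ranking -> class label
    weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / kernel_width ** 2)  # locality kernel
    surrogate = Ridge(alpha=1.0).fit(Z, labels, sample_weight=weights)
    return surrogate.coef_                                            # local feature attributions

print(lime_explain(X[0]))
```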
52

Human In Command Machine Learning

Holmberg, Lars January 2021
Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The area develops rapidly, and exciting unexplored design spaces are constantly laid bare. The focus of this work is one of these areas: ML systems where decisions concerning ML model training, usage, and selection of the target domain lie in the hands of domain experts. This work thus concerns ML systems that function as tools that augment and/or enhance human capabilities. The approach presented is denoted Human In Command ML (HIC-ML) systems. To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and battery sorting. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments are used as illustrative examples to explore settings where domain experts potentially can independently train an ML model and, in an iterative fashion, interact with it, interpret it, and understand its decisions. HIC-ML should be seen as a governance principle that focuses on adding value and meaning for users. In this work, concrete application areas are presented and discussed. To open up for designing ML-based products in the area, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented by imposing structure and rigidity derived from scientific explanations. Together, this opens up for a contextual shift in ML and makes new application areas probable, areas that naturally couple the usage of AI technology to human virtues and that, potentially, as a consequence, can result in a democratisation of the usage of and knowledge concerning this powerful technology.
53

Robot Proficiency Self-Assessment Using Assumption-Alignment Tracking

Cao, Xuan 01 April 2024
A robot is proficient if its performance on its task(s) satisfies a specific standard. While the design of autonomous robots often emphasizes such proficiency, another important attribute of autonomous robot systems is their ability to evaluate their own proficiency. A robot should be able to conduct proficiency self-assessment (PSA), i.e., assess how well it can perform a task before, during, and after it has attempted the task. We propose the assumption-alignment tracking (AAT) method, which provides time-indexed assessments of the veracity of robot generators' assumptions, for designing autonomous robots that can effectively evaluate their own performance. AAT can be considered a general framework for using robot sensory data to extract useful features, which are then used to build data-driven PSA models. We develop various AAT-based data-driven approaches to PSA from different perspectives. First, we use AAT for estimating robot performance. AAT features encode how the robot's current running condition varies from the normal condition, which correlates with the deviation between the robot's current performance and normal performance. We use the k-nearest neighbor algorithm to model that correlation. Second, AAT features are used for anomaly detection. We treat anomaly detection as a one-class classification problem where only data from the robot operating in normal conditions are used in training, decreasing the burden of acquiring data in various abnormal conditions. The cluster boundary of data points from normal conditions, which serves as the decision boundary between normal and abnormal conditions, can be identified by mainstream one-class classification algorithms. Third, we improve PSA models that predict robot success/failure by introducing meta-PSA models that assess the correctness of PSA models. The probability that a PSA model's prediction is correct is conditioned on four features: (1) the mean distance from a test sample to its nearest neighbors in the training set; (2) the predicted probability of success made by the PSA model; (3) the ratio between the robot's current performance and its performance standard; and (4) the percentage of the task the robot has already completed. Meta-PSA models trained on the four features using a random forest algorithm improve PSA models with respect to both discriminability and calibration. Finally, we explore how AAT can be used to generate a new type of explanation of robot behavior/policy from the perspective of a robot's proficiency. AAT provides three pieces of information for explanation generation: (1) veracity assessments of the assumptions on which the robot's generators rely; (2) a proficiency assessment measured by the probability that the robot will successfully accomplish its task; and (3) counterfactual proficiency assessments computed with the veracity of some assumptions varied hypothetically. The information provided by AAT fits the situation awareness-based framework for explainable artificial intelligence. The efficacy of AAT is comprehensively evaluated using robot systems with a variety of robot types, generators, hardware, and tasks, including a simulated robot navigating a maze-based (discrete-time) Markov chain environment, a simulated robot navigating a continuous environment, and both a simulated and a real-world robot arranging blocks of different shapes and colors in a specific order on a table.
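As a rough illustration of how AAT features can feed data-driven PSA models, the sketch below pairs a k-nearest-neighbor performance estimator with a one-class classifier trained only on nominal runs. The feature layout, the synthetic data, and the choice of a one-class SVM are assumptions made for the example; the thesis evaluates these ideas on actual robot systems.

```python
# Hedged sketch of two AAT-style uses of assumption-veracity features:
# (1) kNN estimation of performance, (2) one-class anomaly detection trained on nominal data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
aat_nominal = rng.normal(0.0, 0.3, size=(400, 6))            # AAT features, normal conditions
perf_nominal = 1.0 - np.abs(aat_nominal).mean(axis=1)         # stand-in performance measure

# (1) performance estimation: deviation in AAT features correlates with performance deviation
perf_model = KNeighborsRegressor(n_neighbors=5).fit(aat_nominal, perf_nominal)

# (2) anomaly detection as one-class classification using nominal data only
detector = OneClassSVM(nu=0.05, gamma="scale").fit(aat_nominal)

aat_now = rng.normal(1.5, 0.3, size=(1, 6))                  # a run with violated assumptions
print("predicted performance:", perf_model.predict(aat_now)[0])
print("abnormal condition?", detector.predict(aat_now)[0] == -1)
```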
54

Investigating the Use of Deep Learning Models for Transactional Underwriting / En Undersökning av Djupinlärningsmodeller för Transaktionell Underwriting

Tober, Samuel January 2022
Tabular data is the most common form of data and is abundant throughout crucial industries such as banks, hospitals, and insurance companies. Yet deep learning research has largely been dominated by applications to homogeneous data, e.g. images or natural language. Inspired by the great success of deep learning in these domains, recent efforts have been made to tailor deep learning architectures for tabular data. In this thesis, two such models are selected and tested in the context of transactional underwriting. Specifically, the two models are evaluated in terms of predictive performance, interpretability, and complexity, to ultimately see if they can compete with gradient boosted tree models and live up to industry requirements. Moreover, the pre-training capabilities of the deep learning models are tested through transfer learning experiments across different markets. It is concluded that the two models are able to outperform the benchmark gradient boosted tree model in terms of RMSE, and moreover, pre-training across markets gives a statistically significant improvement in RMSE at the 0.05 level. Furthermore, using SHAP together with model-specific explainability methods, it is concluded that the two deep learning models' explainability is on par with that of gradient boosted tree models.
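For orientation, the sketch below shows the kind of gradient boosted tree benchmark plus SHAP attribution pipeline that such a comparison rests on. The synthetic dataset, the use of scikit-learn's GradientBoostingRegressor, and the omission of the deep tabular models themselves are assumptions made purely for illustration.

```python
# Hedged sketch: SHAP attributions for a gradient boosted tree benchmark on tabular data.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10))                                        # stand-in transactional features
y = X[:, 0] * 2.0 + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)   # stand-in margin target

gbt = GradientBoostingRegressor().fit(X, y)

explainer = shap.TreeExplainer(gbt)              # tree-path SHAP for tree ensembles
shap_values = explainer.shap_values(X[:100])     # per-feature attributions for the first 100 rows

# global importance view: mean absolute SHAP value per feature
print(np.abs(shap_values).mean(axis=0))
```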
55

Evolutionary Belief Rule based Explainable AI to Predict Air Pollution

Zisad, Sharif Noor January 2023
This thesis presents a novel approach to making Artificial Intelligence (AI) more explainable by using a Belief Rule Based Expert System (BRBES). A BRBES is a type of expert system that can handle both qualitative and quantitative information under uncertainty and incompleteness by using if-then rules with belief degrees. The BRBES can model the human inference process and provide transparent and interpretable reasoning for its decisions. However, designing a BRBES requires tuning several parameters, such as the rule weights, the belief degrees, and the inference parameters. To address this challenge, this thesis proposes to use a Differential Evolution (DE) algorithm to optimize these parameters automatically. DE variants such as BRB adaptive DE (BRBaDE) and joint optimization of BRB are metaheuristics that optimize a problem by iteratively creating new candidate solutions from combinations of existing ones according to simple formulae. The DE algorithm does not require any prior knowledge of the problem or its gradient and can handle complex optimization problems with multiple objectives and constraints. The model can provide explainability through different model-agnostic methods, including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). The proposed approach is applied to calculating the Air Quality Index (AQI) from particle data. The results show that the proposed approach can improve the performance and explainability of AI systems compared to other existing methods. Moreover, the proposed model can ensure a balance between accuracy and explainability in comparison to other models.
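The parameter-tuning step can be pictured with the minimal sketch below, in which differential evolution searches for the weights of a drastically simplified two-rule belief model against a handful of particle readings. The rule structure, the PM2.5 values, and the target AQI numbers are invented for illustration and do not reflect the thesis's BRBaDE or joint-optimization formulation.

```python
# Hedged sketch: tuning the weights of a toy belief-rule-like model with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

pm25 = np.array([5.0, 20.0, 60.0, 120.0])        # stand-in particle readings
aqi_true = np.array([20.0, 60.0, 150.0, 220.0])  # stand-in target AQI values

def brb_predict(weights, x):
    """Two 'rules' (low / high pollution) combined by activation-weighted consequents."""
    act_low = np.exp(-x / 30.0) * weights[0]          # activation of the 'low pollution' rule
    act_high = (1 - np.exp(-x / 30.0)) * weights[1]   # activation of the 'high pollution' rule
    total = act_low + act_high
    return (act_low * weights[2] + act_high * weights[3]) / total

def loss(weights):
    return np.mean((brb_predict(weights, pm25) - aqi_true) ** 2)

bounds = [(0.01, 1.0), (0.01, 1.0), (0.0, 100.0), (0.0, 400.0)]   # rule weights + consequent values
result = differential_evolution(loss, bounds, seed=0, maxiter=200)
print(result.x, result.fun)
```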
56

Interactive or textual explanations? : Evaluation of two explanatory approaches within in-vehicle adaptive systems. / Interaktiva förklaringar eller textförklaringar? : Utvärdering av två förklaringsmetoder för adaptiva system i fordon.

Carollo, Gabriele January 2023
Nowadays, adaptive systems represent an essential resource for accomplishing everyday tasks, meeting users' needs with personalized adaptations. Such systems are usually based on models that rely on complex artificial intelligence algorithms, which make the system's behavior appear opaque to users. Therefore, explanations of the system's behavior are needed to improve system transparency. In this study, two explanatory approaches, named text-based and interactive, are evaluated in the context of three in-vehicle adaptive systems. The research aims to assess the potential of the interactive explanations compared with the text-based ones in terms of acceptance of the system, trust in the system, and user experience (UX). To this purpose, a real-world driving study with 38 participants was conducted. The results in general do not indicate significant differences in the three variables between the two explanatory approaches, although a slight predominance of the text-based concept is recognizable. However, the results for UX and trust encourage further exploration of interactive explanatory approaches. The research also highlighted the driver's need for control over the adaptations, and the necessity of tailoring the explanatory approach to the explanation design pattern used and the adaptive system targeted. These findings encourage future research into the design of user-centered in-vehicle adaptive systems.
57

Biomarker Identification for Breast Cancer Types Using Feature Selection and Explainable AI Methods

La Rosa Giraud, David E 01 January 2023
This paper investigates the impact of the LASSO, mRMR, SHAP, and reinforcement feature selection techniques on random forest models for the breast cancer subtype markers ER, HER2, PR, and TN, identifies a small subset of biomarkers that could potentially cause the disease, and explains the resulting models using explainable AI techniques. This matters because in areas such as healthcare, understanding why a model makes a specific decision is important: the prediction concerns the diagnosis of an individual, which requires reliable AI. Another contribution is using feature selection methods to identify a small subset of biomarkers capable of predicting whether a specific RNA sequence will be positive for one of the cancer labels. The study begins by obtaining baseline accuracy metrics using a random forest model on The Cancer Genome Atlas's breast cancer database, and then explores the effects of feature selection, showing that selecting different numbers of features significantly influences model accuracy and that a small number of potential biomarkers that may produce a specific type of breast cancer can be selected. Once the biomarkers were selected, the explainable AI techniques SHAP and LIME were applied to the models, providing insight into influential biomarkers and their impact on predictions. The main results are that some biomarkers with high influence over the model predictions are shared between subsets, that the LASSO and reinforcement feature selection sets scored the highest accuracy of all sets, and that applying the existing explainable AI methods SHAP and LIME gave insight into how the selected features affect the models' predictions.
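A minimal version of the described pipeline, L1-based gene selection followed by a random forest classifier, might look like the sketch below. The synthetic expression matrix, the ER-like binary label, and the 30-gene cutoff are assumptions for illustration; the thesis works with The Cancer Genome Atlas data.

```python
# Hedged sketch: LASSO-style (L1) biomarker selection feeding a random forest classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2000))                  # stand-in RNA-seq matrix (samples x genes)
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=300) > 0).astype(int)  # stand-in ER status

# L1-penalised logistic regression: rank genes by |coefficient| and keep the top 30
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.argsort(np.abs(lasso.coef_[0]))[::-1][:30]    # candidate biomarker indices

rf_full = RandomForestClassifier(n_estimators=200, random_state=0)
rf_sel = RandomForestClassifier(n_estimators=200, random_state=0)
print("all genes:", cross_val_score(rf_full, X, y, cv=3).mean())
print("selected genes:", cross_val_score(rf_sel, X[:, selected], y, cv=3).mean())
```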
58

Comparison of Logistic Regression and an Explained Random Forest in the Domain of Creditworthiness Assessment

Ankaräng, Marcus, Kristiansson, Jakob January 2021
As the use of AI in society develops, the requirement for explainable algorithms has increased. A challenge with many modern machine learning algorithms is that, due to their often complex structures, they lack the ability to produce human-interpretable explanations. Research within explainable AI has resulted in methods that can be applied on top of non-interpretable models to motivate their decision bases. The aim of this thesis is to compare an unexplained machine learning model used in combination with an explanatory method against a model that is explainable through its inherent structure. Random forest was the unexplained model in question and the explanatory method was SHAP. The explainable model was logistic regression, which is explanatory through its feature weights. The comparison was conducted within the area of creditworthiness and was based on predictive performance and explainability. Furthermore, the thesis uses these models to investigate what characterizes loan applicants who are likely to default. The comparison showed that neither model performed significantly better than the other in terms of predictive performance. Characteristics of bad loan applicants differed between the two algorithms; three important aspects were the applicant's age, where they lived, and whether they had a residential phone. Regarding explainability, several advantages of SHAP were observed. With SHAP, explanations can be produced on both a local and a global level. SHAP also offers a way to take advantage of the high performance of many modern machine learning algorithms while fulfilling today's increased requirement of transparency.
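The contrast described above can be illustrated with a short sketch: a logistic regression whose weights are directly readable, next to a random forest explained post hoc with SHAP. The synthetic credit features and labels are assumptions made only for the example.

```python
# Hedged sketch: inherently interpretable logistic regression vs. random forest + SHAP.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))                    # stand-in applicant features (age, income, ...)
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

logreg = LogisticRegression().fit(X, y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

print("logistic regression weights:", logreg.coef_[0])   # global, inherent explanation

explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X[:1])               # local explanation for one applicant
print("SHAP values for first applicant:", shap_values)
```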
59

Explainable AI for supporting operators in manufacturing machines maintenance : Evaluating different techniques of explainable AI for a machine learning model that can be used in a manufacturing environment / Förklarlig AI för att stödja operatörer inom tillverkning underhåll av maskiner : Utvärdera olika tekniker för förklarabar AI för en maskininlärningsmodell som kan användas i en tillverkningsmiljö

Di Flumeri, Francesco January 2022
Monitoring and predicting machine breakdowns are of vital importance in the manufacturing industry. Machine learning models could be used to improve these breakdown predictions. However, the operators responsible for the machines need to trust and understand the predictions in order to base their decisions on the information. For this reason, Explainable Artificial Intelligence (XAI) was introduced: the set of artificial intelligence systems that can provide predictions in an intelligible and trustworthy form. Hence, the purpose of this research is to study different XAI techniques in order to discover the most suitable methodology for allowing people without a machine learning background, employed in a manufacturing environment, to understand and trust predictions. Four XAI interfaces were tested: three integrated XAI techniques identified through a literature review, and one presenting an experimental XAI facility based on a machine learning model for outlier identification. To predict future machine states, classifiers based on Random Forest were built, while a model based on Isolation Forest was built to identify anomalies. In addition, a user study was carried out to discern end-users' perspectives on the four XAI interfaces. The final results showed that the XAI interface based on anomalous production values gained high approval among users with no or only basic machine learning knowledge.
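The two model families named above can be sketched as follows: a Random Forest classifier for breakdown prediction and an Isolation Forest flagging anomalous production values. The synthetic sensor readings and labels are assumptions for illustration.

```python
# Hedged sketch: random forest for breakdown prediction plus isolation forest for anomalies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(5)
sensors = rng.normal(size=(1000, 4))                             # stand-in machine sensor readings
breakdown = (sensors[:, 0] + sensors[:, 1] > 1.5).astype(int)    # stand-in breakdown label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(sensors, breakdown)
outliers = IsolationForest(contamination=0.05, random_state=0).fit(sensors)

latest = rng.normal(loc=2.0, size=(1, 4))                        # unusually high readings
print("breakdown risk:", clf.predict_proba(latest)[0, 1])
print("anomalous production values?", outliers.predict(latest)[0] == -1)
```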
60

Evolving Rule Based Explainable Artificial Intelligence for Decision Support System of Unmanned Aerial Vehicles

Keneni, Blen M. 14 December 2018
No description available.
