Interpreting embedding models of knowledge bases. / Interpretando modelos de embedding de bases de conhecimento.

Gusmão, Arthur Colombini 26 November 2018 (has links)
Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice, their usefulness is hurt by their incompleteness. To address this issue, several techniques aim at performing knowledge base completion, of which embedding models are efficient, attain state-of-the-art accuracy, and eliminate the need for feature engineering. However, embedding models' predictions are notoriously hard to interpret. In this work, we propose model-agnostic methods that allow one to interpret embedding models by extracting weighted Horn rules from them. More specifically, we show how the so-called "pedagogical techniques", from the literature on neural networks, can be adapted to take into account the large-scale relational aspects of knowledge bases, and show experimentally their strengths and weaknesses.
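
As a rough, hypothetical illustration of the pedagogical idea sketched in this abstract — treating the trained embedding model as a black-box oracle, labelling entity pairs with its predictions for one target relation, and fitting an interpretable learner whose decision paths read like Horn-rule bodies — a minimal Python sketch could look as follows. The toy knowledge base, the stand-in scoring function, the relation names, and the threshold are all placeholders, not the thesis's actual method or data.

```python
# A minimal, hypothetical sketch of the "pedagogical" idea: treat a trained
# embedding model as a black-box oracle, label entity pairs with its
# predictions for one target relation, and fit an interpretable learner
# (here a shallow decision tree) whose paths read like Horn-rule bodies.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

entities = [f"e{i}" for i in range(30)]
body_relations = ["parent_of", "lives_in", "works_at"]   # candidate rule bodies
target_relation = "related_to"                           # head of extracted rules

rng = np.random.default_rng(0)
# Placeholder knowledge base: which body relations hold for each entity pair.
kb = {(h, t): {r for r in body_relations if rng.random() < 0.2}
      for h, t in itertools.permutations(entities, 2)}

def embedding_score(h, r, t):
    """Stand-in for the trained embedding model's triple score."""
    return rng.random()  # replace with the real model's scoring function

pairs = list(kb)
X = np.array([[int(r in kb[p]) for r in body_relations] for p in pairs])
y = np.array([int(embedding_score(p[0], target_relation, p[1]) > 0.8)
              for p in pairs])  # oracle labels from the embedding model

# A shallow tree = a small set of conjunctive (Horn-like) conditions.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=body_relations))
```

In a real pipeline, the random scorer would be replaced by the embedding model's own scoring function and the binary one-hop features by richer relational path features; the extracted paths and their confidences then play the role of weighted Horn rules.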

Explaining the output of a black box model and a white box model: an illustrative comparison

Viklund, Joel January 2020 (has links)
The thesis investigates how one should determine the appropriate transparency of an information processing system from a receiver perspective. Past research has suggested that a model should be maximally transparent for what are labeled ”high stake decisions”. Instead of motivating the choice of a model's transparency by the non-rigorous criterion that the model contributes to a high stake decision, this thesis explores an alternative method. The suggested method lets the transparency depend on how well an explanation of the model's output satisfies the purpose of an explanation. As a result, we do not have to ask whether it is a high stake decision; instead, we should make sure the model is sufficiently transparent to provide an explanation that satisfies the expressed purpose of an explanation.

Explainable AI methods for credit card fraud detection : Evaluation of LIME and SHAP through a User Study

Ji, Yingchao January 2021 (has links)
In the past few years, Artificial Intelligence (AI) has evolved into a powerful tool applied in multi-disciplinary fields to resolve sophisticated problems. As AI becomes more powerful and ubiquitous, the AI methods often also become opaque, which might lead to trust issues for users of the AI systems as well as failure to meet the legal requirements of AI transparency. In this report, the possibility of making a credit-card fraud detection support system explainable to users is investigated through a quantitative survey. A publicly available credit card dataset was used. Deep Learning and Random Forest were the two Machine Learning (ML) methods implemented and applied to the credit card fraud dataset, and their results were evaluated in terms of accuracy, recall, sufficiency, and F1 score. After that, two explainable AI (XAI) methods - SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) - were implemented and applied to the results obtained from these two ML methods. Finally, the XAI results were evaluated through a quantitative survey. The results from the survey revealed that the XAI explanations can slightly increase the users' impression of the system's ability to reason, and that LIME had a slight advantage over SHAP in terms of explainability. Further investigation of visualizing data pre-processing and the training process is suggested to offer deeper explanations for users.
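
A condensed, hypothetical sketch of the kind of pipeline this abstract describes — train a fraud classifier, then explain individual predictions with SHAP and LIME — is given below. The synthetic imbalanced data and feature names stand in for the public credit card dataset; only the shap and lime calls follow those libraries' documented usage.

```python
# Hypothetical sketch: train a fraud classifier, then explain single
# predictions with SHAP and LIME. Synthetic data stands in for the
# public credit card dataset.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.98],
                           random_state=0)          # imbalanced, fraud-like
feature_names = [f"V{i}" for i in range(10)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# SHAP: per-feature contributions for one transaction.
shap_values = shap.TreeExplainer(model).shap_values(X_te[:1])
print(np.shape(shap_values))

# LIME: local surrogate explanation for the same transaction.
lime_exp = LimeTabularExplainer(X_tr, feature_names=feature_names,
                                class_names=["legit", "fraud"],
                                mode="classification")
print(lime_exp.explain_instance(X_te[0], model.predict_proba,
                                num_features=5).as_list())
```

The study's survey step then compares how users perceive the two kinds of explanation; that part is methodological rather than code.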

Human In Command Machine Learning

Holmberg, Lars January 2021 (has links)
Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The area develops rapidly, and exciting unexplored design spaces are constantly laid bare. The focus in this work is one of these areas: ML systems where decisions concerning ML model training, usage, and selection of target domain lie in the hands of domain experts. This work is thus about ML systems that function as tools that augment and/or enhance human capabilities. The approach presented is denoted Human In Command ML (HIC-ML) systems. To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and battery sorting. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments are used as illustrative examples to explore settings where domain experts potentially can independently train an ML model and, in an iterative fashion, interact with it and interpret and understand its decisions. HIC-ML should be seen as a governance principle that focuses on adding value and meaning to users. In this work, concrete application areas are presented and discussed. To open up for designing ML-based products for the area, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented by imposing structure and rigidity derived from scientific explanations. Together, this opens up for a contextual shift in ML and makes new application areas probable, areas that naturally couple the usage of AI technology to human virtues and, potentially, as a consequence, can result in a democratisation of the usage of and knowledge concerning this powerful technology.

Robot Proficiency Self-Assessment Using Assumption-Alignment Tracking

Cao, Xuan 01 April 2024 (has links) (PDF)
A robot is proficient if its performance for its task(s) satisfies a specific standard. While the design of autonomous robots often emphasizes such proficiency, another important attribute of autonomous robot systems is their ability to evaluate their own proficiency. A robot should be able to conduct proficiency self-assessment (PSA), i.e., assess how well it can perform a task before, during, and after it has attempted the task. We propose the assumption-alignment tracking (AAT) method, which provides time-indexed assessments of the veracity of robot generators' assumptions, for designing autonomous robots that can effectively evaluate their own performance. AAT can be considered a general framework for using robot sensory data to extract useful features, which are then used to build data-driven PSA models. We develop various AAT-based data-driven approaches to PSA from different perspectives. First, we use AAT for estimating robot performance. AAT features encode how the robot's current running condition varies from the normal condition, which correlates with the deviation level between the robot's current performance and normal performance. We use the k-nearest neighbor algorithm to model that correlation. Second, AAT features are used for anomaly detection. We treat anomaly detection as a one-class classification problem where only data from the robot operating in normal conditions are used in training, decreasing the burden of acquiring data in various abnormal conditions. The cluster boundary of data points from normal conditions, which serves as the decision boundary between normal and abnormal conditions, can be identified by mainstream one-class classification algorithms. Third, we improve PSA models that predict robot success/failure by introducing meta-PSA models that assess the correctness of PSA models. The probability that a PSA model's prediction is correct is conditioned on four features: 1) the mean distance from a test sample to its nearest neighbors in the training set; 2) the predicted probability of success made by the PSA model; 3) the ratio between the robot's current performance and its performance standard; and 4) the percentage of the task the robot has already completed. Meta-PSA models trained on the four features using a Random Forest algorithm improve PSA models with respect to both discriminability and calibration. Finally, we explore how AAT can be used to generate a new type of explanation of robot behavior/policy from the perspective of a robot's proficiency. AAT provides three pieces of information for explanation generation: (1) veracity assessment of the assumptions on which the robot's generators rely; (2) proficiency assessment measured by the probability that the robot will successfully accomplish its task; and (3) counterfactual proficiency assessment computed with the veracity of some assumptions varied hypothetically. The information provided by AAT fits the situation awareness-based framework for explainable artificial intelligence. The efficacy of AAT is comprehensively evaluated using robot systems with a variety of robot types, generators, hardware, and tasks, including a simulated robot navigating in a maze-based (discrete time) Markov chain environment, a simulated robot navigating in a continuous environment, and both a simulated and a real-world robot arranging blocks of different shapes and colors in a specific order on a table.
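
The AAT feature extraction itself is robot-specific and not reproduced here, but two of the data-driven pieces named in this abstract — k-nearest-neighbor performance estimation and one-class classification trained only on normal-condition data — can be pictured with the hypothetical Python sketch below, in which the AAT feature vectors and the performance signal are random placeholders.

```python
# Hypothetical sketch of two data-driven PSA components described above:
# (1) k-NN regression from AAT feature vectors to a performance score,
# (2) one-class classification trained only on "normal condition" data.
# The AAT features here are random placeholders for the robot-specific ones.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(500, 8))      # AAT features, normal runs
perf = 1.0 - 0.1 * np.abs(normal_feats).mean(axis=1)    # toy performance signal

# (1) Performance estimation: runs that are nearby in AAT-feature space
#     are assumed to have similar performance.
knn = KNeighborsRegressor(n_neighbors=5).fit(normal_feats, perf)

# (2) Anomaly detection: fit the boundary of normal-condition data only.
detector = OneClassSVM(nu=0.05, gamma="scale").fit(normal_feats)

current = rng.normal(2.5, 1.0, size=(1, 8))             # a degraded-looking run
print("estimated performance:", float(knn.predict(current)[0]))
print("normal condition?     ", int(detector.predict(current)[0]) == 1)
```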

Evolutionary Belief Rule based Explainable AI to Predict Air Pollution

Zisad, Sharif Noor January 2023 (has links)
This thesis presents a novel approach to making Artificial Intelligence (AI) more explainable by using a Belief Rule Based Expert System (BRBES). A BRBES is a type of expert system that can handle both qualitative and quantitative information under uncertainty and incompleteness by using if-then rules with belief degrees. A BRBES can model the human inference process and provide transparent and interpretable reasoning for its decisions. However, designing a BRBES requires tuning several parameters, such as the rule weights, the belief degrees, and the inference parameters. To address this challenge, this thesis proposes using a Differential Evolution (DE) algorithm to optimize these parameters automatically. DE algorithms, such as BRB adaptive DE (BRBaDE) and Joint Optimization of BRB, are metaheuristics that optimize a problem by iteratively creating new candidate solutions from existing ones according to simple formulae. The DE algorithm does not require any prior knowledge of the problem or its gradient and can handle complex optimization problems with multiple objectives and constraints. The model can provide explainability by using different model-agnostic methods, including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). The proposed approach is applied to calculate the Air Quality Index (AQI) using particle data. The results show that the proposed approach can improve the performance and explainability of AI systems compared to other existing methods. Moreover, the proposed model can ensure a balance between accuracy and explainability in comparison to other models.
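
The full BRBES inference (evidential reasoning over belief degrees) is beyond a short example, but the tuning step this abstract describes — a differential evolution search over rule parameters — can be pictured with the hypothetical sketch below, which uses SciPy's stock DE implementation on a drastically simplified rule-based AQI predictor and made-up particle readings.

```python
# Hypothetical sketch of the parameter-tuning step: differential evolution
# searching the rule weights and consequents of a drastically simplified
# belief-rule model that maps a particle measurement to an AQI-like score.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
pm = rng.uniform(0, 300, size=200)                    # placeholder particle readings
aqi_true = np.clip(pm * 1.2 + rng.normal(0, 10, 200), 0, 500)

ref_points = np.array([0.0, 75.0, 150.0, 300.0])      # rule antecedents ("low"..."hazardous")

def predict(weights, x):
    # Rule activation falls off with distance to its reference point;
    # the prediction is the activation-weighted combination of rule outputs.
    act = np.exp(-((x[:, None] - ref_points) ** 2) / (2 * 50.0 ** 2)) * weights[:4]
    act /= act.sum(axis=1, keepdims=True)
    return act @ weights[4:]                          # weights[4:] = rule consequents

def loss(weights):
    return np.mean((predict(weights, pm) - aqi_true) ** 2)

bounds = [(0.01, 1.0)] * 4 + [(0.0, 500.0)] * 4       # rule weights + consequents
result = differential_evolution(loss, bounds, seed=0, maxiter=50)
print("tuned parameters:", np.round(result.x, 2), "MSE:", round(result.fun, 1))
```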

Biomarker Identification for Breast Cancer Types Using Feature Selection and Explainable AI Methods

La Rosa Giraud, David E 01 January 2023 (has links) (PDF)
This paper investigates the impact of the LASSO, mRMR, SHAP, and Reinforcement Feature Selection techniques on random forest models for the breast cancer subtype markers ER, HER2, PR, and TN, and identifies a small subset of biomarkers that could potentially cause the disease, explaining them using explainable AI techniques. This is important because, in areas such as healthcare, understanding why a model makes a specific decision matters: a diagnosis concerns an individual and therefore requires reliable AI. Another contribution is using feature selection methods to identify a small subset of biomarkers capable of predicting whether a specific RNA sequence will be positive for one of the cancer labels. The study begins by obtaining baseline accuracy metrics using a random forest model on The Cancer Genome Atlas's breast cancer database, and then explores the effects of feature selection: selecting different numbers of features significantly influences model accuracy, and a small number of potential biomarkers that may produce a specific type of breast cancer can be selected. Once the biomarkers were selected, the explainable AI techniques SHAP and LIME were applied to the models and provided insight into influential biomarkers and their impact on predictions. The main results are that some biomarkers with high influence over the model predictions are shared between subsets, that the LASSO and Reinforcement Feature Selection sets score the highest accuracy of all sets, and that applying the existing explainable AI methods SHAP and LIME provides insight into how the selected features affect the models' predictions.
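
One arm of the workflow this abstract describes — L1 (LASSO-style) feature selection, a random forest on the selected genes, and SHAP values for that forest — might be coded roughly as below; the synthetic expression matrix and the simulated ER label are placeholders for the TCGA breast cancer data.

```python
# Hypothetical sketch of one arm of the workflow: L1 (LASSO-style) feature
# selection, a random forest on the selected genes, and SHAP values for the
# forest. The synthetic matrix stands in for the TCGA breast cancer data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 1000))                     # samples x genes (placeholder)
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 400) > 0).astype(int)   # "ER status"
genes = np.array([f"gene_{i}" for i in range(1000)])

# LASSO-style selection: keep genes with non-zero L1-penalised coefficients.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)).fit(X, y)
selected = genes[selector.get_support()]
X_sel = selector.transform(X)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_sel, y)
shap_values = shap.TreeExplainer(model).shap_values(X_sel[:50])
print(f"{len(selected)} candidate biomarkers, e.g. {selected[:5].tolist()}")
```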

Förklarbar AI och transparens i AI system för SMEs / Explainable AI and transparency in AI systems for SMEs

Malmfors, Hilda, Beronius, Herman January 2024 (has links)
The study examines how explainable AI (XAI) and transparency can increase trust and facilitate the adoption of AI technologies within small and medium-sized enterprises (SMEs). These businesses face significant challenges in integrating AI due to limited technical expertise and resources. The purpose of the study is to explore how XAI could bridge the gap between complex AI models and human understanding, thereby enhancing trust and operational efficiency. The research methodology includes a case study with a literature review and expert interviews. The literature review provides background and context for the research question, while the expert interviews gather insights from employees in various roles and with different levels of experience within the participating SMEs. This approach offers a comprehensive understanding of the current state of AI adoption and the perceived importance of XAI and transparency. The results indicate a significant knowledge gap among SME employees regarding AI technologies, with many expressing a lack of familiarity and trust. However, there is strong consensus on the importance of transparency and explainability in AI systems. Participants noted that XAI could significantly improve trust in and acceptance of AI technologies by making AI decisions more understandable and transparent. Specific benefits identified include better decision support, increased operational efficiency, and enhanced customer confidence. The study concludes that XAI and transparency are crucial for building trust and facilitating the adoption of AI technologies in SMEs. By making AI systems more comprehensible, XAI addresses the challenges posed by limited technical expertise and promotes broader acceptance of AI. The research emphasizes the need for continuous education and clear communication strategies to improve AI understanding among stakeholders within SMEs. To enhance transparency and user trust in AI systems, SMEs should prioritize the integration of XAI frameworks. It is essential to develop user-centered tools that provide clear explanations of AI decisions and to invest in ongoing education and training programs. Additionally, a company culture that values transparency and ethical AI practices would further support the successful adoption of AI technologies. The study contributes to the ongoing discourse on AI adoption in SMEs by providing empirical evidence on the role of XAI in building trust and improving transparency. It offers practical recommendations for SMEs to effectively leverage AI technologies while ensuring ethical and transparent AI practices in line with regulatory requirements and societal expectations.

Explainable AI in Eye Tracking / Förklarbar AI inom ögonspårning

Liu, Yuru January 2024 (has links)
This thesis delves into eye tracking, a technique for estimating an individual's point of gaze and understanding human interactions with the environment. A blossoming area within eye tracking is appearance-based eye tracking, which leverages deep neural networks to predict gaze positions from eye images. Despite its efficacy, the decision-making processes inherent in deep neural networks remain ’black boxes’ to humans. This lack of transparency challenges the trust human professionals place in the predictions of appearance-based eye tracking models. To address this issue, explainable AI is introduced, aiming to unveil the decision-making processes of deep neural networks and render them comprehensible to humans. This thesis employs various post-hoc explainable AI methods, including saliency maps, gradient-weighted class activation mapping, and guided backpropagation, to generate heat maps of eye images. These heat maps reveal discriminative areas pivotal to the model's gaze predictions, and glints emerge as being of paramount importance. To explore additional features in gaze estimation, a glint-free dataset is derived from the original glint-preserved dataset by employing blob detection to eliminate glints from each eye image. A corresponding glint-free model is trained on this dataset. Cross-evaluations of the two datasets and models show that the glint-free model extracts features (pupil, iris, and eyelids) complementary to those of the glint-preserved model (glints), with both feature sets exhibiting comparable intensities in the heat maps. To make use of all the features, an augmented dataset is constructed, incorporating selected samples from both the glint-preserved and glint-free datasets. An augmented model is then trained on this dataset, demonstrating superior performance compared to both the glint-preserved and glint-free models. The augmented model excels due to its training on a diverse set of glint-preserved and glint-free samples: it prioritizes glints when they are of high quality and shifts focus to the entire eye when glint quality is poor. This exploration enhances the understanding of the critical factors influencing gaze prediction and contributes to the development of more robust and interpretable appearance-based eye tracking models.
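
The simplest of the methods mentioned in this abstract, a vanilla saliency map, is just the gradient of the predicted gaze point with respect to the input eye image. The hypothetical PyTorch sketch below shows that computation for a tiny stand-in CNN and a random image; the thesis's actual appearance-based model, its dataset, and the Grad-CAM and guided-backpropagation variants are not reproduced.

```python
# Hypothetical sketch of a vanilla saliency map: the gradient of the
# predicted gaze point with respect to the input eye image. The tiny CNN
# and random image are placeholders for the thesis's model and data.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in gaze regressor
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(8 * 8 * 8, 2),               # outputs (x, y) gaze coordinates
).eval()

eye_image = torch.rand(1, 1, 64, 64, requires_grad=True)   # placeholder frame

gaze = model(eye_image)                    # predicted gaze point, shape (1, 2)
gaze.norm().backward()                     # back-propagate a scalar of the output

# Saliency: per-pixel magnitude of the input gradient; bright regions
# (e.g. glints, pupil edge) are the ones the prediction is most sensitive to.
saliency = eye_image.grad.abs().squeeze()
print(saliency.shape, float(saliency.max()))
```

Grad-CAM and guided backpropagation refine the same idea by using intermediate activations or modified ReLU gradients instead of the raw input gradient.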
