101

Transparent ML Systems for the Process Industry : How can a recommendation system perceived as transparent be designed for experts in the process industry?

Fikret, Eliz January 2023
Process monitoring is a field that can greatly benefit from the adoption of machine learning solutions like recommendation systems. However, for domain experts to embrace these technologies within their work processes, clear explanations are crucial. Therefore, it is important to adopt user-centred methods for designing more transparent recommendation systems. This study explores this topic through a case study in the pulp and paper industry. By employing a user-centred and design-first adaptation of the question-driven design process, this study aims to uncover the explanation needs and requirements of industry experts, as well as formulate design visions and recommendations for transparent recommendation systems. The results of the study reveal five common explanation types that are valuable for domain experts while also highlighting limitations in previous studies on explanation types. Additionally, nine requirements are identified and utilised in the creation of a prototype, which domain experts evaluate. The evaluation process leads to the development of several design recommendations that can assist HCI researchers and designers in creating effective, transparent recommendation systems. Overall, this research contributes to the field of HCI by enhancing the understanding of transparent recommendation systems from a user-centred perspective.
102

CondBEHRT: A Conditional Probability Based Transformer for Modeling Medical Ontology

Lerjebo, Linus, Hägglund, Johannes January 2022
In recent years, the number of electronic healthcare records (EHRs) has increased rapidly. An EHR is a systematized collection of patient health information in digital format. EHR systems maintain the diagnoses, medications, procedures, and lab tests associated with a patient each time they visit a hospital or care center. Since the information spans multiple visits to hospitals or care centers, EHRs can be used to increase the quality of care. This is especially useful when working with chronic diseases, because they tend to evolve. Many deep learning methods make use of these EHRs to solve different prediction tasks. Transformers have shown impressive results in many sequence-to-sequence tasks within natural language processing. This paper focuses on using transformers, specifically on using a sequence of visits for prediction tasks. The model presented in this paper is called CondBEHRT. Compared to previous state-of-the-art models, CondBEHRT focuses on using as much of the available data as possible to understand the patient's trajectory. Based on all patients, the model learns the medical ontology between diagnoses, medications, and procedures. The results show that the inferred medical ontology can simulate reality quite well. Having the medical ontology also gives insight into the explainability of model decisions. We also compare the proposed model with state-of-the-art methods using two different use cases: predicting the codes given in the next visit, and predicting whether the patient will be readmitted within 30 days.
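To make the visit-sequence idea concrete, here is a minimal sketch (not the CondBEHRT architecture, which the abstract does not specify in detail) of a transformer encoder over a patient's flattened sequence of medical codes, predicting the codes of the next visit. The vocabulary size, embedding dimensions, pooling, and prediction head are all illustrative assumptions.

```python
# Illustrative sketch only: a minimal transformer over a patient's visit/code
# sequence for next-visit code prediction. This is NOT the CondBEHRT model;
# vocabulary, dimensions, and the prediction head are assumptions.
import torch
import torch.nn as nn

VOCAB = 2000      # assumed number of diagnosis/medication/procedure codes
EMB = 64

class VisitSequenceModel(nn.Module):
    def __init__(self, vocab=VOCAB, emb=EMB, layers=2, heads=4):
        super().__init__()
        self.code_emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.visit_emb = nn.Embedding(64, emb)   # which visit a code belongs to
        enc_layer = nn.TransformerEncoderLayer(emb, heads, dim_feedforward=128,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(emb, vocab)        # multi-label next-visit codes

    def forward(self, codes, visit_ids):
        # codes, visit_ids: (batch, seq_len) integer tensors
        h = self.code_emb(codes) + self.visit_emb(visit_ids)
        h = self.encoder(h, src_key_padding_mask=(codes == 0))
        # simple mean pooling over all positions (padding included, kept simple)
        return self.head(h.mean(dim=1))          # logits over the code vocabulary

# Toy usage: one patient, two visits flattened into a single code sequence.
codes = torch.tensor([[5, 17, 3, 250, 42, 0, 0]])
visit_ids = torch.tensor([[0, 0, 0, 1, 1, 0, 0]])
logits = VisitSequenceModel()(codes, visit_ids)
print(logits.shape)  # torch.Size([1, 2000])
```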
103

Explainable Reinforcement Learning for Risk Mitigation in Human-Robot Collaboration Scenarios / Förklarbar förstärkningsinlärning inom människa-robot sammarbete för riskreducering

Iucci, Alessandro January 2021
Reinforcement Learning (RL) algorithms are highly popular in the robotics field for solving complex problems, learning from dynamic environments, and generating optimal outcomes. However, one of the main limitations of RL is the lack of model transparency, including the inability to explain why a given output was generated. Explainability becomes even more crucial when RL outputs influence human decisions, such as in Human-Robot Collaboration (HRC) scenarios, where safety requirements must be met. This work focuses on the application of two explainability techniques, “Reward Decomposition” and “Autonomous Policy Explanation”, to an RL algorithm that is the core of a risk mitigation module for robot operation in a collaborative automated warehouse scenario. “Reward Decomposition” gives insight into the factors that impacted the robot's choice by decomposing the reward function into sub-functions. It also allows creating Minimal Sufficient Explanations (MSX), sets of relevant reasons for each decision taken during the robot's operation. The second technique, “Autonomous Policy Explanation”, provides a global overview of the robot's behavior by answering queries asked by human users, and gives insight into the decision guidelines embedded in the robot's policy. Since the synthesized policy descriptions and the answers to queries are in natural language, this tool facilitates algorithm diagnosis even by non-expert users. The results show an improvement in the RL algorithm, which now chooses more evenly distributed actions, and a full policy for the robot's decisions is produced that is largely aligned with expectations. The work provides an analysis of the results of applying both techniques, both of which led to increased transparency of the robot's decision process. These explainability methods not only built trust in the robot's choices, which proved to be among the optimal ones in most cases, but also made it possible to find weaknesses in the robot's policy, making them a helpful tool for debugging purposes. / Algoritmer för förstärkningsinlärning (RL-algoritmer) är mycket populära inom robotikområdet för att lösa komplexa problem, att lära sig av dynamiska miljöer och att generera optimala resultat. En av de viktigaste begränsningarna för RL är dock bristen på modellens transparens. Detta inkluderar oförmågan att förklara den bakomliggande process (algoritm eller modell) som genererade ett visst returvärde. Förklarbarheten blir ännu viktigare när resultatet från en RL-algoritm påverkar mänskliga beslut, till exempel i HRC-scenarier där säkerhetskrav bör uppfyllas. Detta arbete fokuserar på användningen av två förklarbarhetstekniker, “Reward Decomposition” och “Autonomous Policy Explanation”, tillämpat på en RL-algoritm som är kärnan i en riskreduceringsmodul för drift av samarbetande robotar på ett automatiserat lager. “Reward Decomposition” ger en inblick i vilka faktorer som påverkade robotens val genom att bryta ner belöningsfunktionen i mindre funktioner. Det gör det också möjligt att formulera en MSX (minimal sufficient explanation), en uppsättning av relevanta skäl för varje beslut som har fattats under robotens drift. Den andra tillämpade tekniken, “Autonomous Policy Explanation”, ger ett generellt perspektiv över robotens beteende genom att mänskliga användare får ställa frågor till roboten. Detta ger även insikt i de beslutsriktlinjer som är inbäddade i robotens policy.
Då syntesen av policybeskrivningarna och svaren på frågorna sker på naturligt språk underlättar detta algoritmdiagnos även för icke-expertanvändare. Resultaten visade att det finns en förbättring av RL-algoritmen som nu väljer mer jämnt fördelade åtgärder. Dessutom produceras en fullständig policy för robotens beslut som för det mesta är anpassad till förväntningarna. Rapporten ger en analys av resultaten av tillämpningen av båda teknikerna, som visade att båda ledde till ökad transparens i robotens beslutsprocess. Förklaringsmetoderna gav inte bara förtroende för robotens val, vilket visade sig vara bland de optimala i de flesta fall, utan gjorde det också möjligt att hitta svagheter i robotens policy, vilket gjorde dem till ett verktyg som är användbart för felsökningsändamål.
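To illustrate the reward-decomposition idea the abstract describes, here is a hedged sketch of how per-component Q-values can yield a Minimal Sufficient Explanation: the smallest set of positive advantages of the chosen action that outweighs its total disadvantage relative to an alternative. The reward components and Q-values are invented for illustration and are not taken from the thesis.

```python
# Hedged sketch of reward decomposition and a Minimal Sufficient Explanation
# (MSX) in the spirit of decomposed-reward RL explanations. The Q-values and
# component names are made up; the thesis's actual module is not reproduced.
import numpy as np

# Per-component Q-values for two candidate actions (components are assumptions).
components = ["goal_progress", "collision_risk", "energy_cost"]
q_action_a = np.array([5.0, -1.0, -0.5])   # chosen action
q_action_b = np.array([3.5, -0.2, -0.3])   # alternative action

def msx(q_chosen, q_alt, names):
    """Smallest set of positive per-component advantages whose sum outweighs
    the total disadvantage of choosing q_chosen over q_alt."""
    delta = q_chosen - q_alt
    disadvantage = -delta[delta < 0].sum()            # total evidence against
    order = np.argsort(-delta)                        # biggest advantages first
    chosen, total = [], 0.0
    for i in order:
        if delta[i] <= 0 or total > disadvantage:
            break
        chosen.append((names[i], float(delta[i])))
        total += delta[i]
    return chosen

print(msx(q_action_a, q_action_b, components))
# [('goal_progress', 1.5)] -- a single advantage outweighs the 1.0 total disadvantage
```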
104

Interpreting Multivariate Time Series for an Organization Health Platform

Saluja, Rohit January 2020
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning, to ensure that machine learning models are reliable, fair, and can be held liable for their decision-making process. Moreover, in most real-world problems, making predictions with machine learning algorithms solves the problem only partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability for time series is still relatively unexplored compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to quantify the quality of the explanations produced when interpreting machine learning models. For this reason, the evaluation of interpretability is extremely important. The evaluation of interpretability for models built on time series appears completely unexplored in research circles. This thesis work focused on achieving and evaluating model-agnostic interpretability in a time series forecasting problem. The use case discussed in this thesis work focuses on finding a solution to a problem faced by a digital consultancy company. The digital consultancy wants to take a data-driven approach to understand the effect of various sales-related activities in the company on the sales deals the company closes. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. Interpretability was achieved using two novel model-agnostic interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). The explanations produced were then evaluated using human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay users understand the predictions made by the machine learning model. The results also indicated that LIME and SHAP explanations were almost equally understandable, with LIME performing better but by a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, this work can offer a very good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem. / Maskininlärningsbaserade system blir snabbt populära eftersom man har insett att maskiner är effektivare än människor när det gäller att utföra vissa uppgifter. Även om maskininlärningsalgoritmer är extremt populära, är de också mycket bokstavliga. Detta har lett till en enorm forskningsökning inom området tolkbarhet i maskininlärning för att säkerställa att maskininlärningsmodeller är tillförlitliga, rättvisa och kan hållas ansvariga för deras beslutsprocess. Dessutom löser det i de flesta verkliga problem bara delvis att enbart göra förutsägelser med maskininlärningsalgoritmer.
Tidsserier är en av de mest populära och viktiga datatyperna på grund av dess dominerande närvaro inom affärsverksamhet, ekonomi och teknik. Trots detta är tolkningsförmågan i tidsserier fortfarande relativt outforskad jämfört med tabell-, text- och bilddata. Med den växande forskningen inom området tolkbarhet inom maskininlärning finns det också ett stort behov av att kunna kvantifiera kvaliteten på förklaringar som produceras efter tolkning av maskininlärningsmodeller. Av denna anledning är utvärdering av tolkbarhet extremt viktig. Utvärderingen av tolkbarhet för modeller som bygger på tidsserier verkar helt outforskad i forskarkretsar. Detta uppsatsarbete fokuserar på att uppnå och utvärdera agnostisk modelltolkbarhet i ett tidsserieprognosproblem.  Fokus ligger i att hitta lösningen på ett problem som ett digitalt konsultföretag står inför som användningsfall. Det digitala konsultföretaget vill använda en datadriven metod för att förstå effekten av olika försäljningsrelaterade aktiviteter i företaget på de försäljningsavtal som företaget stänger. Lösningen innebar att inrama problemet som ett tidsserieprognosproblem för att förutsäga försäljningsavtalen och tolka den underliggande prognosmodellen. Tolkningsförmågan uppnåddes med hjälp av två nya tekniker för agnostisk tolkbarhet, lokala tolkbara modellagnostiska förklaringar (LIME) och Shapley additiva förklaringar (SHAP). Förklaringarna som producerats efter att ha uppnått tolkbarhet utvärderades med hjälp av mänsklig utvärdering av tolkbarhet. Resultaten av de mänskliga utvärderingsstudierna visar tydligt att de förklaringar som produceras av LIME och SHAP starkt hjälpte människor att förstå förutsägelserna från maskininlärningsmodellen. De mänskliga utvärderingsstudieresultaten visade också att LIME- och SHAP-förklaringar var nästan lika förståeliga med LIME som presterade bättre men med en mycket liten marginal. Arbetet som utförts under detta projekt kan enkelt utvidgas till alla tidsserieprognoser eller klassificeringsscenarier för att uppnå och utvärdera tolkbarhet. Dessutom kan detta arbete erbjuda en mycket bra ram för att uppnå och utvärdera tolkbarhet i alla maskininlärningsbaserade regressions- eller klassificeringsproblem.
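As a rough illustration of the model-agnostic setup the abstract describes, the sketch below frames a forecast on lag and activity features and explains individual predictions with SHAP's KernelExplainer (a LIME tabular explainer would be used analogously). The data is synthetic and the feature names are assumptions; the snippet only requires the shap and scikit-learn packages.

```python
# Hedged sketch: model-agnostic explanation of a time-series forecaster by
# framing the forecast on lag features and applying SHAP's KernelExplainer.
# The data is synthetic; the thesis's actual sales features are not available.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300
# Assumed lag/activity features: previous deals and two sales-activity counts.
X = np.column_stack([
    rng.normal(10, 2, n),   # deals closed in the previous period (lag-1)
    rng.poisson(5, n),      # e.g. meetings held
    rng.poisson(3, n),      # e.g. proposals sent
])
y = 0.6 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

background = shap.sample(X, 50, random_state=0)   # background set for KernelSHAP
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])         # local explanations for 5 forecasts
print(np.round(shap_values, 2))                    # per-feature contributions
```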
105

Explainable AI For Predictive Maintenance

Karlsson, Nellie, Bengtsson, My January 2022
As the complexity of deep learning models increases, the transparency of these systems decreases. It may be hard to understand the predictions a deep learning model makes, but even harder to understand why these predictions are made. Using eXplainable AI (XAI), we can gain greater knowledge of how the model operates and how the input the model receives can change its predictions. In this thesis, we apply Integrated Gradients (IG), an XAI method primarily used on image data, to datasets containing tabular and time-series data. We also evaluate how the results of IG differ across various types of models and how the choice of baseline can change the outcome. In these results, we observe that IG can be applied to both sequenced and non-sequenced data, with varying results. We can see that the gradient baseline does not affect the results of IG as much for models such as RNN, LSTM, and GRU, where the data contains time series, as it does for models like MLP with non-sequenced data. To confirm this, we also applied IG to SVM models, which showed that the choice of gradient baseline has a significant impact on the results of IG.
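Integrated Gradients itself has a standard formulation, so a minimal sketch can show exactly where the baseline (the quantity varied in the thesis) enters the attribution. The toy model and numerical gradient below are stand-ins; a real application would use the framework's autodiff, and the thesis's RNN/LSTM/GRU/MLP/SVM models are not reproduced.

```python
# Hedged sketch of Integrated Gradients for a generic differentiable model,
# illustrating the role the baseline plays. The model is a toy quadratic.
import numpy as np

def model(x):
    # Toy scalar model f(x) = w.x + x0*x1, just to have nontrivial gradients.
    w = np.array([1.0, -2.0, 0.5])
    return float(w @ x + x[0] * x[1])

def grad(x, eps=1e-5):
    # Numerical gradient of the toy model (a real model would use autodiff).
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (model(x + d) - model(x - d)) / (2 * eps)
    return g

def integrated_gradients(x, baseline, steps=50):
    # IG_i(x) = (x_i - x'_i) * mean over alpha of dF/dx_i at x' + alpha*(x - x')
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0, -1.0])
print(integrated_gradients(x, baseline=np.zeros(3)))       # zero baseline
print(integrated_gradients(x, baseline=np.full(3, 0.5)))   # alternative baseline
```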
106

Towards gradient faithfulness and beyond

Buono, Vincenzo, Åkesson, Isak January 2023
The riveting interplay of industrialization, informalization, and exponential technological growth of recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet deep learning's intrinsic black-box nature has been impeding its widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods and exploring new frontiers beyond it. In the first part, we theorize three novel gradient-based CAM formulations, aimed at replacing and superseding traditional Grad-CAM-based methods by addressing the intricate and persistent vanishing and saturating gradient problems. As a consequence, our work introduces novel enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized and adapted technique inspired by the well-established Expected Gradients difference-from-reference approach. Our proposed techniques (Expected Grad-CAM, Expected Grad-CAM++, and Guided Expected Grad-CAM) operate directly on the gradient computation, rather than on the recombination of the weighting factors, and are designed as a direct and seamless replacement for Grad-CAM and any posterior work built upon it. In the second part, we build on our prior proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our last and most advanced proposition, Hyper Expected Grad-CAM, challenges the current state and formulation of visual explanation and faithfulness and produces a new type of hybrid saliency maps that satisfy the notions of natural encoding and perceived resolution. By rethinking faithfulness and resolution, it is possible to generate saliency maps that are more detailed, localized, and less noisy, and, most importantly, composed only of concepts encoded in the model's layerwise understanding. Both contributions have been quantitatively and qualitatively compared and assessed in a 5 to 10 times larger evaluation study on the ILSVRC2012 dataset, against nine of the most recent and best-performing CAM techniques, across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across each quantitative metric, yielding a 0.15 increase in insertion compared to the highest-scoring explainer, PolyCAM, totaling an Ins-Del score of 0.72.
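The exact Expected Grad-CAM formulations are not given in the abstract, so the sketch below only shows the standard Grad-CAM computation they build on, with a comment marking the step where the plain gradient would be replaced by an expectation of difference-from-reference gradients. The choice of ResNet-18 and of the target layer are assumptions.

```python
# Hedged sketch of the standard Grad-CAM computation that the thesis's
# Expected Grad-CAM variants build on; the Expected Grad-CAM formulations
# themselves are not reproduced here.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
store = {}

def save_activation(module, inputs, output):
    store["act"] = output
    # capture the gradient with respect to this activation during backward
    output.register_hook(lambda grad: store.update(grad=grad))

model.layer4[-1].register_forward_hook(save_activation)   # last conv block (assumed)

x = torch.randn(1, 3, 224, 224)            # stand-in image
logits = model(x)
logits[0, logits.argmax()].backward()      # gradient of the top class score

# Grad-CAM: channel weights are the spatially averaged gradients. Expected
# Grad-CAM would instead average gradients of interpolations between the input
# and sampled references (the Expected Gradients idea) rather than this plain gradient.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * store["act"]).sum(dim=1)).squeeze(0)
cam = cam / (cam.max() + 1e-8)
print(cam.shape)   # coarse 7x7 saliency map for the 224x224 input
```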
107

Is eXplainable AI suitable as a hypotheses generating tool for medical research? Comparing basic pathology annotation with heat maps to find out

Adlersson, Albert January 2023
Hypothesis testing has long been a formal and standardized process. Hypothesis generation, on the other hand, remains largely informal. This thesis assesses whether eXplainable AI (XAI) can aid in the standardization of hypothesis generation through its use as a hypothesis-generating tool for medical research. We produce XAI heat maps for a Convolutional Neural Network (CNN) trained to classify Microsatellite Instability (MSI) in colon and gastric cancer with four different XAI methods: Guided Backpropagation, VarGrad, Grad-CAM, and Sobol Attribution. We then compare these heat maps with pathology annotations in order to look for differences to turn into new hypotheses. Our CNN successfully generates non-random XAI heat maps while achieving a validation accuracy of 85% and a validation AUC of 93%, compared to the AUC of 87% achieved by others. Our results indicate that Guided Backpropagation and VarGrad are better at explaining high-level image features, whereas Grad-CAM and Sobol Attribution are better at explaining low-level ones, which makes the two groups of XAI methods good complements to each other. Images of Microsatellite Instability (MSI) with high differentiation are more difficult to analyse regardless of which XAI method is used, probably because they exhibit less regularity. Despite this drawback, our assessment is that XAI can be used as a useful hypothesis-generating tool for research in medicine. Our results indicate that our CNN utilizes the same features as our basic pathology annotations when classifying MSI, with some features of basic pathology missing; from these differences we were successfully able to generate new hypotheses.
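Below is a hedged sketch of the kind of comparison the abstract describes: thresholding an XAI heat map and measuring its overlap with a binary pathology-annotation mask. The arrays, threshold quantile, and metrics are illustrative assumptions, not the thesis's actual procedure.

```python
# Hedged sketch of comparing an XAI heat map against a pathology annotation:
# threshold the heat map and measure overlap with the annotated region.
# Arrays are synthetic; the thesis's MSI slides and annotations are not used.
import numpy as np

rng = np.random.default_rng(1)
heatmap = rng.random((224, 224))          # stand-in attribution map in [0, 1]
annotation = np.zeros((224, 224), bool)
annotation[60:160, 80:180] = True         # stand-in pathologist-marked region

def overlap_stats(heatmap, annotation, quantile=0.9):
    """IoU and precision of the top-attribution pixels vs. the annotation."""
    hot = heatmap >= np.quantile(heatmap, quantile)
    inter = np.logical_and(hot, annotation).sum()
    union = np.logical_or(hot, annotation).sum()
    return {"iou": inter / union, "precision": inter / hot.sum()}

print(overlap_stats(heatmap, annotation))
# Regions the model attends to but the annotation misses (hot & ~annotation)
# are the candidate sources of new hypotheses the abstract refers to.
```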
108

[en] AUTONOMOUS SYSTEMS EXPLAINABLE THROUGH DATA PROVENANCE / [pt] SISTEMAS AUTÔNOMOS EXPLICÁVEIS POR MEIO DE PROVENIÊNCIA DE DADOS

TASSIO FERENZINI MARTINS SIRQUEIRA 25 June 2020
[pt] Determinar a proveniência dos dados, isto é, o processo que levou a esses dados, é vital em muitas áreas, especialmente quando é essencial que os resultados ou ações sejam confiáveis. Com o crescente número de aplicações baseadas em inteligência artificial, criou-se a necessidade de torná-las capazes de explicar seu comportamento e responder às suas decisões. Isso é um desafio, especialmente se as aplicações forem distribuídas e compostas de vários agentes autônomos, formando um Sistema Multiagente (SMA). Uma maneira fundamental de tornar tais sistemas explicáveis é rastrear o comportamento do agente, isto é, registrar a origem de suas ações e raciocínios, como em uma depuração onisciente. Embora a ideia de proveniência já tenha sido explorada em alguns contextos, ela não foi extensivamente explorada no contexto de SMA, deixando muitas questões para serem compreendidas e abordadas. Nosso objetivo neste trabalho é justificar a importância da proveniência dos dados para SMA, discutindo quais perguntas podem ser respondidas em relação ao comportamento do SMA, utilizando a proveniência e ilustrando, através de cenários de aplicação, os benefícios que a proveniência proporciona para responder a essas questões. Este estudo envolve a criação de um framework de software, chamado FProvW3C, que suporta a coleta e armazenamento da proveniência dos dados produzidos pelo SMA, que foi integrado a plataforma BDI4JADE (41), formando o que denominamos de Prov-BDI4JADE. Por meio desta plataforma, utilizando exemplos de sistemas autônomos, demostramos com rigor que, o uso da proveniência de dados em SMA é uma solução sólida, para tornar transparente o processo de raciocínio e ação do agente. / [en] Determining data provenance, that is, the process that led to the data, is vital in many areas, especially when it is essential that results or actions be reliable. With the increasing number of applications based on artificial intelligence, the need has arisen to make them capable of explaining their behavior and answering for their decisions. This is a challenge, especially if the applications are distributed and composed of multiple autonomous agents, forming a Multiagent System (MAS). A key way of making such systems explicable is to track the agent's behavior, that is, to record the source of its actions and reasoning, as in omniscient debugging. Although the idea of provenance has already been explored in some contexts, it has not been extensively explored in the context of MAS, leaving many questions to be understood and addressed. Our objective in this work is to justify the importance of data provenance to MAS, discussing which questions about the behavior of a MAS can be answered using provenance, and illustrating, through application scenarios, the benefits that provenance provides in answering these questions. This study involves the creation of a software framework, called FProvW3C, which supports collecting and storing the provenance of the data produced by the MAS, and which was integrated with the BDI4JADE (41) platform, forming what we call Prov-BDI4JADE. Through this platform, using examples of autonomous systems, we rigorously demonstrate that the use of data provenance in MAS is a solid solution for making the agent's reasoning and action process transparent.
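To illustrate the kind of provenance record such a framework collects (FProvW3C itself is not reproduced here), the sketch below uses the W3C PROV data model via the Python prov package to link an agent's action back to the plan and belief that produced it. The agent, plan, and belief names are invented for illustration.

```python
# Hedged sketch of recording agent behaviour with the W3C PROV data model,
# the kind of provenance a framework like FProvW3C collects. Uses the Python
# `prov` package; agent, activity, and entity names are illustrative only.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("mas", "http://example.org/mas/")

agent = doc.agent("mas:deliveryAgent")            # a BDI agent (assumed name)
belief = doc.entity("mas:belief-battery-low")     # belief that triggered the plan
plan = doc.activity("mas:plan-recharge")          # plan execution
action = doc.entity("mas:action-goto-charger")    # resulting action

doc.wasAssociatedWith(plan, agent)   # the agent carried out the plan
doc.used(plan, belief)               # the plan was selected because of the belief
doc.wasGeneratedBy(action, plan)     # the action originated from that plan

# PROV-N serialization; "why did the agent go to the charger?" can then be
# answered by following used/wasGeneratedBy edges back to the belief.
print(doc.get_provn())
```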
109

[en] EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MEDICAL IMAGE CLASSIFIERS / [pt] INTELIGÊNCIA ARTIFICIAL EXPLICÁVEL PARA CLASSIFICADORES DE IMAGENS MÉDICAS

IAM PALATNIK DE SOUSA 02 July 2021
[pt] A inteligência artificial tem gerado resultados promissores na área médica, especialmente na última década. Contudo, os modelos de melhor desempenho apresentam opacidade em relação ao seu funcionamento interno. Nesta tese, são apresentadas novas metodologias e abordagens para o desenvolvimento de classificadores explicáveis de imagens médicas. Dois principais métodos, Squaregrid e EvEx, foram desenvolvidos. O primeiro consiste em uma geração mais grosseira, porém rápida, de heatmaps explicativos via segmentações em grades quadrados, enquanto o segundo baseia-se em otimização multi-objetivo, baseada em computação evolucionária, visando ao ajuste fino de parâmetros de segmentação. Notavelmente, ambas as técnicas são agnósticas ao modelo, o que facilita sua utilização para qualquer tipo de classificador de imagens. O potencial destas abordagens foi avaliado em três estudos de caso de classificações médicas: metástases em linfonodos, malária e COVID-19. Para alguns destes casos foram analisados modelos de classificação existentes, publicamente disponíveis. Por outro lado, em outros estudos de caso, novos modelos tiveram que ser treinados. No caso do estudo de COVID-19, a ResNet50 treinada levou a F-scores acima de 0,9 para o conjunto de teste de uma competição para classificação de coronavirus, levando ao terceiro lugar geral. Adicionalmente, técnicas de inteligência artificial já existentes como LIME e GradCAM, bem como Vanilla, Smooth e Integrated Gradients também foram usadas para gerar heatmaps e possibilitar comparações. Os resultados aqui descritos ajudaram a demonstrar e preencher parcialmente lacunas associadas à integração das áreas de inteligência artificial explicável e medicina. Eles também ajudaram a demonstrar que as diferentes abordagens de inteligência artificial explicável podem gerar heatmaps que focam em características diferentes da imagem. Isso por sua vez demonstra a importância de combinar abordagens para criar um panorama mais completo sobre os modelos classificadores, bem como extrair informações sobre o que estes aprendem. / [en] Artificial intelligence has generated promising results for the medical area, especially in the last decade. However, the best-performing models are opaque with respect to their internal workings. In this thesis, new methodologies and approaches are presented for the development of explainable classifiers of medical images. Two main methods, Squaregrid and EvEx, were developed. The first consists of a coarser but fast generation of explanatory heatmaps via segmentation into square grids, while the second is based on evolutionary multi-objective optimization aimed at the fine-tuning of segmentation parameters. Notably, both techniques are model-agnostic, which facilitates their use with any kind of image classifier. The potential of these approaches was demonstrated in three case studies of medical classification: lymph node metastases, malaria, and COVID-19. In some of these cases, already existing, publicly available classifier models were analyzed, while in others new models were trained. For the COVID-19 study, the trained ResNet50 achieved F-scores above 0.9 on the test set of a coronavirus classification competition, placing third overall. Additionally, existing explainable artificial intelligence techniques, such as LIME and GradCAM, as well as Vanilla, Smooth, and Integrated Gradients, were also used to generate heatmaps and enable comparisons.
The results described here help to demonstrate and partially fill the gaps in integrating the areas of explainable artificial intelligence and medicine. They also helped to demonstrate that different explainable artificial intelligence approaches can generate heatmaps that focus on different characteristics of the image. This, in turn, shows the importance of combining approaches to create a more complete overview of classifier models, as well as to extract information about what they learn from the data.
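The abstract describes Squaregrid only as heat-map generation via square-grid segmentation, so the sketch below shows one plausible reading, assuming an occlusion-style procedure: mask each grid cell and record the drop in the classifier's score. The stub classifier and grid size are assumptions, not the thesis's implementation.

```python
# Hedged sketch of a square-grid heat map in the spirit of Squaregrid: mask
# each grid cell and record how much the classifier's score drops. The exact
# Squaregrid procedure is not reproduced; the classifier here is a stub.
import numpy as np

def classify(image):
    # Stand-in "classifier": score is the mean intensity of a fixed region,
    # so cells covering that region should light up in the heat map.
    return image[80:160, 80:160].mean()

def squaregrid_heatmap(image, cells=8, fill=0.0):
    h, w = image.shape
    ch, cw = h // cells, w // cells
    base = classify(image)
    heat = np.zeros((cells, cells))
    for i in range(cells):
        for j in range(cells):
            masked = image.copy()
            masked[i*ch:(i+1)*ch, j*cw:(j+1)*cw] = fill   # occlude one cell
            heat[i, j] = base - classify(masked)          # score drop = importance
    return heat

image = np.random.default_rng(2).random((224, 224))
print(np.round(squaregrid_heatmap(image), 3))
```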
110

Influences of simulated XAI explanations on players of economic games : A pilot study

Tomasson Izquierdo, Hannibal January 2023
This pilot study has the twofold purpose of analyzing the effect of XAI explanations on the mental models of economic game participants and testing the feasibility of the methodology devised for it. To achieve this, we compared the contribution behavior and mental models of 30 participants playing a public good game. Playing in pairs, 10 participants played in each of three different conditions: i) a condition with a decision support system providing suggestions for contributions; ii) a condition with the decision support system and explanations for its suggestions; iii) a control condition. Upon finishing the game, all participants completed a Retrospection Task questionnaire to elicit their mental models of the game's goals and their partner's behavior. Our results showed differences in contribution behavior and mental models between the three conditions, with the explanation condition consistently presenting higher contributions and participants reporting prosocial attitudes. Through these findings, this pilot study demonstrates the feasibility of the methodology and argues for the need for a larger-scale study to further investigate the effect of XAI explanations on users' mental models. / Denna pilotstudie har det dubbla syftet att analysera effekten av XAI-förklaringar på de mentala representationerna hos deltagarna i ekonomiska spel och att testa genomförbarheten av den metod som utarbetats för studien. För att uppnå detta jämförde vi bidragsbeteendet och de mentala representationerna hos 30 deltagare som spelade ett spel om kollektiva nyttigheter. Totalt 10 deltagare spelade parvis i var och en av tre olika behandlingar: i) en med ett beslutsstödsystem som ger förslag på bidrag, ii) en med beslutsstödsystemet och förklaringar till dess förslag, iii) en kontrollgrupp. Efter att ha avslutat spelet fyllde alla deltagare i ett frågeformulär om en retrospektionsuppgift för att utvärdera deras mentala representationer om spelets mål och partnerns beteende. Våra resultat visade skillnader i deltagarnas bidragsbeteende och mentala modeller mellan de tre behandlingarna, där behandlingen med förklaringar presenterade genomgående högre bidrag och deltagare som rapporterade prosociala attityder. Genom dessa resultat visar den här pilotstudien att metoden är genomförbar och argumenterar för behovet av en studie i större skala för att ytterligare undersöka effekten av XAI-förklaringar på användarnas mentala modeller.
