1 |
[en] OCEANUI: INTERFACE FOR COUNTERFACTUAL EXPLANATIONS GENERATION / [pt] OCEANUI: INTERFACE PARA GERAÇÃO DE EXPLICAÇÕES CONTRAFACTUAIS. Moises Henrique Pereira, 22 August 2022
[pt] Machine learning (ML) algorithms are currently remarkably present in our daily lives, from movie and music recommendation systems to high-stakes areas such as health care, criminal justice, and finance, supporting decision making. But the complexity of building these ML algorithms is also increasing, while their interpretability is decreasing. Many algorithms and their decisions cannot be easily explained by developers or users, and the algorithms are not self-explanatory either. As a result, errors and biases can end up hidden, which can profoundly impact people's lives. Because of this, initiatives concerning transparency, explainability, and interpretability are becoming increasingly relevant, as we can see in the new regulation on the protection and processing of personal data (the General Data Protection Regulation, GDPR), approved in 2016 for the European Union, and also in the Lei Geral de Proteção de Dados (LGPD), approved in 2020 in Brazil. Beyond laws and regulations on the subject, several authors consider the use of inherently interpretable algorithms necessary; others show alternatives for explaining black-box algorithms using local explanations, taking the neighborhood of a given point and then analyzing the decision boundary of that region; while still others study the use of counterfactual explanations. Following this counterfactual line, we propose to develop a user interface for the Optimal Counterfactual Explanations in Tree Ensembles (OCEAN) system, called OceanUI, through which the user generates plausible counterfactual explanations using Mixed Integer Programming and Isolation Forest. The purpose of this interface is to facilitate counterfactual generation and to allow the user to obtain a personalized and more individually applicable counterfactual, through the use of constraints and interactive charts. / [en] Machine learning algorithms (ML) are becoming incredibly present in
our daily lives, from movie and song recommendation systems to high-risk areas like health care, criminal justice, finance, and so on, supporting decision
making. But the complexity of those algorithms is increasing while their interpretability is decreasing. Many algorithms and their decisions cannot be
easily explained by either developers or users, and the algorithms are also not
self-explanatory. As a result, mistakes and biases can end up being hidden,
which can profoundly impact people's lives. So, initiatives concerning transparency, explainability, and interpretability are becoming increasingly
relevant, as we can see in the General Data Protection Regulation (GDPR),
approved in 2016 for the European Union, and in the General Data Protection
Law (LGPD) approved in 2020 in Brazil. In addition to laws and regulations,
several authors consider the use of inherently interpretable algorithms necessary; others show alternatives to explain black-box algorithms using local
explanations, taking the neighborhood of a given point and then analyzing
the decision boundary in that region; while yet others study the use of counterfactual explanations. Following the path of counterfactuals, we propose to
develop a user interface for the system Optimal Counterfactual Explanations
in Tree Ensembles (OCEAN), which we call OceanUI, through which the user
generates plausible counterfactual explanations using Mixed Integer Programming and Isolation Forest. The purpose of this user interface is to facilitate the
counterfactual generation and to allow the user to obtain a personalized and more individually applicable counterfactual, by means of restrictions and interactive
graphics.
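To make the pipeline described above concrete, the following is a minimal Python sketch of a plausibility-constrained counterfactual search of the kind OceanUI exposes. It is not the OCEAN formulation: the Mixed Integer Program is replaced by a naive random search, and the dataset, model, and the "frozen feature" constraint are illustrative assumptions; only the roles of the tree-ensemble classifier, the Isolation Forest plausibility filter, and a user-imposed constraint follow the abstract.

```python
# Minimal sketch only. OCEAN itself finds the optimal counterfactual by solving a
# Mixed Integer Program over the tree ensemble; here a naive random search stands
# in for the solver so that the roles of the classifier, the Isolation Forest
# plausibility filter, and a user-imposed constraint are visible.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
iso = IsolationForest(random_state=0).fit(X)            # plausibility model

x0 = X[0]                                               # factual instance to explain
target = 1 - clf.predict(x0.reshape(1, -1))[0]          # desired (flipped) class
frozen = [3]                                            # user constraint: feature 3 must not change

best, best_dist = None, np.inf
for _ in range(20000):                                  # stand-in for the MIP solver
    cand = x0 + rng.normal(scale=0.5, size=x0.shape)
    cand[frozen] = x0[frozen]
    if clf.predict(cand.reshape(1, -1))[0] != target:
        continue                                        # class did not flip
    if iso.predict(cand.reshape(1, -1))[0] != 1:
        continue                                        # rejected as implausible
    dist = np.abs(cand - x0).sum()                      # L1 proximity to the original
    if dist < best_dist:
        best, best_dist = cand, dist

print("counterfactual:", best, "L1 distance:", best_dist)
```

Unlike this random search, the MIP formulation used by OCEAN returns the provably closest counterfactual under the chosen distance and constraints, which is the point of the exact approach.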
|
2 |
User Preference-Based Evaluation of Counterfactual Explanation Methods. Akram, Muhammad Zain, January 2023
Explainable AI (XAI) has grown into an important field over the years. As more complex AI systems are utilised in decision-making situations, the need for explanations of such systems also increases, in order to ensure transparency and stakeholder trust. This study focuses on a specific type of explanation method, namely counterfactual explanations. Counterfactual explanations provide feedback that outlines what changes should be made to the input to reach a different outcome. This study expands on a previous dissertation in which a proof-of-concept tool was created for comparing several counterfactual explanation methods. This thesis investigates the properties of counterfactual explanation methods along with appropriate metrics for measuring them. The identified metrics are then used to evaluate and compare the desirable properties of the counterfactual approaches. The proof-of-concept tool is extended with a properties-metrics mapping module, and a user preference-based system is developed, allowing users to evaluate different counterfactual approaches depending on their preferences. This addition to the proof-of-concept tool is a critical step in providing field researchers with a standardised benchmarking tool.
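As an illustration of the kind of evaluation such a tool supports, the sketch below scores counterfactuals with three commonly used metrics (validity, L1 proximity, sparsity) and ranks methods by a user-weighted sum. The metric set, the weighting scheme, and all names are assumptions for illustration, not the thesis's exact properties-metrics mapping.

```python
# Illustrative sketch of a preference-based comparison: each method's counterfactual
# is scored on validity, L1 proximity, and sparsity, and the methods are ranked by a
# user-weighted sum. Metrics and weighting are assumptions, not the thesis's mapping.
import numpy as np

def evaluate(x, x_cf, predict, target):
    return {
        "validity": float(predict(x_cf) == target),     # did the outcome change?
        "proximity": float(np.abs(x_cf - x).sum()),     # L1 distance to the original input
        "sparsity": int((x_cf != x).sum()),             # number of features changed
    }

def preference_score(metrics, weights):
    # Higher validity is better; lower proximity and sparsity are better.
    return (weights["validity"] * metrics["validity"]
            - weights["proximity"] * metrics["proximity"]
            - weights["sparsity"] * metrics["sparsity"])

# Toy usage: two hypothetical methods producing counterfactuals for the same input.
predict = lambda v: int(v.sum() > 0)
x = np.array([-1.0, -0.5, 0.2])
candidates = {"method_A": np.array([0.6, -0.5, 0.2]),
              "method_B": np.array([0.9, 0.4, 0.2])}
weights = {"validity": 1.0, "proximity": 0.3, "sparsity": 0.1}   # user preferences

ranking = sorted(((preference_score(evaluate(x, cf, predict, target=1), weights), name)
                  for name, cf in candidates.items()), reverse=True)
print(ranking)   # best-scoring method first
```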
|
3 |
GAN-Based Counterfactual Explanation on Images. Wang, Ning, January 2023
Machine learning models are widely used across industries. However, their black-box nature limits users' understanding of and trust in their inner workings, so the interpretability of the model becomes critical. For example, when a person's loan application is rejected, they may want to understand the reason for the rejection and seek to improve their personal information to increase their chances of approval. Counterfactual explanation is a method for explaining how a different outcome could have been reached for a specific event or situation: it modifies or manipulates the original data to generate counterfactual instances that lead the model to a different decision. This thesis proposes a counterfactual explanation method based on Generative Adversarial Networks (GANs) and applies it to image recognition, aiming to change the model's prediction by modifying the feature information of the input image. Traditional machine learning approaches to this task are computationally expensive to train and face bottlenecks in practical applications. The proposed model is built on a Deep Convolutional Generative Adversarial Network (DCGAN): instead of DCGAN's usual random-noise input, the generator receives an image and produces a perturbation, which is combined with the original image to form a counterfactual sample. Experimental results show that the GAN-generated counterfactual samples outperform traditional machine learning baselines in generation efficiency and accuracy, verifying the effectiveness of the proposed method.
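A simplified sketch of the perturbation idea described above, assuming PyTorch: the generator receives the image instead of DCGAN's usual random noise and outputs a bounded perturbation, and the perturbed image is pushed toward a target class of a fixed classifier. The GAN discriminator, the full DCGAN architecture, and all hyperparameters are omitted or assumed for brevity; this is not the thesis's implementation.

```python
# Simplified sketch: a generator maps an image to a bounded perturbation; the
# perturbed image is optimised toward a target class of a fixed classifier, with an
# L1 penalty keeping the perturbation small. Discriminator and DCGAN details omitted.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(                        # toy conv stack for 1x28x28 images
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),   # perturbation bounded in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

def counterfactual_step(gen, clf, x, target, opt, lam=0.1):
    opt.zero_grad()
    delta = gen(x)                                       # perturbation from the generator
    x_cf = torch.clamp(x + delta, 0.0, 1.0)              # counterfactual candidate
    loss = nn.functional.cross_entropy(clf(x_cf), target) + lam * delta.abs().mean()
    loss.backward()                                      # push x_cf toward target class, keep delta small
    opt.step()
    return x_cf.detach(), loss.item()

# Toy usage with a stand-in linear classifier and random "images".
clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
gen = PerturbationGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
x = torch.rand(8, 1, 28, 28)
target = torch.ones(8, dtype=torch.long)                 # desired counterfactual class
x_cf, loss = counterfactual_step(gen, clf, x, target, opt)
print(loss)
```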
|
4 |
Automated Tactile Sensing for Quality Control of Locks Using Machine Learning. Andersson, Tim, January 2024
This thesis delves into the use of Artificial Intelligence (AI) for quality control in manufacturing systems, with a particular focus on anomaly detection through the analysis of torque measurements in rotating mechanical systems. The research specifically examines the effectiveness of torque measurements in quality control of locks, challenging the traditional method that relies on human tactile sense for detecting mechanical anomalies. This conventional approach, while widely used, has been found to yield inconsistent results and places physical strain on operators. A key aspect of this study involves conducting experiments on locks using torque measurements to identify mechanical anomalies. This method represents a shift from the subjective and physically demanding practice of manually testing each lock. The research aims to demonstrate that an automated, AI-driven approach can offer more consistent and reliable results, thereby improving overall product quality. The development of a machine learning model for this purpose starts with the collection of training data, a process that can be costly and disruptive to normal workflow. Therefore, this thesis also investigates strategies for predicting and minimizing the sample size used for training. Additionally, it addresses the critical need for trustworthiness in AI systems used for final quality control. The research explores how to utilize machine learning models that are not only effective in detecting anomalies but also offer a level of interpretability, avoiding the pitfalls of black-box AI models. Overall, this thesis contributes to advancing automated quality control by exploring state-of-the-art machine learning algorithms for mechanical fault detection, focusing on sample size prediction and minimization as well as model interpretability. To the best of the author's knowledge, it is the first study that evaluates an AI-driven solution for quality control of mechanical locks, marking an innovation in the field. / This thesis delves into the use of Artificial Intelligence (AI) for quality control in manufacturing systems, with a particular focus on anomaly detection through the analysis of torque measurements in rotating mechanical systems. The research specifically examines the effectiveness of torque measurements for quality control of locks, challenging the traditional method that relies on the human tactile sense to detect mechanical anomalies. This conventional method, although widely used, has been shown to give inconsistent results and places physical strain on the operators. A key aspect of this study involves conducting experiments on locks using torque measurements to identify mechanical anomalies. This method represents a shift away from the subjective and physically demanding practice of manually testing each lock. The research aims to demonstrate that an automated, AI-driven method can offer more consistent and reliable results, thereby improving overall product quality. The development of a machine learning model for this purpose begins with the collection of training data, a process that can be costly and disruptive to the normal workflow. Therefore, this thesis also investigates strategies for predicting and minimizing the amount of data used for training. In addition, the critical need for trustworthiness in AI systems used for final quality control is addressed. The research explores how to use machine learning models that are not only effective at detecting anomalies but also offer a level of interpretability, avoiding the pitfalls of black-box AI models. Overall, this thesis contributes to advancing automated quality control by exploring state-of-the-art machine learning algorithms for mechanical fault detection, focusing on predicting and minimizing the amount of training data as well as the interpretability of the model's decisions. This thesis constitutes the first attempt to evaluate an AI-driven strategy for quality control of mechanical locks, a novel contribution to the field.
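To illustrate what an automated torque-based check could look like, the sketch below extracts a few human-readable features from each torque curve and flags outliers with an Isolation Forest. The synthetic data, the feature set, and the model choice are assumptions standing in for the thesis's actual pipeline; the point is the overall shape of an automated check, not its details.

```python
# Illustrative sketch only: summary features are extracted from each torque-vs-angle
# curve and an Isolation Forest flags anomalous locks. Data, features, and model
# choice are assumptions, not the thesis's pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

def torque_features(curve):
    # Simple summary statistics keep the decision inspectable by an operator.
    return np.array([curve.mean(), curve.max(), curve.std(), np.abs(np.diff(curve)).max()])

# Synthetic torque curves: smooth cycles for good locks, a spike for a faulty one.
good = [np.sin(np.linspace(0, 2 * np.pi, 200)) + rng.normal(0, 0.02, 200) for _ in range(100)]
faulty = np.sin(np.linspace(0, 2 * np.pi, 200)) + rng.normal(0, 0.02, 200)
faulty[120:125] += 1.5                                   # a mechanical snag appears as a torque spike

X_train = np.array([torque_features(c) for c in good])
model = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

print(model.predict(torque_features(faulty).reshape(1, -1)))   # -1 flags the faulty lock
```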
|