151 |
[en] OCEANUI: INTERFACE FOR COUNTERFACTUAL EXPLANATIONS GENERATION / [pt] OCEANUI: INTERFACE PARA GERAÇÃO DE EXPLICAÇÕES CONTRAFACTUAIS / Pereira, Moises Henrique, 22 August 2022 (has links)
[en] Machine learning algorithms (ML) are becoming incredibly present in
our daily lives, from movie and song recommendation systems to high-risk areas like health care, criminal justice, and finance, supporting decision making. But the complexity of these algorithms is increasing while their interpretability is decreasing. Many algorithms and their decisions cannot be easily explained by either developers or users, and the algorithms are not self-explanatory. As a result, mistakes and biases can end up hidden, which can profoundly impact people's lives. Initiatives concerning transparency, explainability, and interpretability are therefore becoming increasingly relevant, as we can see in the General Data Protection Regulation (GDPR), approved in 2016 for the European Union, and in the General Data Protection Law (LGPD), approved in 2020 in Brazil. Beyond laws and regulations, several authors consider the use of inherently interpretable algorithms necessary; others show alternatives for explaining black-box algorithms with local explanations, taking the neighborhood of a given point and analyzing the decision boundary in that region; while yet others study counterfactual explanations. Following the path of counterfactuals, we propose a user interface for the system Optimal Counterfactual Explanations in Tree Ensembles (OCEAN), which we call OceanUI, through which the user generates plausible counterfactual explanations using Mixed Integer Programming and Isolation Forest. The purpose of this interface is to facilitate counterfactual generation and to allow the user to obtain a personalized, more individually applicable counterfactual by means of restrictions and interactive graphics.
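The OCEAN workflow this abstract describes can be sketched in a hedged way. The snippet below is an illustrative stand-in, not the OceanUI implementation: OCEAN's Mixed-Integer Programming search over tree ensembles is replaced by a naive random search, with scikit-learn's `IsolationForest` used only as a plausibility filter; the data, model, and `counterfactual` helper are all invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)

# Toy "loan" data: two features; class 1 (approved) when feature 0 exceeds feature 1.
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)  # tree ensemble
iso = IsolationForest(random_state=0).fit(X)                             # plausibility model

def counterfactual(x, target, n_samples=5000, scale=2.0):
    """Closest randomly sampled point that the ensemble classifies as `target`
    and the Isolation Forest marks as an inlier (i.e. a plausible instance)."""
    cand = x + rng.normal(scale=scale, size=(n_samples, x.size))
    ok = (clf.predict(cand) == target) & (iso.predict(cand) == 1)
    cand = cand[ok]
    return None if len(cand) == 0 else cand[np.argmin(np.linalg.norm(cand - x, axis=1))]

x = np.array([-1.0, 1.0])            # a rejected instance (class 0)
cf = counterfactual(x, target=1)     # what minimal plausible change flips the decision?
```

OCEAN itself solves this search exactly with Mixed-Integer Programming; the random search here only conveys the shape of the problem: flip the prediction while staying close to the original point and inside the data distribution.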
|
152 |
Characterization and Explanation of the Destination Choice Patterns of Canadian Male Labour Force Entrants 1971-76 / Moffett, Patricia, 04 1900 (has links)
Since the classic study of migration and metropolitan growth by Lowry (1966), migration researchers have assumed a two-stage process wherein the decision to migrate is followed by the destination choice decision. Such an approach is employed here to provide a characterization and explanation of the destination choice patterns of male labour force entrants.

Specifically, a nonlinear migration model developed by Liaw and Bartels (1982) is applied to Canadian migration data for the 1971-76 period. The inter-metropolitan migration patterns of the male labour force entrants are found to be well explained by six explanatory variables: population size, logarithmic distance, housing growth, employment increase, cultural barriers, and "strong ties". The last two variables are dummy variables derived from the characterization of the destination choice patterns through the application of entropies. The study examines factors involved in the destination choice decision and concludes with suggestions for future investigation. / Thesis / Candidate in Philosophy
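The "application of entropies" mentioned in this abstract can be illustrated with a small, hypothetical sketch: the Shannon entropy of an origin's destination-choice distribution separates concentrated from diffuse choice patterns, which is one plausible reading of how such characterizations are built. The flow counts below are invented, not taken from the thesis.

```python
import numpy as np

def destination_entropy(flows):
    """Shannon entropy of one origin's destination-choice distribution.
    Low entropy: migrants concentrate on a few destinations; high entropy
    (maximum log(k) for k destinations): choices are spread evenly."""
    p = np.asarray(flows, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum())

# Hypothetical out-migration counts from one origin to four metropolitan areas:
concentrated = destination_entropy([900, 50, 30, 20])   # most migrants pick one metro
diffuse = destination_entropy([250, 250, 250, 250])     # evenly spread choices
```

Thresholding such entropies per origin is one simple way a dummy variable like "strong ties" could be derived from observed flow tables.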
|
153 |
User Preference-Based Evaluation of Counterfactual Explanation Methods / Akram, Muhammad Zain, January 2023 (has links)
Explainable AI (XAI) has grown into an important field over the years. As more complex AI systems are utilised in decision-making situations, the need for explanations of such systems is also increasing, to ensure transparency and stakeholder trust. This study focuses on a specific type of explanation method, namely counterfactual explanations, which provide feedback outlining what changes should be made to the input to reach a different outcome. This study expands on a previous dissertation in which a proof-of-concept tool was created for comparing several counterfactual explanation methods. This thesis investigates the properties of counterfactual explanation methods along with appropriate metrics for each. The identified metrics are then used to evaluate and compare the desirable properties of the counterfactual approaches. The proof-of-concept tool is extended with a properties-metrics mapping module, and a user preference-based system is developed, allowing users to evaluate different counterfactual approaches depending on their preferences. This addition to the proof-of-concept tool is a critical step toward providing field researchers with a standardised benchmarking tool.
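A minimal sketch of the user preference-based ranking described above might look as follows. The method names (A, B, C), metric names, and scores are all invented for illustration; the thesis's actual property-metric mapping is not reproduced here.

```python
import numpy as np

# Hypothetical per-method scores (higher is better) on four common
# counterfactual quality metrics; none of these numbers are from the thesis.
metrics = ["proximity", "sparsity", "plausibility", "runtime"]
scores = {
    "A": np.array([0.7, 0.9, 0.6, 0.8]),
    "B": np.array([0.9, 0.6, 0.9, 0.4]),
    "C": np.array([0.8, 0.5, 0.5, 0.9]),
}

def rank_methods(user_weights, scores):
    """Order methods by the weighted sum of their metric scores,
    where the weights encode one user's preferences."""
    w = np.asarray(user_weights, dtype=float)
    w = w / w.sum()                          # normalise the preference weights
    return sorted(scores, key=lambda m: -(scores[m] @ w))

# A user who values plausibility and proximity ranks method B first:
plausibility_first = rank_methods([0.3, 0.1, 0.5, 0.1], scores)
# A user who mostly cares about runtime ranks method C first:
runtime_first = rank_methods([0.1, 0.1, 0.1, 0.7], scores)
```

The point of such a scheme is that no single ranking of counterfactual methods is "correct": the same scores yield different winners under different preference weights.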
|
154 |
GAN-Based Counterfactual Explanation on Images / Wang, Ning, January 2023
Machine learning models are widely used across industries. However, the black-box nature of these models limits users' understanding of and trust in their inner workings, making interpretability critical. For example, when a person's loan application is rejected, he may want to understand the reason for the rejection and improve his personal information to increase his chances of approval. Counterfactual explanation is a method for explaining the different outcomes of a specific event or situation: it modifies or manipulates the original data to generate counterfactual instances that lead the model to a different decision. This paper proposes a counterfactual explanation method based on Generative Adversarial Networks (GAN) and applies it to image recognition. Counterfactual explanation aims to change the model's predictions by modifying the feature information of the input image. Traditional machine learning methods have apparent shortcomings in computational resources when training and face bottlenecks in practical applications. This article builds a counterfactual explanation model based on a Deep Convolutional Generative Adversarial Network (DCGAN). The original random noise input of the DCGAN is converted into an image, and the perturbation is generated by the generator in the GAN network, which is combined with the original image to generate counterfactual samples. The experimental results show that the counterfactual samples generated with the GAN outperform those from traditional machine learning models in generation efficiency and accuracy, verifying the effectiveness and novelty of the proposed method.
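The perturbation pipeline this abstract describes can be sketched structurally; nothing below is trained. The classifier is a trivial brightness threshold standing in for the black-box model, and the "generator" is an untrained noise-to-perturbation map standing in for the DCGAN generator. The sketch only shows the data flow: a generated perturbation is combined with the original image until the model's decision flips.

```python
import numpy as np

rng = np.random.default_rng(1)

def classifier(img):
    """Stand-in for the black-box model: class 1 if mean intensity > 0.5."""
    return int(img.mean() > 0.5)

def generator(z):
    """Stand-in for the DCGAN generator: maps noise to a bounded perturbation.
    A trained generator would instead produce structured, image-like changes."""
    return 0.3 * np.tanh(z)

def gan_counterfactual(img, target, n_tries=1000):
    """Sample generator perturbations until one flips the classifier."""
    for _ in range(n_tries):
        z = rng.normal(size=img.shape)                 # random noise input
        cf = np.clip(img + generator(z), 0.0, 1.0)     # combine with the original
        if classifier(cf) == target:
            return cf
    return None

img = np.full((8, 8), 0.48)          # an image the stand-in model labels class 0
cf = gan_counterfactual(img, target=1)
```

In the thesis's setting the generator is adversarially trained so perturbations both flip the model and keep the sample realistic; the random sampling here merely exhibits the perturb-combine-check loop.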
|
155 |
Conjunction monism : Humean scientific explanation explained / Magnusson, Love, January 2024
Humeans say that laws depend on their instances. Another way of saying this is that the instances explain the laws. However, laws are often used in science to help explain these same instances. If this is true, it appears as though the instances help explain themselves, which would be a serious problem for the Humeans (Miller, 2015, pp. 1314-1317). In this essay I expand on a solution proposed by Miller (2015, pp. 1328-1331) that the laws are not explained by their instances but rather grounded by a set of global facts. I develop this into a new framework in which it would be expected for the laws not to be grounded by their instances. I call this framework conjunction monism since the core idea is that conjunctions ground their conjuncts. I finish with a discussion of the compatibility of conjunction monism and Humeanism.
|
156 |
Understanding change in medicine and the biomedical sciences: Modeling change as interactions among flows with arrow diagrams / Fennimore, Todd F., 22 August 2011 (has links)
No description available.
|
157 |
Use of Intelligent Tutor Dialogues on Photographic Techniques: Explanation versus Argumentation / Cedillos, Elizabeth M., 09 December 2013 (has links)
No description available.
|
158 |
Understanding Cognition via Complexity Science / Favela, Luis H., Jr., 02 June 2015 (has links)
No description available.
|
159 |
Characteristics of Non-reductive Explanations in Complex Dynamical Systems Research / Lamb, Maurice, 05 June 2015 (has links)
No description available.
|
160 |
ECOLOGICAL MECHANISMS IN PHILOSOPHICAL FOCUS / PASLARU, VIOREL, January 2007 (has links)
No description available.
|