1

Experiencing Science in Action: The Use of Exhibition Techniques in Guided Tours to a Scientific Laboratory

Keilman, Thomas January 2004 (has links)
This paper presents a study conducted at CERN, Switzerland, investigating visitors' and tour guides' use and appreciation of the existing panels at points along the visit itinerary. The results were used to develop a set of recommendations for constructing optimal panels to support the guides' explanations.
2

Comparing Human Reasoning and Explainable AI

Helgstrand, Carl Johan, Hultin, Niklas January 2022 (has links)
Explainable AI (XAI) is a research field dedicated to finding ways to open up the black-box nature of many of today’s machine learning models. As society finds new ways of applying these models in everyday life, certain risk thresholds are crossed when human decision making is replaced with autonomous systems. How can we trust the algorithms to make sound judgements when all we provide is input and all they provide is output? XAI methods examine different data points in the machine learning process to determine which factors influenced the decision making. While these methods of post-hoc explanation may provide certain insights, previous studies of XAI have found the designs to often be biased towards the designers and to neglect the interdisciplinary perspectives needed to improve user understanding. In this thesis, we look at animal classification and at which features in animal images humans found to be important. We use a novel approach of first letting the participants create their own post-hoc explanations before asking them to evaluate real XAI explanations as well as a pre-made human explanation generated from a test group. The results show strong agreement among the participants' answers and can provide guidelines for designing XAI explanations that are more closely related to human reasoning. The data also indicates a preference for human-like explanations within the context of this study. Additionally, a potential bias was identified: participants preferred explanations that marked large portions of an image as important, even when many of those areas coincided with what the participants themselves considered unimportant. While the sample pool and data-gathering tools are limiting, the results point toward a need for further research into comparisons of human reasoning and XAI explanations and how they may affect the evaluation of, and bias towards, explanation methods.
3

Towards gradient faithfulness and beyond

Buono, Vincenzo, Åkesson, Isak January 2023 (has links)
The riveting interplay of industrialization, informatization, and exponential technological growth in recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet their intrinsic black-box nature has impeded widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods and exploring new frontiers beyond it. In the first part, we propose three novel gradient-based CAM formulations aimed at replacing and superseding traditional Grad-CAM-based methods by addressing the intricate and persistent vanishing and saturating gradient problems. Our work introduces enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized technique adapted from the well-established Expected Gradients difference-from-reference approach. Because our proposed techniques (Expected Grad-CAM, Expected Grad-CAM++ and Guided Expected Grad-CAM) operate directly on the gradient computation rather than on the recombination of the weighting factors, they are designed as a direct and seamless replacement for Grad-CAM and any later work built upon it. In the second part, we build on our prior proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our last and most advanced proposition, Hyper Expected Grad-CAM, challenges the current state and formulation of visual explanation and faithfulness and produces a new type of hybrid saliency maps that satisfy the notions of natural encoding and perceived resolution. By rethinking faithfulness and resolution, it is possible to generate saliency maps that are more detailed, localized, and less noisy, and, most importantly, that are composed only of concepts encoded by the model's layerwise understanding. Both contributions were quantitatively and qualitatively compared and assessed in a 5 to 10 times larger evaluation study on the ILSVRC2012 dataset against nine of the most recent and best-performing CAM techniques across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across every quantitative metric, yielding a 0.15 increase in insertion compared to the highest-scoring explainer, PolyCAM, for a total Ins-Del score of 0.72.
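
The abstract does not spell out the exact formulation, but the stated idea of replacing Grad-CAM's single backward pass with an Expected-Gradients-style average of gradients taken along paths from reference inputs to the image can be sketched roughly as follows. This is a minimal PyTorch sketch based only on that description; the function name expected_grad_cam, the choice to average path gradients before the usual global-average-pooling step, and all parameter names are illustrative assumptions, not the thesis's actual implementation.

# Hedged sketch: an Expected-Gradients-flavoured Grad-CAM, reconstructed from the
# abstract's description only. All names here are hypothetical illustrations.
import torch
import torch.nn.functional as F

def expected_grad_cam(model, layer, image, baselines, target_class, n_steps=16):
    """image: (1, C, H, W); baselines: (B, C, H, W) sampled from a reference distribution."""
    model.eval()
    activations = []
    handle = layer.register_forward_hook(lambda m, i, o: activations.append(o))

    grad_sum, acts_at_image = None, None
    for b in baselines:
        b = b.unsqueeze(0)
        for alpha in torch.linspace(0.0, 1.0, n_steps):
            # Point on the straight-line path from the reference sample to the input
            x = (b + alpha * (image - b)).detach().requires_grad_(True)
            activations.clear()
            score = model(x)[0, target_class]
            act = activations[0]
            grad = torch.autograd.grad(score, act)[0]   # d(score) / d(feature maps)
            grad_sum = grad if grad_sum is None else grad_sum + grad
            acts_at_image = act.detach()                # last step (alpha = 1) is the image itself

    handle.remove()
    # The average of path gradients plays the role of Grad-CAM's single backward gradient;
    # the thesis may additionally fold in the activation difference from the reference.
    expected_grad = grad_sum / (len(baselines) * n_steps)
    weights = expected_grad.mean(dim=(2, 3), keepdim=True)           # GAP, as in Grad-CAM
    cam = F.relu((weights * acts_at_image).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)

In practice, a caller might pass the last convolutional layer of a torchvision classifier as layer and a small batch of training images as baselines; plain Grad-CAM corresponds to using only the gradient at the image itself (alpha = 1) rather than averaging along the path.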
