1 |
Improving Visual Question Answering by Leveraging Depth and Adapting Explainability
Panesar, Amrita Kaur
January 2022 (has links)
To produce smooth human-robot interactions, it is important for robots to be able to answer users' questions accurately and provide a suitable explanation for why they arrive at the answer they provide. However, in the wild, the user may ask the robot questions about aspects of the scene that the robot is unfamiliar with, so it cannot be expected to answer correctly all of the time. In order to build trust in the robot and resolve failure cases where an incorrect answer is given, we propose a method that applies Grad-CAM explainability to RGB-D data. Depth is a critical component in producing more intelligent robots that can respond correctly most of the time, since some questions rely on spatial relations within the scene, for which 2D RGB data alone is insufficient. To our knowledge, this work is the first of its kind to leverage depth and an explainability module to produce an explainable Visual Question Answering (VQA) system. Furthermore, we introduce a new dataset for the task of VQA on RGB-D data, VQA-SUNRGBD. We evaluate our explainability method against Grad-CAM on RGB data and find that ours produces better visual explanations. When we compare our proposed model on RGB-D data against the baseline VQN network on RGB data alone, we show that ours outperforms it, particularly on questions relating to depth, such as questions about the proximity of objects and their positions relative to one another.
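As background for the explanation module this thesis adapts to RGB-D input: below is a minimal sketch of the core Grad-CAM computation in PyTorch. The model, layer handle, and tensor shapes are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, image, target_class):
    """Minimal Grad-CAM sketch: weight each feature map by the
    global-average-pooled gradient of the target class score,
    then sum and ReLU to get a coarse localization heatmap."""
    acts, grads = [], []
    hf = feature_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    hb = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    score = model(image)[0, target_class]   # image: (1, C_in, H, W), model returns class logits
    model.zero_grad()
    score.backward()
    hf.remove(); hb.remove()

    A, dA = acts[0], grads[0]                       # (1, C, h, w) feature maps and gradients
    weights = dA.mean(dim=(2, 3), keepdim=True)     # one weight per channel
    cam = F.relu((weights * A).sum(dim=1))          # (1, h, w) heatmap
    return cam / (cam.max() + 1e-8)                 # normalize to [0, 1]
```

For an RGB-D VQA model, the same hooks could be attached to a layer after the color and depth streams are fused, so the heatmap reflects evidence from both modalities.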
|
2 |
Towards gradient faithfulness and beyond
Buono, Vincenzo; Åkesson, Isak
January 2023 (has links)
The riveting interplay of industrialization, informatization, and exponential technological growth in recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet the intrinsic black-box nature of deep learning has been impeding its widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods and exploring new frontiers beyond it. In the first part, we theorize three novel gradient-based CAM formulations, aimed at replacing and superseding traditional Grad-CAM-based methods by tackling the intricate and persistent vanishing- and saturating-gradient problems. Our work introduces enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized and adapted technique inspired by the well-established Expected Gradients difference-from-reference approach. Because our proposed techniques (Expected Grad-CAM, Expected Grad-CAM++, and Guided Expected Grad-CAM) operate directly on the gradient computation, rather than on the recombination of the weighting factors, they are designed as a direct and seamless replacement for Grad-CAM and any later work built upon it. In the second part, we build on this proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our final and most advanced proposition, Hyper Expected Grad-CAM, challenges the current formulation of visual explanation and faithfulness and produces a new type of hybrid saliency that satisfies the notions of natural encoding and perceived resolution. By rethinking faithfulness and resolution, it is possible to generate saliencies that are more detailed, localized, and less noisy, and, most importantly, that are composed only of concepts encoded in the model's layerwise understanding. Both contributions have been quantitatively and qualitatively assessed in a 5-to-10-times-larger evaluation study on the ILSVRC2012 dataset against nine of the most recent and best-performing CAM techniques across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across every quantitative metric, yielding a 0.15 increase in insertion compared to the highest-scoring explainer, PolyCAM, totaling an Ins-Del score of 0.72.
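To make the difference-from-reference idea concrete: the sketch below shows one way an Expected Gradients-style expectation over interpolations between reference images and the input could replace the single raw gradient in the Grad-CAM channel weighting. This is a hedged illustration only; the exact Expected Grad-CAM formulation in the paper may differ (for instance, it may include a feature-space difference-from-reference term omitted here).

```python
import torch
import torch.nn.functional as F

def expected_grad_cam_weights(model, feature_layer, image, baselines,
                              target_class, n_samples=16):
    """Sketch of Expected-Gradients-style CAM weighting: average the
    feature-map gradients over random (reference, interpolation-step)
    samples instead of taking one raw gradient, which mitigates the
    vanishing/saturating-gradient problem of plain Grad-CAM."""
    acts, grads = [], []
    hf = feature_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    hb = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    for _ in range(n_samples):
        ref = baselines[torch.randint(len(baselines), (1,))]  # random reference image
        alpha = torch.rand(1)                                 # random point on the path
        x = ref + alpha * (image - ref)
        score = model(x)[0, target_class]
        model.zero_grad()
        score.backward()

    _ = model(image)              # one clean pass for the actual input's feature maps
    hf.remove(); hb.remove()

    eg = torch.stack(grads).mean(dim=0)             # expectation over sampled gradients
    weights = eg.mean(dim=(2, 3), keepdim=True)     # pooled per channel, as in Grad-CAM
    cam = F.relu((weights * acts[-1]).sum(dim=1))
    return cam / (cam.max() + 1e-8)
```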
|
3 |
Identification of Abnormal ECG Segments Using Multiple-Instance Learning
Šťávová, Karolína
January 2021 (has links)
Heart arrhythmias are a very common heart condition, and their incidence is rising. This thesis focuses on the detection of premature ventricular contractions in 12-lead ECG records by means of deep learning. The locations of these arrhythmias (the key instances) within a record are found using a technique based on Multiple-Instance Learning. The theoretical part of the thesis describes the basic electrophysiology of the heart and deep learning, with a focus on convolutional neural networks. A program was then created in the Python programming language, containing a model based on the InceptionTime architecture, which was used to classify the signals into the selected classes. Grad-CAM was implemented to locate the key instances in the ECGs. The quality of arrhythmia detection was evaluated using the F1 score, and the results are discussed at the end of the thesis.
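As an illustration of the Multiple-Instance Learning idea behind the localization: in MIL, an ECG record is a bag of segments (instances) carrying only a record-level label, and the key instances are the segments that drive the bag's prediction. The max-pooling formulation below is one common instantiation and only a sketch; the thesis's actual model (InceptionTime encoder with Grad-CAM localization) is not reproduced here, and the encoder is a placeholder.

```python
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    """Minimal multiple-instance classifier sketch: score each ECG
    segment (instance) independently, then max-pool over the bag so
    the record-level label is driven by its most abnormal segment."""
    def __init__(self, encoder, feat_dim, n_classes):
        super().__init__()
        self.encoder = encoder                 # placeholder 1D CNN, e.g. InceptionTime-style
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, bags):                   # bags: (B, n_instances, leads, T)
        B, I = bags.shape[:2]
        feats = self.encoder(bags.flatten(0, 1))         # encode every segment
        inst_logits = self.head(feats).view(B, I, -1)    # per-segment class scores
        bag_logits, key_idx = inst_logits.max(dim=1)     # pool scores to record level
        return bag_logits, key_idx             # key_idx flags candidate key instances
```

Grad-CAM can then be applied within the flagged segments to refine where in the waveform the abnormality lies.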
|
4 |
Person Re-Identification in the wild: Evaluation and application for soccer games using Deep Learning
Karapoulios, Vasileios
January 2021 (has links)
Person Re-Identification (ReID) is the task of associating images of the same person taken from different angles, by different cameras, and at different times. The task is very challenging, as a slight change in a person's appearance can make identification difficult. In this thesis, Re-Identification is applied in the context of soccer games, where players of the same team wear identical outfits and colors, making the task especially hard. To address this problem, a state-of-the-art deep-neural-network-based model named AlignedReID, and a variation of it called the Vanilla model, are explored and compared to a baseline approach based on Euclidean distance in image space. The AlignedReID model uses two feature-extractor branches, one global and one local; the Vanilla approach uses only the global branch of AlignedReID. The models are trained with a triplet loss, using two variants: Batch Hard and its soft-margin version. Each loss computation uses a triplet of images consisting of an anchor, a positive sample (an image of the same person), and a negative sample (an image of a different person). Comparing the evaluation metrics, namely rank-1, rank-5, mean Average Precision (mAP), and the Area Under Curve (AUC), and statistically comparing the mAPs, which is taken as the most important metric, the AlignedReID model with the Batch Hard loss outperforms the other models, with a mAP of 81% and rank-1 and rank-5 scores above 98%. A qualitative evaluation of the best model is also presented using Grad-CAM, to understand how the model decides which images are similar by investigating which parts of the images it focuses on when producing their embedding representations. The model is observed to focus on discriminative features such as the face, legs, and hands, beyond clothing color and outfit. The empirical results suggest that AlignedReID is usable in real-world applications; however, further research into how it generalizes across cameras, leagues, and other appearance-affecting factors would be interesting.
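For reference, the Batch Hard triplet loss mentioned above, together with its soft-margin variant, can be sketched as follows. This is the standard batch-hard formulation with Euclidean distances between embeddings, not the thesis's exact code, and the margin value is an assumed default.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings, labels, margin=0.3, soft=False):
    """Batch-hard triplet loss sketch: for each anchor, use the farthest
    same-identity sample (hardest positive) and the closest
    different-identity sample (hardest negative) within the batch."""
    dist = torch.cdist(embeddings, embeddings)           # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)    # same-identity mask

    hardest_pos = (dist * same.float()).max(dim=1).values                 # farthest positive
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values  # closest negative

    if soft:                                             # soft-margin variant
        loss = F.softplus(hardest_pos - hardest_neg)
    else:                                                # hinge with a fixed margin
        loss = torch.clamp(hardest_pos - hardest_neg + margin, min=0)
    return loss.mean()
```

Mining the hardest pairs inside each batch, rather than averaging over all triplets, is what makes this variant effective when same-team players look nearly identical.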
|
5 |
Transformer Based Object Detection and Semantic Segmentation for Autonomous Driving
Hardebro, Mikaela; Jirskog, Elin
January 2022 (has links)
The development of autonomous driving systems has been one of the most popular research areas of the 21st century. One key component of such systems is the ability to perceive and comprehend the physical world; two techniques that address this are object detection and semantic segmentation. During the last decade, CNN-based models have dominated these tasks. In 2021, however, transformer-based networks were able to outperform the existing CNN approaches, indicating a paradigm shift in the domain. This thesis explores the use of a vision transformer, specifically a Swin Transformer, in an object detection and semantic segmentation framework, and compares it to a classical CNN on road scenes. In addition, since real-time execution is crucial for autonomous driving systems, the possibility of reducing the number of parameters of the transformer-based network is investigated. The results favor the Swin Transformer over the convolutional network for both object detection and semantic segmentation. Furthermore, the analysis indicates that it is possible to reduce the computational complexity while retaining the performance.
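As a concrete starting point for this kind of setup: the snippet below loads a small Swin Transformer (Swin-T) as a feature backbone via torchvision and counts its parameters, the quantity at stake in the parameter-reduction analysis. The choice of torchvision, the pretrained weights, and the input size are assumptions for illustration; the thesis's detection and segmentation heads are not reproduced.

```python
import torch
from torchvision.models import swin_t, Swin_T_Weights

# Load Swin-T and drop the ImageNet classifier head to use it as a backbone.
backbone = swin_t(weights=Swin_T_Weights.IMAGENET1K_V1)
backbone.head = torch.nn.Identity()

# Parameter count: the quantity targeted by a parameter-reduction analysis.
n_params = sum(p.numel() for p in backbone.parameters())
print(f"Swin-T backbone parameters: {n_params / 1e6:.1f}M")

# Pooled features for one road-scene crop; a detection or segmentation
# head would instead consume the intermediate hierarchical feature maps.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = backbone(x)          # (1, 768) after the final stage and pooling
print(feats.shape)
```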
|