1

MixUp as Directional Adversarial Training: A Unifying Understanding of MixUp and Adversarial Training

Perrault Archambault, Guillaume 29 April 2020
This thesis aims to contribute to the field of neural networks by improving the performance of a state-of-the-art regularization scheme called MixUp, and by contributing to its conceptual understanding. MixUp is a data augmentation scheme in which pairs of training samples and their corresponding labels are mixed using linear coefficients. Without label mixing, MixUp becomes a more conventional scheme: input samples are moved but their original labels are retained. Because samples are preferentially moved in the direction of other classes, we refer to this method as directional adversarial training, or DAT. We show that under two mild conditions, MixUp asymptotically converges to a subset of DAT. We define untied MixUp (UMixUp), a superset of MixUp wherein training labels are mixed with linear coefficients different from those of their corresponding samples. We show that under the same mild conditions, UMixUp converges to the entire class of DAT schemes. Motivated by the understanding that UMixUp is both a generalization of MixUp and a scheme possessing adversarial-training properties, we experiment with different datasets and loss functions to show that UMixUp improves performance over MixUp. In short, we present a novel interpretation of MixUp as belonging to a class highly analogous to adversarial training, and on this basis we introduce a simple generalization that outperforms MixUp.
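To make the distinction concrete, here is a minimal PyTorch sketch of MixUp versus untied MixUp (illustrative only, not the thesis's code: the Beta(alpha, alpha) mixing distribution follows the original MixUp formulation, while the `label_map` used for the label coefficient is a hypothetical choice, since the abstract does not specify one):

```python
import torch

def mixup(x, y, alpha=1.0):
    """Standard MixUp: samples and labels share one mixing coefficient.
    x: (N, ...) inputs; y: (N, C) one-hot (or soft) float labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]   # labels mixed with the same lam
    return x_mix, y_mix

def untied_mixup(x, y, alpha=1.0, label_map=lambda lam: lam ** 0.5):
    """Untied MixUp: labels are mixed with a different coefficient than
    the samples. label_map here is a hypothetical mapping for illustration."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mu = label_map(lam)                     # label coefficient decoupled from lam
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = mu * y + (1 - mu) * y[perm]
    return x_mix, y_mix
```

Setting `label_map` to the identity recovers standard MixUp, which is the sense in which UMixUp is a superset; dropping label mixing entirely (mu = 1) recovers the DAT-style scheme described above, where inputs move but labels stay fixed.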
2

Self-Supervised Representation Learning for Early Breast Cancer Detection in Mammographic Imaging

Kristofer, Ågren January 2024
The proposed master's thesis investigates the adaptability and efficacy of self-supervised representation learning (SSL) in medical image analysis, focusing on mammographic imaging to develop robust representation learning models. The research will build upon existing studies in mammographic imaging that have utilized contrastive learning and knowledge distillation-based self-supervised methods, focusing on SimCLR (Chen et al. 2020) and SimSiam (Chen and He 2020), and will evaluate approaches to increase classification performance in a transfer learning setting. The thesis will critically evaluate and integrate recent advancements in these SSL paradigms (Chhipa 2023, chapter 2) and incorporate additional SSL approaches. The core objective is to enhance robust generalization and label efficiency in medical image analysis, contributing to the broader field of AI-driven diagnostic methodologies. The thesis aims not only to extend the current understanding of SSL in medical imaging but also to provide actionable insights that could help improve breast cancer detection methodologies, thereby contributing to the fields of medical imaging and cancer research.
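For context on the two objectives named in the abstract, here is a minimal PyTorch sketch of SimCLR's NT-Xent contrastive loss and SimSiam's stop-gradient cosine loss (an illustrative reconstruction of the published methods, not the thesis's implementation; the temperature value and tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR's NT-Xent loss. z1, z2: (N, D) projection-head outputs
    for two augmented views of the same batch of N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # a view is never its own positive
    # view i and view i + n form the positive pair, in both directions
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def simsiam_loss(p1, p2, z1, z2):
    """SimSiam's symmetric loss. p1, p2: predictor outputs; z1, z2:
    projector outputs. detach() is the stop-gradient that prevents collapse."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=1).mean()
    return 0.5 * (d(p1, z2) + d(p2, z1))
```

In the transfer learning setting the abstract describes, an encoder pretrained with either loss would subsequently be fine-tuned (or linearly probed) on labeled mammograms for the downstream classification task.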
