11

ADVERSARIAL LEARNING ON ROBUSTNESS AND GENERATIVE MODELS

Qingyi Gao (11211114) 03 August 2021 (has links)
In this dissertation, we study two important problems in modern deep learning: adversarial robustness and adversarial generative models. In the first part, we study the generalization performance of deep neural networks (DNNs) in adversarial learning. Recent studies have shown that many machine learning models are vulnerable to adversarial attacks, but much remains unknown about their generalization error in this scenario. We focus on $\ell_\infty$ adversarial attacks produced by the fast gradient sign method (FGSM). We establish a tight bound for the adversarial Rademacher complexity of DNNs based on both the spectral norms and the ranks of the weight matrices. The spectral norm and rank constraints imply that this class of networks can be realized as a subset of the class of shallow networks composed with a low-dimensional Lipschitz continuous function. This crucial observation leads to a bound that improves the dependence on network width compared to previous work and achieves depth independence. We show that the adversarial Rademacher complexity is always larger than its natural counterpart, but that the effect of adversarial perturbations can be limited under our weight normalization framework.

In the second part, we study deep generative models, which have achieved great success in many fields. It is well known that complex data usually does not populate its ambient Euclidean space but instead resides on a lower-dimensional manifold. Misspecifying the latent dimension in generative models therefore results in a mismatch of latent representations and poor generative quality. To address these problems, we propose a novel framework called Latent Wasserstein GAN (LWGAN) that fuses the auto-encoder and WGAN so that the intrinsic dimension of the data manifold can be adaptively learned by an informative latent distribution. In particular, we show that there exist an encoder network and a generator network such that the intrinsic dimension of the learned encoding distribution equals the dimension of the data manifold. Theoretically, we prove the consistency of the estimator of the intrinsic dimension of the data manifold and derive a generalization error bound for LWGAN. Comprehensive experiments verify our framework and show that LWGAN identifies the correct intrinsic dimension under several scenarios while generating high-quality synthetic data from samples of the learned latent distribution.
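For readers unfamiliar with FGSM, the attack admits a one-line update: $x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x L(x, y))$. Below is a minimal PyTorch sketch of this standard attack, not code from the dissertation; the model, loss, and budget of 0.03 are illustrative.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Craft an l-infinity FGSM adversarial example (Goodfellow et al., 2015)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # One signed-gradient step of size eps stays inside the l-inf ball.
    return (x_adv + eps * x_adv.grad.sign()).detach()
```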
12

Design of Adversarial Examples by Deep Generative Models

Čermák, Vojtěch January 2021 (has links)
In this thesis, we explore the prospects of creating adversarial examples using various generative models. We design two algorithms that create unrestricted adversarial examples by perturbing vectors of the latent representation and exploiting properties of the target classifier's decision boundary. The first algorithm uses linear interpolation combined with bisection to extract candidate samples near the decision boundary of the targeted classifier. The second algorithm applies the idea behind the FGSM algorithm to vectors of the latent representation and uses additional gradient information to obtain better candidate samples. In an empirical study on the MNIST, SVHN, and CIFAR10 datasets, we show that the candidate samples contain adversarial examples: samples that look like one class to humans but are classified as a different class by machines. Additionally, we show that standard defence techniques are vulnerable to our attacks.
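As an illustration of the first algorithm's interpolation-plus-bisection idea, here is a minimal sketch (our reconstruction from the abstract, not the thesis code); it assumes a single sample and that the decoded endpoints are classified differently.

```python
import torch

@torch.no_grad()
def boundary_bisection(generator, classifier, z0, z1, steps=20):
    """Bisect along the latent line from z0 to z1 until the decoded
    sample sits near the classifier's decision boundary."""
    y0 = classifier(generator(z0)).argmax(dim=-1)
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        z_mid = (1 - mid) * z0 + mid * z1            # linear interpolation
        if classifier(generator(z_mid)).argmax(dim=-1) == y0:
            lo = mid                                 # still the source class
        else:
            hi = mid                                 # boundary was crossed
    return (1 - hi) * z0 + hi * z1                   # candidate near the boundary
```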
13

Latent Data-Structures for Complex State Representation : A Steppingstone to Generating Synthetic 5G RAN data using Deep Learning

Häggström, Jakob January 2023 (has links)
The aim of this thesis is to investigate the feasibility of applying generative deep learning models to data related to 5G Radio Access Networks (5G RAN). Simulated data is used to develop the generative models, and the project serves as a proof of concept for further applications on real data. A Long Short-Term Memory based Variational Autoencoder (VAE), a Regularised Autoencoder (RAE) with a Gaussian mixture prior, and a Gradient Penalty Wasserstein Generative Adversarial Network (GP-WGAN) are fitted to the collected dataset. Their performance is evaluated on their ability to generate samples that resemble the distribution and characteristics of the training data, as well as on usability. The results indicate that it is indeed feasible to generate synthetic data from the current dataset: the RAE and VAE outperform the GP-WGAN in all tests, although there is no clear best performer between the RAE and VAE. Whether the current models work on real 5G RAN data is not known and is left for future work, as is improving the models with conditional generation or other types of architectures.
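The gradient penalty that gives the GP-WGAN its name (Gulrajani et al., 2017) penalizes the critic's gradient norm at random interpolates between real and generated samples. A minimal PyTorch sketch, assuming the standard penalty weight of 10:

```python
import torch

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    """WGAN-GP term: push the critic's gradient norm toward 1 at
    points interpolated between real and fake batches."""
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)
    return gp_weight * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```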
14

Modeling Structured Data with Invertible Generative Models

Lu, You 01 February 2022 (has links)
Data is complex and has a variety of structures and formats. Modeling datasets is a core problem in modern artificial intelligence. Generative models are machine learning models that model datasets with probability distributions. Deep generative models combine deep learning with probability theory so that they can model complicated datasets with flexible models. They have become one of the most popular model classes in machine learning and have been applied to many problems. Normalizing flows are a novel class of deep generative models that allow efficient exact likelihood calculation, exact latent variable inference, and sampling. They are constructed from functions whose inverse and Jacobian determinant can be computed efficiently. In this dissertation, we develop normalizing-flow-based generative models for complex datasets. In general, data can be categorized into unlabeled data, labeled data, and weakly labeled data, and we develop models for each of these three types. First, we develop Woodbury transformations, flow layers for general unsupervised normalizing flows that improve the flexibility and scalability of current flow-based models. Woodbury transformations achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity. In contrast with other operations used in state-of-the-art normalizing flows, Woodbury transformations enable (1) high-dimensional interactions, (2) efficient sampling, and (3) efficient likelihood evaluation. Other similar operations, such as 1x1 convolutions, emerging convolutions, or periodic convolutions, allow at most two of these three advantages. In our experiments on multiple image datasets, we find that Woodbury transformations allow learning higher-likelihood models than other flow architectures while still enjoying their efficiency advantages. Second, we propose conditional Glow (c-Glow), a conditional generative flow for structured output learning, an advanced variant of supervised learning with structured labels. Traditional structured prediction models try to learn a conditional likelihood, i.e., p(y|x), to capture the relationship between the structured output y and the input features x. For many models, computing the likelihood is intractable. These models are therefore hard to train, requiring surrogate objectives or variational inference to approximate the likelihood. C-Glow benefits from the ability of flow-based models to compute p(y|x) exactly and efficiently, so learning with c-Glow requires neither a surrogate objective nor inference during training. Once trained, we can directly and efficiently generate conditional samples, and we develop a sample-based prediction method that uses this advantage for efficient and effective inference. In our experiments, we test c-Glow on five different tasks. C-Glow outperforms the state-of-the-art baselines on some tasks and predicts comparable outputs on the others, showing that c-Glow is applicable to many different structured prediction problems. Third, we develop label learning flows (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows. The main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by weak signals.
We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once a model is trained, we can make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems, and experimental results show that our method outperforms many state-of-the-art alternatives. Our research shows the advantages and versatility of normalizing flows. / Doctor of Philosophy / Data is now more affordable and accessible, and at the same time, datasets are increasingly complicated. Modeling data is a key problem in modern artificial intelligence and data analysis. Deep generative models combine deep learning and probability theory, and are now a major way to model complex datasets. In this dissertation, we focus on a novel class of deep generative models: normalizing flows. They are becoming popular because of their ability to efficiently compute exact likelihoods, infer exact latent variables, and draw samples. We develop flow-based generative models for different types of data: unlabeled data, labeled data, and weakly labeled data. First, we develop Woodbury transformations for unsupervised normalizing flows, which improve the flexibility and expressiveness of flow-based models. Second, we develop conditional generative flows for an advanced supervised learning problem, structured output learning, which removes the need for approximations and surrogate objectives in traditional (deep) structured prediction models. Third, we develop label learning flows, a general framework for weakly supervised learning problems. Our research improves the performance of normalizing flows and extends their applications to many supervised and weakly supervised problems.
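The two identities behind Woodbury transformations reduce d x d inverses and determinants to k x k computations when the update has rank k << d. A generic NumPy sketch of the identities themselves (the flow layer built on them is the dissertation's contribution and is not reproduced here):

```python
import numpy as np

def woodbury_inverse(A_inv, U, C, V):
    """(A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1},
    requiring only a k x k solve when U is d x k and V is k x d."""
    S = np.linalg.inv(C) + V @ A_inv @ U          # small k x k matrix
    return A_inv - A_inv @ U @ np.linalg.solve(S, V @ A_inv)

def sylvester_logdet(A, U, V):
    """log|det(A + U V)| via Sylvester's determinant identity:
    det(A + U V) = det(A) * det(I_k + V A^{-1} U)."""
    small = np.eye(U.shape[1]) + V @ np.linalg.solve(A, U)
    return np.linalg.slogdet(A)[1] + np.linalg.slogdet(small)[1]
```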
15

A Preliminary Observation: Can One Linguistic Feature Be the Deterministic Factor for More Accurate Fake News Detection?

Chen, Yini January 2023 (has links)
This study inspected three linguistic features, namely the percentage of nouns per sentence, the percentage of verbs per sentence, and the mean dependency distance per sentence, and observed their respective influence on fake news classification accuracy. In contrast to previous studies, in which linguistic features are leveraged as a combined set, this study attempted to untangle the effective individual features from the previously proposed optimal sets. To keep the influence of each individual feature independent of the other inspected features, the other features are held constant in the experiments observing each target feature. The FEVER dataset is used, and the study incorporates weighted random baselines and macro F1 scores to mitigate the bias caused by the imbalanced label distribution in the dataset. Both GPT-2 and DistilGPT2 are fine-tuned to measure the performance gap between models with different numbers of parameters. The experimental results indicate that fake news classification accuracy and the features are not always correlated as hypothesized. Nevertheless, having attended to the challenges and limitations imposed by the dataset, this study paves the way for future studies with similar research purposes. Future work is encouraged to extend the scope and include more linguistic features, to eventually achieve more effective fake news classification that leverages only the most relevant features.
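As a concrete reading of the three features, the sketch below computes them for one sentence. The thesis does not name its parser; spaCy is used here as one common choice, and details such as punctuation handling are our assumptions.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this English model is installed

def sentence_features(sentence):
    """Noun %, verb %, and mean dependency distance for one sentence."""
    tokens = [t for t in nlp(sentence) if not t.is_punct]
    nouns = sum(t.pos_ == "NOUN" for t in tokens) / len(tokens)
    verbs = sum(t.pos_ == "VERB" for t in tokens) / len(tokens)
    # Distance between each token and its syntactic head; the root
    # (its own head) is conventionally skipped.
    dists = [abs(t.i - t.head.i) for t in tokens if t.head is not t]
    mdd = sum(dists) / len(dists) if dists else 0.0
    return nouns, verbs, mdd
```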
16

A New Approach to Synthetic Image Evaluation

Memari, Majid 01 December 2023 (has links) (PDF)
This study is dedicated to enhancing the effectiveness of Optical Character Recognition (OCR) systems, with a special emphasis on Arabic handwritten digit recognition. The choice to focus on Arabic handwritten digits is twofold: first, relatively little research has been conducted in this area compared to its English counterpart; second, the recognition of Arabic handwritten digits presents more challenges due to the inherent similarities between different Arabic digits. OCR systems, engineered to decipher both printed and handwritten text, often face difficulties in accurately identifying low-quality or distorted handwritten text. The quality of the input image and the complexity of the text significantly influence their performance. However, data augmentation strategies can notably improve these systems' performance. These strategies generate new images that closely resemble the original ones, albeit with minor variations, thereby enriching the model's learning and enhancing its adaptability. The research found Conditional Variational Autoencoders (C-VAE) and Conditional Generative Adversarial Networks (C-GAN) to be particularly effective in this context, as these two generative models stand out for their superior image generation and feature extraction capabilities. A significant contribution of the study is the formulation of the Synthetic Image Evaluation Procedure, a systematic approach designed to evaluate and amplify the generative models' image generation abilities. This procedure facilitates the extraction of meaningful features, computes the Fréchet Inception Distance (FID) score, and supports hyper-parameter optimization and model modifications.
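For reference, the Fréchet Inception Distance compares Gaussians fitted to Inception features of real and synthetic images: $\mathrm{FID} = \lVert\mu_1-\mu_2\rVert^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$. A minimal NumPy/SciPy sketch of this standard score (not the thesis's full evaluation procedure):

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet Inception Distance between two fitted Gaussians."""
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real        # discard tiny numerical imaginary parts
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```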
17

Diffusion-Based Generation of SVG Images

Jbara, Hassan 06 February 2024 (has links)
Diffusion Models have achieved state-of-the-art results in image generation tasks, yet face different challenges when applied in different domains. We first give a brief overview of the Diffusion Model architecture. Then we present a new model and architecture, SVGFusion, that applies the principles of Diffusion Models to generate vector graphics. Vector graphics have a complex structure and are vastly different from pixel images; the main challenge when working with them is how to represent this complex structure in a way that a Diffusion Model can effectively process. We explain this and the further challenges we encountered, and how we successfully addressed some of them. We demonstrate the effectiveness of our approach by training a sample model on a moderately sized dataset and running informative experiments. Furthermore, we offer useful insights, recommendations, and code to researchers who wish to explore this topic further.
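As background for the overview mentioned above, the core of a DDPM is the closed-form forward corruption $q(x_t \mid x_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\,x_0, (1-\bar\alpha_t)I)$. A minimal PyTorch sketch; applying it to a latent representation of a vector graphic is our assumption about where SVGFusion-style models would plug in, not the thesis's stated design.

```python
import torch

def q_sample(x0, t, alphas_cumprod):
    """Sample x_t ~ q(x_t | x_0) for a batch of integer timesteps t."""
    abar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)                       # fresh Gaussian noise
    return abar.sqrt() * x0 + (1 - abar).sqrt() * eps, eps
```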
18

NoiseLearner: An Unsupervised, Content-agnostic Approach to Detect Deepfake Images

Vives, Cristian 21 March 2022 (has links)
Recent advancements in generative models have resulted in hyper-realistic synthetic images, or "deepfakes", at high resolutions, making them almost indistinguishable from real camera images. While exciting, this technology introduces room for abuse. Deepfakes have already been misused to produce pornography, political propaganda, and misinformation. The ability to produce fully synthetic content that can cause such misinformation demands robust deepfake detection frameworks. Most deepfake detection methods are trained in a supervised manner and fail to generalize to deepfakes produced by newer and superior generative models. More importantly, such detection methods are usually focused on detecting deepfakes with a specific type of content, e.g., face deepfakes. However, other types of deepfakes are starting to emerge, e.g., deepfakes of biomedical images, satellite imagery, people, and objects shown in different settings. Taking these challenges into account, we propose NoiseLearner, an unsupervised and content-agnostic deepfake image detection method. NoiseLearner aims to detect any deepfake image regardless of the generative model of origin or the content of the image. We perform a comprehensive evaluation by testing on multiple deepfake datasets composed of different generative models and different content groups, such as faces, satellite images, landscapes, and animals. Furthermore, we include recent state-of-the-art generative models in our evaluation, such as StyleGAN3 and denoising diffusion probabilistic models (DDPM). We observe that NoiseLearner performs well on multiple datasets, achieving 96% accuracy on both the StyleGAN and StyleGAN2 datasets. / Master of Science / Images synthesized by artificial intelligence, commonly known as deepfakes, are starting to become indistinguishable from real images. While these technological advances are exciting with regard to what a computer can do, it is important to understand that such technology is currently being used with ill intent. Identifying these images is thus becoming a growing necessity, especially as deepfake technology grows to perfectly mimic real images. Current deepfake detection approaches fail to detect deepfakes of other content, such as satellite imagery or X-rays, and cannot generalize to deepfakes synthesized by new artificial intelligence models. Taking these concerns into account, we propose NoiseLearner, a deepfake detection method that can detect any deepfake regardless of the content and the artificial intelligence model used to synthesize it. The key idea behind NoiseLearner is that it does not require any deepfakes to train. Instead, NoiseLearner learns the key features of real images and uses them to differentiate between deepfakes and real images, without ever looking at a single deepfake. Even with this strong constraint, NoiseLearner shows promise by detecting deepfakes of diverse contents and of the models used to generate them. We also explore different ways to improve NoiseLearner.
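One generic, content-agnostic signal that noise-based detectors of this kind inspect is the high-frequency residual left after denoising, where generative models often leave characteristic artifacts. The sketch below is a standard residual extractor for illustration only; the abstract does not state that NoiseLearner works this way.

```python
import numpy as np
from scipy import ndimage

def noise_residual(image):
    """High-frequency residual: the image minus a median-filtered copy."""
    denoised = ndimage.median_filter(image, size=3)
    return image.astype(np.float64) - denoised.astype(np.float64)
```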
19

Towards label-efficient deep learning for medical image analysis

Sun, Li 11 September 2024 (has links)
Deep learning methods have achieved state-of-the-art performance in various medical image analysis tasks. However, this success relies heavily on the expensive and time-consuming collection of large quantities of labeled data, which is not always available. This dissertation investigates the use of self-supervised and generative methods to enhance the label efficiency of deep learning models for 3D medical image analysis. Unlike natural images, medical images contain consistent, domain-specific anatomical contexts, which can be exploited as self-supervision signals to pre-train models. Furthermore, generative methods can be used to synthesize additional samples, thereby increasing sample diversity. In the first part of the dissertation, we introduce self-supervised learning frameworks that learn anatomy-aware and disease-related representations. To learn disease-related representations, we propose two domain-specific contrastive strategies that leverage anatomical similarity across patients to create hard negative samples that incentivize learning fine-grained pathological features. To learn anatomy-sensitive representations, we develop a novel 3D convolutional layer with kernels that are conditionally parameterized based on anatomical location. We perform extensive experiments on large-scale datasets of CT scans, which show that our method improves performance on many downstream tasks. In the second part of the dissertation, we introduce generative models capable of synthesizing high-resolution, anatomy-guided 3D medical images. Current generative models are typically limited to low-resolution outputs due to memory constraints, despite clinicians' need for high-resolution details in diagnosis. To overcome this, we present a hierarchical architecture that efficiently manages memory demands, enabling the generation of high-resolution images. In addition, diffusion-based generative models are becoming more prevalent in medical imaging, yet existing state-of-the-art methods often under-utilize the extensive information found in radiology reports and anatomical structures. To address these limitations, we propose a text-guided 3D image diffusion model that preserves anatomical details. We conduct experiments on downstream tasks and a blind evaluation by radiologists, which demonstrate the clinical value of our proposed methodologies.
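The contrastive strategies described above build on a standard objective such as InfoNCE; a minimal PyTorch sketch, where `negatives` would hold the anatomically similar patches from other patients used as hard negatives (our reading of the abstract, with the exact sampling left to the dissertation):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss. anchor, positive: (B, D); negatives: (B, K, D)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(-1, keepdim=True) / temperature      # (B, 1)
    neg = torch.einsum("bd,bkd->bk", anchor, negatives) / temperature  # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    # The positive similarity always sits at index 0.
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)
```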
20

Simulating to learn: using adaptive simulation to train, test and understand neural networks

Ruiz, Nataniel 10 September 2024 (has links)
Most machine learning models are trained and tested on fixed datasets collected in the real world. This longstanding approach has some prominent weaknesses: (1) collecting and annotating real data is expensive; (2) real data might not cover all of the important rare scenarios of interest; (3) it is impossible to finely control certain attributes of real data (e.g. lighting, pose, texture); and (4) testing on a distribution similar to the training data can give an incomplete picture of the capabilities and weaknesses of the model. In this thesis, we propose approaches for training and testing machine learning models using adaptive simulation. Specifically, given a parametric image/video simulator, the causal parameters of a scene can be adapted to generate different data distributions. We present five methods to train and test machine learning models by adapting the simulated data distribution: Learning to Simulate, One-at-a-Time Simulated Testing, Simulated Adversarial Testing, Simulated Adversarial Training, and Counterfactual Simulation Testing. We demonstrate these five approaches on vastly different real-world computer vision tasks, including semantic segmentation in traffic scenes, face recognition, body measurement estimation, and object recognition, and achieve state-of-the-art results in several different applications. We release three large public datasets for different domains. Our main discoveries are: (1) we can find biases of models by testing them on scenes where each causal parameter is varied independently; (2) our confidence in the performance of some models is inflated, since they fail when the data distribution is adversarially sampled; (3) we can bridge the simulation/real domain gap using counterfactual testing in order to compare neural networks with different architectures; and (4) we can improve machine learning model performance by adapting the simulated data distribution, either (a) by learning the generative parameters to directly maximize performance on a validation set or (b) by adversarial optimization of the generative parameters. Finally, we present DreamBooth, a first exploration toward controlling recently released diffusion models to achieve realistic simulation, which would improve the precision, performance, and impact of all the ideas presented in this thesis.
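Because the simulator-plus-training pipeline of discovery (4a) is typically non-differentiable, one common way to learn its parameters is a score-function (REINFORCE) estimator. A minimal NumPy sketch under that assumption; `sample_params` and `train_and_eval` are placeholders, and the thesis's actual estimator may differ in details.

```python
import numpy as np

def simulate_to_learn_step(theta, sample_params, train_and_eval,
                           lr=0.01, pop=8, sigma=0.1):
    """One REINFORCE-style update of simulator parameters theta to
    maximize downstream validation reward."""
    eps = np.random.randn(pop, theta.size) * sigma       # parameter perturbations
    rewards = np.array([train_and_eval(sample_params(theta + e)) for e in eps])
    advantage = rewards - rewards.mean()                 # baseline for variance reduction
    grad = (advantage[:, None] * eps).sum(0) / (pop * sigma ** 2)
    return theta + lr * grad                             # gradient ascent on reward
```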
