51

Privacy-Preserving Synthetic Medical Data Generation with Deep Learning

Torfi, Amirsina 26 August 2020
Deep learning models have demonstrated good performance in various domains such as Computer Vision and Natural Language Processing. However, the utilization of data-driven methods in healthcare raises privacy concerns, which creates limitations for collaborative research. A remedy to this problem is to generate and employ synthetic data to address privacy concerns. Existing methods for artificial data generation suffer from different limitations, such as being bound to particular use cases. Furthermore, their generalizability to real-world problems is controversial regarding the uncertainties in defining and measuring key realistic characteristics. Hence, there is a need to establish insightful metrics for measuring the validity of synthetic data, as well as quantitative criteria regarding privacy restrictions. We propose the use of Generative Adversarial Networks to help satisfy requirements for realistic characteristics and acceptable values of privacy metrics simultaneously. The present study makes several unique contributions to synthetic data generation in the healthcare domain. First, we propose a novel domain-agnostic metric to evaluate the quality of synthetic data. Second, by utilizing 1-D Convolutional Neural Networks, we devise a new approach to capturing the correlation between adjacent diagnosis records. Third, we employ Convolutional Autoencoders to create a robust and compact feature space that handles the mixture of discrete and continuous data. Finally, we devise a privacy-preserving framework that enforces Rényi differential privacy as a new notion of differential privacy. / Doctor of Philosophy / Computer programs have been widely used for clinical diagnosis but are often designed with assumptions limiting their scalability and interoperability. The recent proliferation of abundant health data, significant increases in computer processing power, and the superior performance of data-driven methods enable a trending paradigm shift in healthcare technology. This involves the adoption of artificial intelligence methods, such as deep learning, to improve healthcare knowledge and practice. Despite the success of deep learning in many different domains, in the healthcare field privacy challenges make collaborative research difficult, as working with data-driven methods may jeopardize patients' privacy. To overcome these challenges, researchers propose to generate and utilize realistic synthetic data that can be used instead of real private data. Existing methods for artificial data generation are limited by being bound to special use cases. Furthermore, their generalizability to real-world problems is questionable. There is a need to establish valid synthetic data that overcomes privacy restrictions and functions as a real-world analog for training healthcare deep learning models. We propose the use of Generative Adversarial Networks to simultaneously overcome the realism and privacy challenges associated with healthcare data.
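A rough sketch of the kind of gradient sanitization that enforcing differential privacy during model training typically involves: clip each per-example gradient, then add calibrated Gaussian noise, whose privacy cost composes tightly under Rényi differential privacy. This is an illustration under assumed parameter values, not the thesis's implementation:

```python
import numpy as np

def sanitize_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                       rng=np.random.default_rng(0)):
    """Clip per-example gradients and add Gaussian noise (DP-SGD style).

    The Gaussian mechanism's privacy cost is tracked tightly under Renyi
    differential privacy, which is why RDP accounting is typically paired
    with this kind of update. All parameter values here are illustrative.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # L2 clipping
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return mean_grad + noise

# Toy usage: 8 per-example gradients of a 4-parameter model.
grads = [np.random.default_rng(i).normal(size=4) for i in range(8)]
print(sanitize_gradients(grads))
```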
52

Latent Walking Techniques for Conditioning GAN-Generated Music

Eisenbeiser, Logan Ryan 21 September 2020
Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. Generating music is very difficult; components like long- and short-term structure introduce temporal complexity that can be difficult for neural networks to capture. Additionally, the acoustics of musical features like harmonies and chords, as well as timbre and instrumentation, require complex representations for a network to generate them accurately. Various techniques for both music representation and network architecture have been used in the past decade to address these challenges in music generation. The focus of this thesis extends beyond generating music to the challenge of controlling and/or conditioning that generation. Conditional generation involves an additional piece or pieces of information which are input to the generator and constrain aspects of the results. Conditioning can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques in conditional image generation, but its effectiveness on music-domain generation is largely unexplored. This thesis focuses on latent walking techniques for conditioning the music generation network MuseGAN and examines the impact of this conditioning on the generated music. / Master of Science / Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. Beyond simply generating music lies the challenge of controlling or conditioning that generation. Conditional generation can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques in conditional image generation, but its effectiveness on music-domain generation is largely unexplored, especially for generative adversarial networks (GANs). This thesis focuses on latent walking techniques for conditioning the music generation network MuseGAN and examines the impact and effectiveness of this conditioning on the generated music.
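For readers unfamiliar with the technique named above: a latent walk decodes evenly spaced points between two latent codes, so the output changes smoothly from one sample toward another. A minimal sketch with a toy stand-in generator (MuseGAN's actual interface is not assumed here):

```python
import numpy as np

def latent_walk(g, z_start, z_end, steps=8):
    """Decode evenly spaced points on the segment between two latent codes."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [g((1 - a) * z_start + a * z_end) for a in alphas]

# Toy generator: maps a latent vector to a "bar" of 16 note intensities.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))
g = lambda z: np.tanh(W @ z)

walk = latent_walk(g, rng.normal(size=32), rng.normal(size=32))
print(len(walk), walk[0].shape)  # 8 interpolated outputs of shape (16,)
```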
53

On Transferability of Adversarial Examples on Machine-Learning-Based Malware Classifiers

Hu, Yang 12 May 2022
The use of Machine Learning for malware detection is essential to counter the massive growth in malware types compared with the traditional signature-based detection system. However, machine learning models can also be extremely vulnerable and sensitive to transferable adversarial example (AE) attacks. The transfer AE attack does not require extra information from the victim model, such as gradient information. Researchers mainly explore two lines of transfer-based adversarial example attacks: ensemble models and ensemble samples. Although comprehensive innovations and progress have been achieved in transfer AE attacks, few works have investigated how these techniques perform on malware data. Besides, generating adversarial examples on an Android APK file is not as easy and convenient as it is on image data, since the generated AE of malware must also retain its functionality and executability after perturbation. Therefore, it is important to validate whether previous methodologies remain effective on malware, considering its differences from image data. In this thesis, we first conduct a thorough literature review of AE attacks on malware data and of general transfer AE attacks. Then we design our algorithm for the transfer AE attack. We formulate the optimization problem based on the intuition that the evenness of feature contributions towards the final prediction result is highly correlated with AE transferability. We then solve the optimization problem by gradient descent and evaluate it through extensive experiments. Implementing and experimenting with the state-of-the-art AE algorithms and transferability enhancement techniques, we analyze and summarize the weaknesses and strengths of each method. / Master of Science / Machine learning models have been widely applied to malware detection systems in recent years due to the massive growth in malware types. However, these models are vulnerable to adversarial attacks. Malicious attackers can add small, imperceptible perturbations to the original testing samples and mislead the classification results at a very low cost. Research on adversarial attacks helps us gain a better understanding of the attacker's side and inspires defenses against them. Among all adversarial attacks, the transfer-based adversarial example attack is one of the most devastating, since it does not require extra information from the targeted victim model, such as gradient information or queries to the model. Although many researchers have explored the transfer AE attack lately, few works focus on malware (e.g., Android) data. Compared with image data, perturbing malware is more complicated and challenging, since the generated adversarial examples of malware need to retain their functionality and executability. To validate how transfer AE attack methods perform on malware, we implement the state-of-the-art (SOTA) works in this thesis and experiment with them on real Android data. Besides, we develop a new transfer-based AE attack method based on the contribution of each feature for generating AE. We then perform comprehensive evaluations and draw comparisons between SOTA works and our proposed method.
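As an illustration of the functionality constraint described above: attacks on Android malware classifiers commonly restrict perturbations to adding features (e.g., unused permissions or API references), since removals may break the app. A generic gradient-guided sketch under that assumption — not the thesis's specific evenness-based algorithm:

```python
import numpy as np

def additive_feature_attack(x, grad_fn, max_adds=10):
    """Gradient-guided adversarial example on a binary malware feature vector.

    Only *adds* features, a common way to preserve functionality and
    executability. This is a generic sketch, not the thesis's algorithm.
    """
    x_adv = x.copy()
    for _ in range(max_adds):
        g = grad_fn(x_adv)        # gradient of the malware score w.r.t. features
        g[x_adv == 1] = np.inf    # only consider features we can still turn on
        j = int(np.argmin(g))     # feature whose addition lowers the score most
        if g[j] >= 0:
            break                 # no remaining addition reduces the score
        x_adv[j] = 1
    return x_adv

# Toy linear "classifier": score = w @ x, so the gradient is simply w.
w = np.array([0.5, -0.8, 0.3, -0.2, 0.1])
x = np.array([1, 0, 0, 0, 0])
print(additive_feature_attack(x, lambda x_adv: w.copy()))  # [1 1 0 1 0]
```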
54

Image-based Process Monitoring via Generative Adversarial Autoencoder with Applications to Rolling Defect Detection

January 2019
Image-based process monitoring has recently attracted increasing attention due to advances in sensing technologies. However, existing process monitoring methods fail to fully utilize the spatial information of images due to their complex characteristics, including high dimensionality and complex spatial structures. Recent advances in unsupervised deep models such as the generative adversarial network (GAN) and the generative adversarial autoencoder (AAE) have made it possible to learn complex spatial structures automatically. Inspired by this advancement, we propose an AAE-based framework for unsupervised anomaly detection in images. The AAE combines the power of the GAN with the variational autoencoder, serving as a nonlinear dimension reduction technique with regularization from the discriminator. Based on this, we propose a monitoring statistic that efficiently captures changes in the image data. The performance of the proposed AAE-based anomaly detection algorithm is validated through a simulation study and a real case study on rolling defect detection. / Dissertation/Thesis / Master's Thesis Industrial Engineering 2019
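A minimal sketch of the general idea behind an autoencoder-based monitoring statistic: score each image by its reconstruction error and flag values above an empirical control limit. The toy linear encoder/decoder and the 99% limit are illustrative assumptions, not the thesis's AAE statistic:

```python
import numpy as np

def monitoring_statistic(x, encode, decode):
    """Per-image squared reconstruction error; large values flag anomalies."""
    return float(np.sum((x - decode(encode(x))) ** 2))

# Toy linear autoencoder and a control limit from in-control images.
rng = np.random.default_rng(0)
E = rng.normal(size=(8, 64)) / 8.0    # encoder: 64-pixel image -> 8-dim code
D = E.T                               # decoder (transpose, for illustration)
encode, decode = (lambda x: E @ x), (lambda z: D @ z)

in_control = [rng.normal(size=64) for _ in range(200)]
stats = [monitoring_statistic(x, encode, decode) for x in in_control]
limit = np.quantile(stats, 0.99)      # empirical 99% control limit

# A mean-shifted image should typically exceed the limit.
print(monitoring_statistic(rng.normal(size=64) + 3.0, encode, decode) > limit)
```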
55

Leveraging Synthetic Images with Domain-Adversarial Neural Networks for Fine-Grained Car Model Classification

Smith, Dayyan January 2021
Supervised learning methods require vast amounts of annotated images to successfully train an image classifier, and acquiring the necessary annotated images is costly. The increased availability of photorealistic, automatically annotated computer-generated images raises the question of under which conditions it is possible to leverage this synthetic data during training. We investigate the conditions that make it possible to leverage computer-generated renders of car models for fine-grained car model classification.
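The domain-adversarial neural networks named in the title rely on a gradient reversal layer: identity in the forward pass, negated and scaled gradient in the backward pass, so the feature extractor is pushed toward features a domain classifier cannot separate (synthetic vs. real). A minimal PyTorch sketch of that layer (an illustration, not the thesis's code):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scaled, negated gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient makes the feature extractor *maximize* the
        # domain classifier's loss, encouraging domain-invariant features.
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# Toy check: the gradient through the layer comes back negated.
x = torch.ones(3, requires_grad=True)
grad_reverse(x, lamb=1.0).sum().backward()
print(x.grad)  # tensor([-1., -1., -1.])
```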
56

  • Scenario Generation for Stress Testing Using Generative Adversarial Networks: Deep Learning Approach to Generate Extreme but Plausible Scenarios

Gustafsson, Jonas, Jonsson, Conrad January 2023
Central Clearing Counterparties play a crucial role in financial markets, requiring robust risk management practices to ensure operational stability. A growing emphasis on risk analysis and stress testing from regulators has led to the need for sophisticated tools that can model extreme but plausible market scenarios. This thesis presents a method leveraging Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP) to construct an independent scenario generator capable of modeling and generating return distributions for financial markets. The developed method utilizes two primary components: the WGAN-GP model and a novel scenario selection strategy. The WGAN-GP model approximates the multivariate return distribution of stocks, generating plausible return scenarios. The scenario selection strategy employs lower and upper bounds on the Euclidean distance calculated from the return vector to identify and select extreme scenarios suitable for stress testing clearing members' portfolios. This approach enables the extraction of extreme yet plausible returns. The method was evaluated using 25 years of historical stock return data from the S&P 500. Results demonstrate that the WGAN-GP model effectively approximates the multivariate return distribution of several stocks, facilitating the generation of new plausible returns. However, the model requires extensive training to fully capture the tails of the distribution. The Euclidean distance-based scenario selection strategy shows promise in identifying extreme scenarios, with the generated scenarios demonstrating comparable portfolio impact to historical scenarios. These results suggest that the proposed method offers valuable tools for Central Clearing Counterparties to enhance their risk management.
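A minimal sketch of the norm-based selection step described above: keep generated return vectors whose Euclidean norm lies between a lower and an upper bound. The bounds and the stand-in scenarios below are illustrative assumptions:

```python
import numpy as np

def select_extreme_scenarios(returns, lower, upper):
    """Keep generated return vectors whose Euclidean norm falls in [lower, upper].

    The norm of the cross-sectional return vector serves as a crude severity
    measure: large enough to stress portfolios, bounded above to stay plausible.
    The bounds here are illustrative; the thesis derives them from the data.
    """
    norms = np.linalg.norm(returns, axis=1)
    return returns[(norms >= lower) & (norms <= upper)]

# Toy usage: 10,000 generated scenarios for 50 stocks.
rng = np.random.default_rng(0)
scenarios = rng.normal(0.0, 0.02, size=(10_000, 50))  # stand-in for WGAN-GP output
lower, upper = np.quantile(np.linalg.norm(scenarios, axis=1), [0.99, 0.9999])
print(select_extreme_scenarios(scenarios, lower, upper).shape)
```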
57

A Graybox Defense Through Bootstrapping Deep Neural Network

Kirsen L Sullivan (14105763) 11 November 2022
Building a robust deep neural network (DNN) framework turns out to be a very difficult task, as adaptive attacks are developed that break each robust DNN strategy. In this work we first study the bootstrap distribution of DNN weights and biases. We bootstrap three DNN models: a simple three-layer convolutional neural network (CNN), VGG16 with 13 convolutional layers and 3 fully connected layers, and Inception v3 with 42 layers. Both VGG16 and Inception v3 are trained on CIFAR10 in order for the bootstrapped networks to converge. We then compare the bootstrap NN parameter distributions with those from training the DNN with different random initial seeds. We discover that the bootstrap DNN parameter distributions change as the DNN model size increases, and that they are very close to those obtained from training with different random initial seeds. The bootstrap DNN parameter distributions are used to create a graybox defense strategy: we randomize a certain percentage of the weights of the first convolutional layers of a DNN model to create a random ensemble of DNNs. Based on one trained DNN, we have infinitely many random DNN ensembles, so adaptive attacks lose their target. A random DNN ensemble is resilient to adversarial attacks and maintains performance on clean data.
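A minimal sketch of the randomization step described above: perturb a fraction of the first convolutional layer's weights to draw one member of the random ensemble. The fraction, noise scale, and tiny model are illustrative assumptions:

```python
import copy
import torch
import torch.nn as nn

def randomized_copy(model, layer_name="conv1", frac=0.1, scale=0.05, seed=0):
    """Return a copy of `model` with a random fraction of one layer's weights jittered.

    Drawing a fresh copy per inference yields an effectively infinite random
    ensemble from a single trained network. Parameter values are illustrative.
    """
    g = torch.Generator().manual_seed(seed)
    clone = copy.deepcopy(model)
    w = dict(clone.named_parameters())[layer_name + ".weight"]
    with torch.no_grad():
        mask = torch.rand(w.shape, generator=g) < frac          # weights to touch
        w += mask * torch.randn(w.shape, generator=g) * scale   # small Gaussian jitter
    return clone

# Toy usage with a tiny CNN.
model = nn.Sequential()
model.add_module("conv1", nn.Conv2d(3, 8, 3))
model.add_module("relu", nn.ReLU())
member = randomized_copy(model, layer_name="conv1")
print(torch.sum(member.conv1.weight != model.conv1.weight).item(), "weights changed")
```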
58

  • Robust Neural Receiver in Wireless Communication: Defense against Adversarial Attacks

Nicklasson Cedbro, Alice January 2023
In the field of wireless communication systems, interest in machine learning has increased in recent years. Adversarial machine learning covers attack and defense methods for machine learning components. It is a topic that has been thoroughly studied in computer vision and natural language processing, but not to the same extent in wireless communication. In this thesis, a Fast Gradient Sign Method (FGSM) attack on a neural receiver is studied. Furthermore, the thesis investigates whether it is possible to make a neural receiver robust against these attacks. The study is made using the Python library Sionna, a library used for research on, for example, 5G, 6G, and machine learning in wireless communication. The effect of an FGSM attack is evaluated and mitigated with different adversarial training models. The training data of the models is either augmented with adversarial samples, or original samples are replaced with adversarial ones. Furthermore, the power distribution and range of the adversarial samples included in the training are varied. The thesis concludes that an FGSM attack decreases the performance of a neural receiver and needs less power than a barrage jamming attack to achieve the same performance loss. A neural receiver can be made more robust against an FGSM attack when the training data of the model is augmented with adversarial samples concentrated on a specific attack power range, with the power of the adversarial samples normally distributed. A neural receiver is also shown to be more robust against a barrage jamming attack than conventional methods without defenses.
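For reference, FGSM perturbs the input by one step of size epsilon in the direction of the sign of the loss gradient. A textbook PyTorch sketch with a toy stand-in model (Sionna's neural receiver API is not assumed here):

```python
import torch

def fgsm(model, x, y, loss_fn, eps=0.01):
    """Fast Gradient Sign Method: one signed-gradient step of size eps.

    A textbook sketch; eps and the loss are illustrative, and for a neural
    receiver x would be the received baseband samples rather than an image.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()       # gradient of the loss w.r.t. the input
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy usage with a linear "receiver" classifying 2 symbols from 16 samples.
torch.manual_seed(0)
model = torch.nn.Linear(16, 2)
x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))
x_adv = fgsm(model, x, y, torch.nn.functional.cross_entropy)
print((x_adv - x).abs().max().item())  # perturbation bounded by eps
```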
59

Benevolent and Malevolent Adversaries: A Study of GANs and Face Verification Systems

Nazari, Ehsan 22 November 2023
Cybersecurity is rapidly evolving, necessitating inventive solutions for emerging challenges. Deep Learning (DL), having demonstrated remarkable capabilities across various domains, has found a significant role within Cybersecurity. This thesis focuses on benevolent and malevolent adversaries. For the benevolent adversaries, we analyze specific applications of DL in Cybersecurity that contribute to enhancing DL for downstream tasks. Regarding the malevolent adversaries, we explore the question of how resistant DL is to (Cyber) attacks and show vulnerabilities of specific DL-based systems. We begin with the benevolent adversaries by studying the use of a generative model called Generative Adversarial Networks (GAN) to improve the abilities of DL. In particular, we look at the use of Conditional Generative Adversarial Networks (CGAN) to generate synthetic data and address issues with imbalanced datasets in cybersecurity applications. Imbalanced classes can be a significant issue in this field and can lead to serious problems. We find that CGANs can effectively address this issue, especially in more difficult scenarios. Then, we turn our attention to using CGANs with tabular cybersecurity problems. However, visually assessing the results of a CGAN is not possible when dealing with tabular cybersecurity data. To address this issue, we introduce AutoGAN, a method that can train a GAN on both image-based and tabular data, reducing the need for human inspection during GAN training. This opens up new opportunities for using GANs with tabular datasets, including those in cybersecurity that are not image-based. Our experiments show that AutoGAN can achieve comparable or even better results than other methods. Finally, we shift our focus to the malevolent adversaries by looking at the robustness of DL models in the context of automatic face recognition. We know from previous research that DL models can be tricked into making incorrect classifications by adding small, almost unnoticeable changes to an image; these deceptive manipulations are known as adversarial attacks. We aim to expose new vulnerabilities in DL-based Face Verification (FV) systems. We introduce a novel attack method on FV systems, called the DodgePersonation Attack, and a system for categorizing these attacks based on their specific targets. We also propose a new algorithm that significantly improves upon a previous method for making such attacks, increasing the success rate by more than 13%.
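As an illustration of the oversampling idea described above: a CGAN generator receives the class label alongside the noise vector, so minority-class samples can be requested on demand. A minimal PyTorch sketch with an illustrative architecture (not the thesis's model):

```python
import torch
import torch.nn as nn

class CGANGenerator(nn.Module):
    """Conditional generator: noise concatenated with a one-hot class label.

    A minimal sketch of how a CGAN oversamples a minority class; sizes and
    architecture are illustrative assumptions.
    """

    def __init__(self, z_dim=32, n_classes=2, out_dim=20):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 64), nn.ReLU(), nn.Linear(64, out_dim)
        )

    def forward(self, z, labels):
        onehot = nn.functional.one_hot(labels, self.n_classes).float()
        return self.net(torch.cat([z, onehot], dim=1))

# Ask the (untrained) generator for 100 samples of the rare class (label 1).
g = CGANGenerator()
z = torch.randn(100, 32)
synthetic = g(z, torch.ones(100, dtype=torch.long))
print(synthetic.shape)  # torch.Size([100, 20])
```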
60

Improving the Robustness of Deep Neural Networks against Adversarial Examples via Adversarial Training with Maximal Coding Rate Reduction

Chu, Hsiang-Yu January 2022
Deep learning is one of the hottest scientific topics at the moment. Deep convolutional networks can solve various complex tasks in the field of image processing. However, adversarial attacks have been shown to be capable of fooling deep learning models. An adversarial attack is accomplished by applying specially designed perturbations to the input image of a deep learning model. The perturbations are almost visually indistinguishable to human eyes, but can fool classifiers into making wrong predictions. In this thesis, adversarial attacks and methods to improve deep learning models' robustness against adversarial samples are studied. Five different adversarial attack algorithms were implemented. These algorithms included white-box and black-box attacks, targeted and non-targeted attacks, and image-specific and universal attacks. The adversarial attacks generated adversarial examples that resulted in a significant drop in classification accuracy. Adversarial training is one commonly used strategy to improve the robustness of deep learning models against adversarial examples. It is shown that adversarial training can provide an additional regularization benefit beyond that provided by using dropout. Adversarial training is performed by incorporating adversarial examples into the training process; traditionally, cross-entropy loss is used as the loss function during this process. In order to improve the robustness of deep learning models against adversarial examples, in this thesis we propose two new methods of adversarial training by applying the principle of Maximal Coding Rate Reduction. The Maximal Coding Rate Reduction loss function maximizes the coding rate difference between the whole data set and the sum over each individual class. We evaluated the performance of different adversarial training methods by comparing clean accuracy, adversarial accuracy, and local Lipschitzness. It was shown that adversarial training with the Maximal Coding Rate Reduction loss function yields a more robust network than the traditional adversarial training method.
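For reference, the coding-rate objective named here is usually written as follows in the maximal coding rate reduction literature (a sketch of the standard form, with Z the matrix of n learned d-dimensional features, Pi_j the diagonal membership matrix of class j among k classes, and epsilon a distortion tolerance; training maximizes Delta R instead of minimizing cross-entropy):

```latex
\Delta R(Z, \Pi, \epsilon)
  = \underbrace{\tfrac{1}{2} \log\det\!\Big( I + \tfrac{d}{n\epsilon^2}\, Z Z^{\top} \Big)}_{R(Z,\,\epsilon)}
  \;-\; \underbrace{\sum_{j=1}^{k} \tfrac{\operatorname{tr}(\Pi_j)}{2n}
      \log\det\!\Big( I + \tfrac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^2}\, Z \Pi_j Z^{\top} \Big)}_{R_c(Z,\,\epsilon \mid \Pi)}
```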
