61 |
Robust Neural Receiver in Wireless Communication: Defense against Adversarial Attacks. Nicklasson Cedbro, Alice. January 2023 (has links)
In the field of wireless communication systems, the interest in machine learning has increased in recent years. Adversarial machine learning includes attack and defense methods on machine learning components. It is a topic that has been thoroughly studied in computer vision and natural language processing, but not to the same extent in wireless communication. In this thesis, a Fast Gradient Sign Method (FGSM) attack on a neural receiver is studied. Furthermore, the thesis investigates whether it is possible to make a neural receiver robust against these attacks. The study is made using the Python library Sionna, a library used for research on, for example, 5G, 6G, and machine learning in wireless communication. The effect of an FGSM attack is evaluated and mitigated with different models within adversarial training. The training data of the models is either augmented with adversarial samples, or original samples are replaced with adversarial ones. Furthermore, the power distribution and range of the adversarial samples included in the training are varied. The thesis concludes that an FGSM attack decreases the performance of a neural receiver and needs less power than a barrage jamming attack to achieve the same performance loss. A neural receiver can be made more robust against an FGSM attack when the training data of the model is augmented with adversarial samples concentrated on a specific attack power range, with the power of the adversarial samples normally distributed. A neural receiver is also shown to be more robust against a barrage jamming attack than conventional methods without defenses.
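As a concrete illustration of the attack studied here: FGSM perturbs the input in the direction of the sign of the loss gradient. The sketch below is a generic PyTorch implementation of that idea, not the thesis's Sionna-based receiver setup; `model`, `loss_fn`, and `epsilon` are placeholders.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    """Generic one-step FGSM (sketch): x_adv = x + eps * sign(grad_x loss).

    The thesis applies this idea to received radio signals in Sionna;
    here the model, loss, and epsilon are generic placeholders.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss the fastest.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```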
|
62 |
Benevolent and Malevolent Adversaries: A Study of GANs and Face Verification Systems. Nazari, Ehsan. 22 November 2023 (has links)
Cybersecurity is rapidly evolving, necessitating inventive solutions for emerging challenges. Deep Learning (DL), having demonstrated remarkable capabilities across various domains, has found a significant role within Cybersecurity. This thesis focuses on benevolent and malevolent adversaries. For the benevolent adversaries, we analyze specific applications of DL in Cybersecurity, contributing to the enhancement of DL for downstream tasks. Regarding the malevolent adversaries, we explore how resistant DL is to (cyber) attacks and show the vulnerabilities of specific DL-based systems.
We begin by focusing on the benevolent adversaries by studying the use of a generative model called Generative Adversarial Networks (GAN) to improve the abilities of DL. In particular, we look at the use of Conditional Generative Adversarial Networks (CGAN) to generate synthetic data and address issues with imbalanced datasets in cybersecurity applications. Imbalanced classes are a significant issue in this field and can lead to serious problems. We find that CGANs can effectively address this issue, especially in more difficult scenarios. Then, we turn our attention to using CGANs on tabular cybersecurity problems. However, visually assessing the results of a CGAN is not possible when dealing with tabular cybersecurity data. To address this issue, we introduce AutoGAN, a method that can train a GAN on both image-based and tabular data, reducing the need for human inspection during GAN training. This opens up new opportunities for using GANs with tabular datasets, including those in cybersecurity that are not image-based. Our experiments show that AutoGAN can achieve comparable or even better results than other methods.
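As a sketch of the conditioning mechanism CGANs rely on, the generator below receives the class label alongside the noise vector, so minority-class samples can be generated on demand; the layer sizes and embedding dimension are illustrative, not the architectures used in the thesis.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal CGAN generator (sketch): the label conditions generation.

    Sizes are illustrative placeholders; for tabular cybersecurity data
    the output would be one row of features rather than an image.
    """
    def __init__(self, noise_dim=100, n_classes=2, out_dim=32):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_classes, 128),
            nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, z, labels):
        # Concatenating the label embedding with the noise lets the
        # caller sample the minority class directly, which is how CGANs
        # address imbalanced datasets.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))
```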
Finally, we shift our focus to the malevolent adversaries by looking at the robustness of DL models in the context of automatic face recognition. We know from previous research that DL models can be tricked into making incorrect classifications by adding small, almost unnoticeable changes to an image. These deceptive manipulations are known as adversarial attacks. We aim to expose new vulnerabilities in DL-based Face Verification (FV) systems. We introduce a novel attack method on FV systems, called the DodgePersonation Attack, and a system for categorizing these attacks based on their specific targets. We also propose a new algorithm that significantly improves upon a previous method for making such attacks, increasing the success rate by more than 13%.
|
63 |
Improving the Robustness of Deep Neural Networks against Adversarial Examples via Adversarial Training with Maximal Coding Rate Reduction / Förbättra Robustheten hos Djupa Neurala Nätverk mot Exempel på en Motpart genom Utbildning för motståndare med Maximal Minskning av Kodningshastigheten. Chu, Hsiang-Yu. January 2022 (has links)
Deep learning is one of the hottest scientific topics at the moment. Deep convolutional networks can solve various complex tasks in the field of image processing. However, adversarial attacks have been shown to have the ability to fool deep learning models. An adversarial attack is accomplished by applying specially designed perturbations to the input image of a deep learning model. The noise is almost visually indistinguishable to human eyes, but can fool classifiers into making wrong predictions. In this thesis, adversarial attacks and methods to improve deep learning models' robustness against adversarial samples were studied. Five different adversarial attack algorithms were implemented. These attack algorithms included white-box and black-box attacks, targeted and non-targeted attacks, and image-specific and universal attacks. The adversarial attacks generated adversarial examples that resulted in a significant drop in classification accuracy. Adversarial training is one commonly used strategy to improve the robustness of deep learning models against adversarial examples. It has been shown that adversarial training can provide an additional regularization benefit beyond that provided by using dropout. Adversarial training is performed by incorporating adversarial examples into the training process. Traditionally, cross-entropy loss is used as the loss function during this process. In order to improve the robustness of deep learning models against adversarial examples, in this thesis we propose two new methods of adversarial training by applying the principle of Maximal Coding Rate Reduction. The Maximal Coding Rate Reduction loss function maximizes the coding rate difference between the whole data set and the sum over the individual classes. We evaluated the performance of different adversarial training methods by comparing the clean accuracy, adversarial accuracy, and local Lipschitzness. It was shown that adversarial training with the Maximal Coding Rate Reduction loss function yields a more robust network than the traditional adversarial training method.
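For reference, the Maximal Coding Rate Reduction objective follows Yu et al.'s MCR² formulation, which the thesis adopts: maximize the coding rate of the whole feature set minus the size-weighted sum of per-class rates. The sketch below implements that formula; the feature-matrix layout (d features by n samples) and `eps` follow the usual conventions, not values from the thesis.

```python
import torch

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d/(n * eps^2) * Z @ Z.T) for Z of shape (d, n)."""
    d, n = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z @ Z.T)

def mcr2_loss(Z, labels, eps=0.5):
    """Negative coding rate reduction (sketch): minimizing this maximizes
    the rate of the whole feature set minus the per-class rates."""
    n = Z.shape[1]
    rate_whole = coding_rate(Z, eps)
    rate_classes = 0.0
    for c in labels.unique():
        Zc = Z[:, labels == c]
        # Weighting by class size reproduces the standard MCR^2 term.
        rate_classes = rate_classes + (Zc.shape[1] / n) * coding_rate(Zc, eps)
    return -(rate_whole - rate_classes)
```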
|
64 |
Advancing adversarial robustness with feature desensitization and synthesized data. Bayat, Reza. 07 1900 (has links)
This thesis addresses the critical issue of adversarial vulnerability in deep learning models, which are susceptible to slight, human-imperceptible perturbations that can lead to incorrect predictions. Adversarial attacks pose significant threats to the deployment of these models in safety-critical systems. To mitigate these threats, adversarial training has emerged as a prominent approach, where models are trained on adversarial examples to enhance their robustness.
In Chapter 1, we provide a comprehensive background on adversarial vulnerability, detailing the creation of adversarial examples and their real-world implications. We illustrate how adversarial examples are crafted and present various scenarios demonstrating their potential catastrophic outcomes. Furthermore, we explore the challenges associated with adversarial training, focusing on issues like the lack of robustness against a broad range of attack strengths and a trade-off between robustness and generalization, which are the subjects of our study.
Chapter 2 introduces Adversarial Feature Desensitization (AFD), a novel method that leverages domain adaptation techniques to enhance adversarial robustness. AFD aims to learn features that are invariant to adversarial perturbations, thereby improving resilience across various attack types and strengths. This approach involves training a domain discriminator alongside the classifier to reduce the divergence between natural and adversarial data representations. By aligning the features from both domains, AFD ensures that the learned features are both predictive and robust, mitigating overfitting to specific attack patterns and promoting broader defensive capability.
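A sketch of the AFD idea under stated assumptions: a domain discriminator tries to tell natural from adversarial feature vectors, and a gradient-reversal layer makes the encoder learn features that fool it. The modules and the weight `lam` are placeholders; the thesis's exact architecture is not reproduced.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def afd_loss(encoder, classifier, discriminator, x, x_adv, y, lam=1.0):
    """AFD-style loss (sketch): classify both natural and adversarial
    inputs correctly while making their features indistinguishable."""
    z_nat, z_adv = encoder(x), encoder(x_adv)
    cls_loss = F.cross_entropy(classifier(z_nat), y) \
             + F.cross_entropy(classifier(z_adv), y)
    # The discriminator learns to separate the two domains; through the
    # reversed gradient the encoder learns to align them instead.
    z_all = GradReverse.apply(torch.cat([z_nat, z_adv]), lam)
    d_labels = torch.cat([torch.zeros(len(x), dtype=torch.long),
                          torch.ones(len(x_adv), dtype=torch.long)])
    return cls_loss + F.cross_entropy(discriminator(z_all), d_labels)
```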
Chapter 3 presents Adversarial Training with Synthesized Data, a method aimed at bridging the gap between robustness and generalization in neural networks. By leveraging synthesized data generated through advanced techniques, this chapter explores how incorporating such data can mitigate robust overfitting and enhance the overall performance of adversarially trained models. The findings indicate that while adversarial training traditionally faces a trade-off between robustness and generalization, the use of synthesized data helps maintain high accuracy on corrupted and out-of-distribution data without compromising robustness. This approach provides a promising pathway to develop neural networks that are both resilient to adversarial attacks and capable of generalizing well to a wide range of scenarios.
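A minimal sketch of the data-mixing step such an approach implies: each training batch draws a fraction of samples from the synthesized pool and the rest from the original data. The datasets-as-tensor-pairs layout and the `synth_frac` knob are illustrative assumptions, not the thesis's settings.

```python
import torch

def mixed_batch(real_xy, synth_xy, batch_size, synth_frac=0.5):
    """Draw one batch mixing original and synthesized samples (sketch).
    real_xy / synth_xy are assumed to be (inputs, labels) tensor pairs."""
    n_synth = int(batch_size * synth_frac)
    idx_r = torch.randint(len(real_xy[0]), (batch_size - n_synth,))
    idx_s = torch.randint(len(synth_xy[0]), (n_synth,))
    x = torch.cat([real_xy[0][idx_r], synth_xy[0][idx_s]])
    y = torch.cat([real_xy[1][idx_r], synth_xy[1][idx_s]])
    return x, y
```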
Chapter 4 concludes the thesis by summarizing its key findings and contributions. Additionally, it outlines several avenues for future research to further enhance the security and reliability of deep learning models. Future research could explore the effect of synthesized data on a broader range of generalization tasks, develop alternative approaches to adversarial training that are less computationally expensive, and adapt new feedback-guided techniques for synthesizing data to enhance sample efficiency. By pursuing these directions, future research can build on the foundations laid by this thesis and continue to advance the field of adversarial robustness, ultimately leading to safer and more reliable machine learning systems.
Through these contributions, this thesis advances the understanding of adversarial robustness and proposes practical solutions to enhance the security and reliability of machine learning systems. By addressing the limitations of current adversarial training methods and introducing innovative approaches like AFD and the incorporation of synthesized data, this research paves the way for more robust and generalizable machine learning models capable of withstanding a diverse array of adversarial attacks.
|
65 |
Prediction games: machine learning in the presence of an adversary. Brückner, Michael. January 2012 (has links)
In many applications one is faced with the problem of inferring some functional relation between input and output variables from given data. Consider, for instance, the task of email spam filtering, where one seeks to find a model which automatically assigns new, previously unseen emails to the class spam or non-spam. Building such a predictive model based on observed training inputs (e.g., emails) with corresponding outputs (e.g., spam labels) is a major goal of machine learning.
Many learning methods assume that these training data are governed by the same distribution as the test data which the predictive model will be exposed to at application time. That assumption is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for instance, in the above example of email spam filtering. Here, email service providers employ spam filters, and spam senders engineer campaign templates so as to achieve a high rate of successful deliveries despite any filters.
Most of the existing work casts such situations as learning robust models which are insensitive to small changes of the data generation process. The models are constructed under the worst-case assumption that these changes are performed so as to produce the highest possible adverse effect on the performance of the predictive model. However, this approach is not capable of realistically modeling the true dependency between the model-building process and the process of generating future data. We therefore establish the concept of prediction games: We model the interaction between a learner, who builds the predictive model, and a data generator, who controls the process of data generation, as a one-shot game. The game-theoretic framework enables us to explicitly model the players' interests, their possible actions, their level of knowledge about each other, and the order in which they decide on an action.
We model each player's interest as minimizing their own cost function, both of which depend on both players' actions. The learner's action is to choose the model parameters, and the data generator's action is to perturb the training data, which reflects the modification of the data generation process with respect to the past data.
We extensively study three instances of prediction games which differ regarding the order in which the players decide on their actions. We first assume that both players choose their actions simultaneously, that is, without knowledge of their opponent's decision. We identify conditions under which this Nash prediction game has a meaningful solution, that is, a unique Nash equilibrium, and derive algorithms that find the equilibrium prediction model. As a second case, we consider a data generator who is potentially fully informed about the move of the learner. This setting establishes a Stackelberg competition. We derive a relaxed optimization criterion to determine the solution of this game and show that this Stackelberg prediction game generalizes existing prediction models. Finally, we study the setting where the learner observes the data generator's action, that is, the (unlabeled) test data, before building the predictive model. As the test data and the training data may be governed by differing probability distributions, this scenario reduces to learning under covariate shift. We derive a new integrated as well as a two-stage method to account for this data set shift.
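To make the simultaneous-move case concrete, a hedged formalization consistent with the abstract (the symbols below are chosen for illustration, not taken from the thesis):

```latex
% One-shot prediction game (illustrative notation).
% Learner action: model parameters w; generator action: data change g.
% Each player minimizes its own cost, which depends on both actions:
\begin{align*}
  w^\ast &\in \operatorname*{arg\,min}_{w} \theta_{L}(w, g^\ast), &
  g^\ast &\in \operatorname*{arg\,min}_{g} \theta_{G}(w^\ast, g).
\end{align*}
% A pair (w^*, g^*) satisfying both conditions simultaneously is a Nash
% equilibrium; in the Stackelberg variant the generator instead
% best-responds to a w that the learner has already committed to.
```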
In case studies on email spam filtering we empirically explore properties of all derived models as well as several existing baseline methods. We show that spam filters resulting from the Nash prediction game as well as the Stackelberg prediction game outperform the existing baseline methods in the majority of cases.
|
66 |
Incremental Learning of Deep Convolutional Neural Networks for Tumour Classification in Pathology Images. Johansson, Philip. January 2019 (has links)
Understaffing of medical doctors is becoming a pressing problem in many healthcare systems. This problem can be alleviated by utilising Computer-Aided Diagnosis (CAD) systems to substitute for doctors in different tasks, for instance, histopathological image classification. The recent surge of deep learning has allowed CAD systems to perform this task at a very competitive level. However, a major challenge with this task is the need to periodically update the models with new data and/or new classes or diseases. These periodic updates will result in catastrophic forgetting, as Convolutional Neural Networks typically require the entire data set beforehand and tend to lose knowledge about old data when trained on new data. Incremental learning methods were proposed to alleviate this problem in deep learning. In this thesis, two incremental learning methods, Learning without Forgetting (LwF) and a generative rehearsal-based method, are investigated. They are evaluated on two criteria: the capability of incrementally adding new classes to a pre-trained model, and the ability to update the current model with a new, unbalanced data set. Experiments show that LwF does not retain knowledge properly in either case. Further experiments are needed to draw any definite conclusions, for instance using another training approach for the classes and trying different combinations of losses. On the other hand, the generative rehearsal-based method tends to work for one class, showing good potential if better-quality images were generated. Additional experiments are also required to investigate new architectures and approaches for more stable training.
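A sketch of the Learning without Forgetting objective evaluated here: cross-entropy on the new classes plus a distillation term that keeps the updated model's old-class outputs close to those recorded from the frozen old model. The temperature `T` and weight `lam` are illustrative hyperparameters, not the thesis's settings.

```python
import torch.nn.functional as F

def lwf_loss(old_head_logits, recorded_old_logits, new_head_logits, y_new,
             T=2.0, lam=1.0):
    """Learning-without-Forgetting loss (sketch).
    recorded_old_logits: outputs of the frozen pre-update model on the
    old classes; old/new_head_logits come from the updated model."""
    # Standard cross-entropy on the newly added classes.
    ce = F.cross_entropy(new_head_logits, y_new)
    # Distillation on the old classes preserves previous knowledge.
    kd = F.kl_div(F.log_softmax(old_head_logits / T, dim=1),
                  F.softmax(recorded_old_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return ce + lam * kd
```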
|
67 |
Generative adversarial networks for single image super resolution in microscopy images. Gawande, Saurabh. January 2018 (has links)
Image super-resolution is a widely studied problem in computer vision, where the objective is to convert a low-resolution image into a high-resolution image. Conventional methods for achieving super-resolution, such as image priors, interpolation, and sparse coding, require a lot of pre-/post-processing and optimization. Recently, deep learning methods such as convolutional neural networks and generative adversarial networks have been used to perform super-resolution with results competitive with the state of the art, but none of them has been applied to microscopy images. In this thesis, a generative adversarial network, mSRGAN, is proposed for super-resolution with a perceptual loss function consisting of an adversarial loss, a mean squared error loss, and a content loss. The objective of our implementation is to learn an end-to-end mapping between the low- and high-resolution images and optimize the upscaled image for quantitative metrics as well as perceptual quality. We then compare our results with the current state-of-the-art methods in super-resolution, conduct a proof-of-concept segmentation study to show that super-resolved images can be used as an effective preprocessing step before segmentation, and validate the findings statistically.
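A sketch of the three-part generator loss described above, under stated assumptions: the content term compares features from a frozen pretrained VGG (the usual choice in SRGAN-style work, assuming 3-channel inputs), and the weights are illustrative, not the thesis's values.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen pretrained feature extractor for the content loss.
_vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
for p in _vgg.parameters():
    p.requires_grad = False

def perceptual_loss(sr, hr, disc_out, w_mse=1.0, w_content=6e-3, w_adv=1e-3):
    """mSRGAN-style generator loss (sketch): MSE + content + adversarial.
    sr/hr: super-resolved and ground-truth images; disc_out: the
    discriminator's probability that sr is real."""
    mse = F.mse_loss(sr, hr)                      # pixel-space fidelity
    content = F.mse_loss(_vgg(sr), _vgg(hr))      # feature-space fidelity
    adv = -torch.log(disc_out + 1e-8).mean()      # fool the discriminator
    return w_mse * mse + w_content * content + w_adv * adv
```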
|
68 |
Deep Learning-based Regularizers for Cone Beam Computed Tomography Reconstruction / Djupinlärningsbaserade regulariserare för rekonstruktion inom volymtomografi. Syed, Sabina; Stenberg, Josefin. January 2023 (has links)
Cone Beam Computed Tomography is a technology to visualize the 3D interior anatomy of a patient. It is important for image-guided radiation therapy in cancer treatment. During a scan, iterative methods are often used for the image reconstruction step. A key challenge is the ill-posedness of the resulting inversion problem, causing the images to become noisy. To combat this, regularizers can be introduced, which help stabilize the problem. This thesis focuses on Adversarial Convex Regularization, which uses deep learning to regularize the scans according to a target image quality. It can be interpreted in a Bayesian setting by letting the regularizer be the prior, approximating the likelihood with the measurement error, and obtaining the patient image through the maximum-a-posteriori estimate. Adversarial Convex Regularization has previously shown promising results in regular Computed Tomography, and this study aims to investigate its potential in Cone Beam Computed Tomography. Three different learned regularization methods have been developed, all based on Convolutional Neural Network architectures. One model is based on three-dimensional convolutional layers, while the remaining two rely on 2D layers. These two are later adapted to 3D reconstruction, either by stacking a 2D model or by averaging 2D models trained in three orthogonal planes. All neural networks are trained on simulated male pelvis data provided by Elekta. The 3D convolutional neural network model has proven to be heavily memory-consuming, while not performing better than current reconstruction methods with respect to image quality. The two architectures based on merging multiple 2D neural network gradients for 3D reconstruction are novel contributions that avoid the memory issues. These two models outperform current methods in terms of multiple image quality metrics, such as Peak Signal-to-Noise Ratio and Structural Similarity Index Measure, and they also generalize well to real Cone Beam Computed Tomography data. Additionally, the architecture based on a weighted average of 2D neural networks is able to capture spatial interactions to a larger extent and can be adjusted to favor the plane that best shows the field of interest, a possibly desirable feature in medical practice.
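A sketch of the maximum-a-posteriori reconstruction loop that this Bayesian reading implies: gradient descent on data fidelity plus a learned regularizer. The forward operator `A`, the step size, and the regularizer network are placeholders standing in for the cone-beam setup.

```python
import torch

def map_reconstruct(A, y, regularizer, x0, lam=0.1, step=1e-3, iters=100):
    """MAP estimate (sketch): x* = argmin ||A(x) - y||^2 + lam * R(x).
    A: forward projector (placeholder); regularizer: learned prior R."""
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=step)
    for _ in range(iters):
        opt.zero_grad()
        # Measurement error stands in for the negative log-likelihood;
        # the learned regularizer stands in for the negative log-prior.
        loss = ((A(x) - y) ** 2).sum() + lam * regularizer(x).sum()
        loss.backward()
        opt.step()
    return x.detach()
```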
|
69 |
SELF-SUPERVISED ONE-SHOT LEARNING FOR AUTOMATIC SEGMENTATION OF GAN-GENERATED IMAGES. Ankit V Manerikar (16523988). 11 July 2023 (has links)
Generative Adversarial Networks (GANs) have consistently defined the state-of-the-art in the generative modelling of high-quality images in several applications. The images generated using GANs, however, do not lend themselves to being directly used in supervised learning tasks without first being curated through annotations. This dissertation investigates how to carry out automatic on-the-fly segmentation of GAN-generated images and how this can be applied to the problem of producing high-quality simulated data for X-ray based security screening. The research exploits the hidden layer properties of GAN models in a self-supervised learning framework for the automatic one-shot segmentation of images created by a style-based GAN. The framework consists of a novel contrastive learner that is based on a Sinkhorn distance-based clustering algorithm and that learns a compact feature space for per-pixel classification of the GAN-generated images. This facilitates faster learning of the feature vectors for one-shot segmentation and allows on-the-fly automatic annotation of the GAN images. We have tested our framework on a number of standard benchmarks (CelebA, PASCAL, LSUN) to yield a segmentation performance that not only exceeds the semi-supervised baselines by an average wIoU margin of 1.02% but also improves the inference speeds by a factor of 4.5. This dissertation also presents BagGAN, an extension of our framework to the problem domain of X-ray based baggage screening. BagGAN produces annotated synthetic baggage X-ray scans to train machine-learning algorithms for the detection of prohibited items during security screening. We have compared the images generated by BagGAN with those created by deterministic ray-tracing models for X-ray simulation and have observed that our GAN-based baggage simulator yields a significantly improved performance in terms of image fidelity and diversity. The BagGAN framework is also tested on the PIDRay and other baggage screening benchmarks to produce segmentation results comparable to their respective baseline segmenters based on manual annotations.
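A sketch of the Sinkhorn normalization underlying the clustering step mentioned above: alternating row and column scaling turns a pixel-to-cluster cost matrix into a balanced soft assignment. The entropy weight and iteration count are illustrative; the dissertation's exact algorithm is not reproduced.

```python
import torch

def sinkhorn(cost, eps=0.05, iters=50):
    """Sinkhorn-Knopp (sketch): balanced soft assignment of n samples to
    k clusters from an (n, k) cost matrix; eps controls smoothing."""
    n, k = cost.shape
    K = torch.exp(-cost / eps)                # Gibbs kernel
    r = torch.full((n,), 1.0 / n)             # uniform sample marginal
    c = torch.full((k,), 1.0 / k)             # uniform cluster marginal
    u, v = torch.ones(n) / n, torch.ones(k) / k
    for _ in range(iters):
        u = r / (K @ v)                       # match row marginals
        v = c / (K.T @ u)                     # match column marginals
    return torch.diag(u) @ K @ torch.diag(v)  # soft transport plan
```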
|
70 |
Time-series Generative Adversarial Networks for Telecommunications Data Augmentation. Dimyati, Hamid. January 2021 (has links)
Time-series Generative Adversarial Networks (TimeGAN) is proposed to overcome the GAN model's insufficiency in producing synthetic samples that inherit the predictive ability of the original time-series data. TimeGAN combines the unsupervised adversarial loss in the GAN framework with a supervised loss adopted from an autoregressive model. However, TimeGAN, like other GAN-based models, only learns from a set of shorter sequences extracted from the original time series. This behavior has severe consequences when performing data augmentation for time series with multiple seasonal patterns, as found in mobile telecommunication network data. This study examined the effectiveness of the TimeGAN model, with the help of Dynamic Time Warping (DTW) and different types of RNN as its architecture, in producing synthetic mobile telecommunication network data, which can be utilized to improve the forecasting performance of statistical and deep learning models relative to baseline models trained only on the original data. The experiment results indicate that DTW helps TimeGAN maintain the multiple seasonal attributes. In addition, either LSTM or Bidirectional LSTM as the TimeGAN architecture ensures the model is robust to the mode collapse problem and creates synthetic data that are diversified and indistinguishable from the original time series. Finally, merging both original and synthetic time series is a compelling way to significantly improve the deep learning model's forecasting performance, but fails to do so for the statistical model.
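For reference, the classic dynamic-programming DTW distance used to compare sequences with shifted seasonal patterns looks as follows; this is the textbook O(n*m) recursion, not the thesis's exact integration of DTW into TimeGAN.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW (sketch): minimal cumulative alignment cost between
    two 1-D sequences, allowing local stretching in time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of diagonal match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]
```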
|