1 |
On the Neural Representation for Adversarial Attack and Defense - Qiuling Xu (17121274), 20 October 2023
<p dir="ltr">Neural representations are high-dimensional embeddings generated during the feed-forward process of neural networks. These embeddings compress raw input information and extract abstract features beneficial for downstream tasks. However, effectively utilizing these representations poses challenges due to their inherent complexity. This complexity arises from the non-linear relationship between inputs and neural representations, as well as the diversity of the learning process.</p><p dir="ltr">In this thesis, we propose effective methods to utilize neural representations for adversarial attack and defense. Our approach generally involves decomposing complex neural representations into smaller, more analyzable parts. We also seek general patterns emerging during learning to better understand the semantic meaning associated with neural representations.</p><p dir="ltr">We demonstrate that formalizing neural representations can reveal models' weaknesses and aid in defending against poison attacks. Specifically, we define a new type of adversarial attack using neural style, a special component of neural representation. This new attack uncovers novel aspects of the models' vulnerabilities. </p><p dir="ltr">Furthermore, we develop an interpretation of neural representations by approximating their marginal distribution, treating intermediate neurons as feature indicators. By properly harnessing these rich feature indicators, we address scalability and imperceptibility issues related to pixel-wise bounds.</p><p dir="ltr">Finally, we discover that neural representations contain crucial information about how neural networks make decisions. Leveraging the general patterns in neural representations, we design algorithms to remove unwanted and harmful functionalities from neural networks, thereby mitigating poison attacks.</p>
|
2 |
Efficient and Secure Deep Learning Inference System: A Software and Hardware Co-design Perspective - January 2020
abstract: The advances of Deep Learning (DL) achieved recently have demonstrated its great potential to match or surpass human-level performance across multiple domains. Consequently, there is a rising demand to deploy state-of-the-art DL algorithms, e.g., Deep Neural Networks (DNNs), in real-world applications to free people from repetitive work. On the one hand, the impressive performance achieved by DNNs is normally accompanied by the drawbacks of intensive memory and power usage, due to enormous model size and high computation workload, which significantly hampers their deployment on resource-limited cyber-physical systems or edge devices. Thus, the urgent demand for enhancing the inference efficiency of DNNs has attracted great research interest across various communities. On the other hand, scientists and engineers still have insufficient knowledge about the principles of DNNs, which are therefore mostly treated as black boxes. Under such circumstances, the DNN is like "the sword of Damocles", where its security or fault-tolerance capability is an essential concern that cannot be circumvented.
Motivated by the aforementioned concerns, this dissertation comprehensively investigates the emerging efficiency and security issues of DNNs from both software and hardware design perspectives. From the efficiency perspective, model compression via quantization is elaborated as the foundational technique for efficient inference of a target DNN. In order to maximize the inference performance boost, the deployment of quantized DNNs on a revolutionary Computing-in-Memory based neural accelerator is presented in a cross-layer (device/circuit/system) fashion. From the security perspective, the well-known adversarial attack is investigated, spanning from its original input-attack form (i.e., adversarial example generation) to its parameter-attack variant. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
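As a rough illustration of the quantization foundation mentioned above, here is a minimal sketch of symmetric uniform post-training weight quantization; the dissertation's actual quantization scheme and its Computing-in-Memory mapping are more involved:

```python
# Sketch: symmetric uniform post-training weight quantization, the basic
# compression idea behind efficient DNN inference (illustrative only).
import numpy as np

def quantize_uniform(w, n_bits=8):
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax              # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale                               # int8 weights + fp32 scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # stand-in weight matrix
q, s = quantize_uniform(w)
print("mean abs quantization error:", np.abs(w - dequantize(q, s)).mean())
```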
|
3 |
Detecting Adversarial Examples by Measuring their Stress Response - January 2019
abstract: Machine learning (ML) and deep neural networks (DNNs) have achieved great success in a variety of application domains. However, despite significant efforts to make these networks robust, they remain vulnerable to adversarial attacks, in which input that is perceptually indistinguishable from natural data can be erroneously classified with high prediction confidence. Works on defending against adversarial examples can be broadly classified as correcting or detecting, which aim, respectively, at negating the effects of the attack and correctly classifying the input, or at detecting and rejecting the input as adversarial. In this work, a new approach for detecting adversarial examples is proposed. The approach takes advantage of the robustness of natural images to noise. As noise is added to a natural image, the prediction probability of its true class drops, but the drop is not sudden or precipitous. The same does not seem to hold for adversarial examples. In other words, the stress response profile of natural images seems different from that of adversarial examples, so adversarial examples could be detected by their stress response profile. An evaluation of this approach for detecting adversarial examples is performed on the MNIST, CIFAR-10 and ImageNet datasets. Experimental data shows that this approach is effective at detecting some adversarial examples on small-scale images with simple content, with little sacrifice in benign accuracy. / Dissertation/Thesis / Masters Thesis Computer Science 2019
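A minimal sketch of this stress-response test follows; the noise levels, trial count, and detection threshold are illustrative assumptions rather than the thesis's tuned values, and `model_prob` stands in for any classifier returning a probability vector:

```python
# Sketch: detect adversarial inputs from their "stress response" to noise.
# Noise levels and threshold are illustrative, not the thesis's tuned values.
import numpy as np

def stress_profile(model_prob, x, sigmas=np.linspace(0.0, 0.2, 11), trials=8):
    # model_prob(x) -> class probability vector; x is an image in [0, 1]
    base = int(np.argmax(model_prob(x)))
    profile = []
    for sigma in sigmas:
        p = 0.0
        for _ in range(trials):
            noisy = np.clip(x + np.random.normal(0.0, sigma, x.shape), 0.0, 1.0)
            p += model_prob(noisy)[base]
        profile.append(p / trials)   # mean probability of the original class
    return np.array(profile)

def looks_adversarial(profile, drop_thresh=0.5):
    # Natural images degrade gradually; an early, precipitous collapse of the
    # predicted class's probability is flagged as adversarial
    return (profile[0] - profile[2]) > drop_thresh
```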
|
4 |
On robustness and explainability of deep learning - Le, Hieu, 06 February 2024
There has been tremendous progress in machine learning, and specifically deep learning, in the last few decades. However, due to the inherent nature of deep neural networks, many questions regarding explainability and robustness remain open. More specifically, as deep learning models have been shown to be brittle against malicious changes, when the models fail and how to construct models that are more robust against these types of attacks are questions of high interest. This work tries to answer some of the questions regarding the explainability and robustness of deep learning by tackling the problem across four topics. First, real-world datasets often contain noise which can badly impact classification model performance. Furthermore, adversarial noise can be crafted to alter classification results. Geometric multi-resolution analysis (GMRA) is capable of capturing and recovering manifolds while preserving geometric features. We showed that GMRA can be applied to retrieve low-dimensional representations, which are more robust to noise and simplify classification models. Secondly, I showed that adversarial defense in the image domain can be partially achieved without knowing the specific attacking method by employing a preprocessing model trained on the task of denoising. Next, I tackled the problem of adversarial generation in the text domain within the context of real-world applications. I devised a new method of crafting adversarial text by using filtered unlabeled data, which is usually more abundant than labeled data. Experimental results showed that the new method created more natural and relevant adversarial texts compared with current state-of-the-art methods. Lastly, I presented my work on referring expression generation, aiming at creating a more explainable natural language model. The proposed method decomposes the referring expression generation task into two subtasks, and experimental results showed that the generated expressions are more comprehensible to human readers. I hope that all the approaches proposed here can help further our understanding of the explainability and robustness of deep learning models.
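GMRA builds a multiscale hierarchy of local linear approximations to the data manifold; as a far simpler stand-in (a substitution for illustration, not the thesis's method), the sketch below denoises by projecting onto a single low-dimensional PCA subspace, which captures the same intuition of mapping inputs onto a low-dimensional approximation of the data:

```python
# Sketch: denoising by projection onto a low-dimensional subspace. PCA is a
# single global linear piece, whereas GMRA uses a multiscale collection of
# local pieces; this is a simplified stand-in for the idea only.
import numpy as np

def pca_denoise(X, k=10):
    # X: (n_samples, n_features); keep the top-k principal directions
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T @ Vt[:k] + mean   # reconstruct from k components

X = np.random.randn(200, 64)     # stand-in noisy data
X_denoised = pca_denoise(X)      # a classifier would consume this instead
```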
|
5 |
Defending against Adversarial Attacks in Speaker Verification Systems - Li-Chi Chang (11178210), 26 July 2021
With the advance of Internet of Things technologies, smart devices and virtual personal assistants at home, such as Google Assistant, Apple Siri, and Amazon Alexa, have been widely used to control and access different objects like door locks, bulbs, air conditioners, and even bank accounts, which makes our lives convenient. Because of its ease of operation, voice control has become a main interface between users and these smart devices. To make voice control more secure, speaker verification systems have been researched to use the human voice as biometrics to accurately identify a legitimate user and prevent illegal access. Recent studies, however, have shown that speaker verification systems are vulnerable to different security attacks such as replay, voice cloning, and adversarial attacks. Among all attacks, adversarial attacks are the most dangerous and very challenging to defend against. Currently, there is no known method that can effectively defend against such an attack in speaker verification systems.
The goal of this project is to design and implement a defense system that is simple, light-weight, and effective against adversarial attacks for speaker verification. To achieve this goal, we study the audio samples from adversarial attacks in both the time domain and the Mel spectrogram, and find that the generated adversarial audio is simply a clean illegal audio with small perturbations that are similar to white noise but well designed to fool speaker verification. Our intuition is that if these perturbations can be removed or modified, adversarial attacks can potentially lose their attacking ability. Therefore, we propose to add a plugin-function module to preprocess the input audio before it is fed into the verification system. As a first attempt, we study two opposite plugin functions: denoising, which attempts to remove or reduce perturbations, and noise-adding, which adds small Gaussian noise to an input audio. We show through experiments that both methods can significantly degrade the performance of a state-of-the-art adversarial attack. Specifically, denoising and noise-adding reduce the targeted attack success rate from 100% to only 56% and 5.2%, respectively. Moreover, noise-adding slows down the attack by a factor of 25 and has a minor effect on the normal operations of a speaker verification system. Therefore, we believe that noise-adding can be applied to any speaker verification system to defend against adversarial attacks. To the best of our knowledge, this is the first attempt to apply the noise-adding method to defend against adversarial attacks in speaker verification systems.
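A minimal sketch of the noise-adding plugin function described above might look as follows; the noise level is an illustrative assumption, since the thesis tunes it so that normal verification is barely affected:

```python
# Sketch: the noise-adding plugin function, applied to audio before it is fed
# into the speaker verification system. The SNR target is illustrative.
import numpy as np

def noise_adding(waveform, snr_db=30.0):
    # waveform: 1-D float array; add white Gaussian noise at a target SNR
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), waveform.shape)
    return waveform + noise

# Usage: score = verify(noise_adding(audio)) instead of verify(audio); the
# random noise disturbs the attacker's carefully crafted perturbation while
# leaving enough speaker information for normal verification.
```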
|
6 |
Adversarial attacks and defense mechanisms to improve robustness of deep temporal point processes - Samira Khorshidi (13141233), 08 September 2022
Temporal point processes (TPP) are mathematical approaches for modeling asynchronous event sequences by considering the temporal dependency of each event on past events and its instantaneous rate. Temporal point processes can model various problems, from earthquake aftershocks, trade orders, gang violence, and reported crime patterns, to network analysis, infectious disease transmissions, and virus spread forecasting. In each of these cases, the entity's behavior with the corresponding information is noted over time as an asynchronous event sequence, and the analysis is done using temporal point processes, which provide a means to define the generative mechanism of the sequence of events and ultimately to predict events and investigate causality.
Among point processes, the Hawkes process, as a stochastic point process, is able to model a wide range of contagious and self-exciting patterns. One of the Hawkes process's well-known applications is predicting the evolution of viral processes on networks, which is an important problem in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used to predict viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks. Recent attempts have been made to use assortativity to address this shortcoming. This thesis illustrates how the evolution of such a viral process is sensitive to the underlying network's structure.
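For reference, a Hawkes process with exponential kernel has conditional intensity lambda(t) = mu + sum over past events t_i of alpha * beta * exp(-beta * (t - t_i)), where alpha is the branching ratio. The sketch below simulates such a process with Ogata's thinning algorithm; the parameters are illustrative, and the network-based viral models studied in this thesis are richer:

```python
# Sketch: simulate a self-exciting Hawkes process with exponential kernel via
# Ogata's thinning algorithm. Parameters are illustrative.
import numpy as np

def intensity(t, events, mu=0.5, alpha=0.8, beta=1.0):
    past = events[events < t]
    return mu + np.sum(alpha * beta * np.exp(-beta * (t - past)))

def simulate_hawkes(T=100.0, mu=0.5, alpha=0.8, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    events, t = np.array([]), 0.0
    while t < T:
        # Intensity just after t (plus the jump of a possible event at t)
        # bounds the decaying intensity until the next event
        lam_bar = intensity(t, events, mu, alpha, beta) + alpha * beta
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.uniform() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events = np.append(events, t)   # accept with prob lam(t)/lam_bar
    return events

# alpha is the branching ratio: alpha < 1 keeps the process sub-critical; an
# attack that drives the fitted alpha above 1 flips it to super-critical.
print(len(simulate_hawkes()), "events")
```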
In Chapter 3, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution combined with assortativity to explain variations in the evolution of viral processes across networks with identical degree distribution. Using a data-driven approach, by coupling predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results highlight the importance of graphlets and identify a small collection of graphlets that may have the most significant influence over the viral processes on a network.
Due to the flexibility and expressiveness of deep learning techniques, several neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on possible adversarial attacks against such models and on their robustness to adversarial attacks and natural shocks to systems. Furthermore, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown.
In Chapter 4, we propose several white-box and black-box adversarial attacks against deep temporal point processes. Additionally, we investigate the transferability of white-box adversarial attacks against point processes modeled by deep neural networks, which is considered a more elevated risk. Extensive experiments confirm that neural point processes are vulnerable to adversarial attacks. Such vulnerability is illustrated both in terms of predictive metrics and in the effect of attacks on the underlying point process's parameters. Specifically, adversarial attacks successfully shift the temporal Hawkes process regime from sub-critical to super-critical and manipulate the modeled parameters, which is a risk to parametric modeling approaches. Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using crime and Covid-19 pandemic datasets as examples.
Considering the security vulnerability of deep-learning models, including deep temporal point processes, to adversarial attacks, it is essential to ensure the robustness of deployed algorithms, despite the success of deep learning techniques in modeling temporal point processes.
In Chapter 5, we study the robustness of deep temporal point processes against several proposed adversarial attacks from the adversarial defense viewpoint. Specifically, we investigate the effectiveness of adversarial training using universal adversarial samples in improving the robustness of deep point processes. Additionally, we propose a general point process domain-adapted (GPDA) regularization, which is specifically applicable to temporal point processes, to reduce the effect of adversarial attacks and acquire an empirically robust model. In this approach, unlike other computationally expensive approaches, there is no need for additional back-propagation in the training step, and no further network is required. Ultimately, we propose an adversarial detection framework that is trained in the Generative Adversarial Network (GAN) manner and solely on clean training data.
Finally, in Chapter 6, we discuss implications of the research and future research directions.
|
7 |
Accelerator Architecture for Secure and Energy Efficient Machine Learning - Samavatian, Mohammad Hossein, 12 September 2022
No description available.
|
8 |
MACHINE LEARNING METHODS FOR SPECTRAL ANALYSIS - Youlin Liu (11173365), 26 July 2021
Measurement science has seen fast growth of data in both volume and complexity in recent years. New algorithms and methodologies have been developed to aid decision making in the measurement sciences, and this process is automated to reduce manual labor. In light of the adversarial approaches demonstrated in digital image processing, Chapter 2 demonstrates how the same attack is possible with spectroscopic data. Chapter 3 takes the question presented in Chapter 2 and optimizes the classifier through an iterative approach. The optimized LDA was cross-validated and compared with other standard chemometrics methods, and the application was extended to bi-distribution mineral Raman data. Chapter 4 focuses on a novel Artificial Neural Network architecture designed for diffusion measurements; the architecture was tested with both simulated and experimental datasets. Chapter 5 presents the construction of a novel infrared hyperspectral microscope for complex chemical compound classification, with detailed discussion of image segmentation and the choice of classifier.
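As a toy illustration of the cross-validated LDA baseline mentioned for Chapter 3, the sketch below classifies synthetic stand-in spectra (not the thesis's mineral Raman data):

```python
# Sketch: a cross-validated LDA baseline for spectral classification, with
# synthetic stand-in spectra (two classes differing in one band).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))        # 120 spectra, 500 spectral channels
y = np.repeat([0, 1], 60)
X[y == 1, 100:110] += 1.5              # class 1 has an extra synthetic band

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print("5-fold CV accuracy:", scores.mean())
```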
|
9 |
Imaging and Object Detection under Extreme Lighting Conditions and Real World Adversarial Attacks - Xiangyu Qu (16385259), 22 June 2023
Imaging and computer vision systems deployed in real-world environments face the challenge of accommodating a wide range of lighting conditions. However, the cost, the demand for high resolution, and the miniaturization of imaging devices impose physical constraints on sensor design, limiting both the dynamic range and effective aperture size of each pixel. Consequently, conventional CMOS sensors fail to deliver satisfactory capture in high dynamic range scenes or under photon-limited conditions, thereby impacting the performance of downstream vision tasks. In this thesis, we address two key problems: 1) exploring the utilization of spatial multiplexing, specifically spatially varying exposure tiling, to extend sensor dynamic range and optimize scene capture, and 2) developing techniques to enhance the robustness of object detection systems under photon-limited conditions.
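As a toy illustration of spatially varying exposure tiling (a simplified checkerboard model, not the sensor design studied in the thesis):

```python
# Sketch: a checkerboard of short/long exposures (spatially varying exposure)
# and a naive fusion that borrows a neighboring pixel where one saturates.
# Real designs interpolate more carefully; this only conveys the idea.
import numpy as np

def sve_capture(radiance, exposures=(1.0, 4.0), full_well=1.0):
    h, w = radiance.shape
    mask = (np.indices((h, w)).sum(axis=0) % 2).astype(float)
    exposure = exposures[0] * (1.0 - mask) + exposures[1] * mask
    return np.clip(radiance * exposure, 0.0, full_well), exposure

def sve_fuse(raw, exposure, full_well=1.0):
    est = raw / exposure                       # radiance estimate per pixel
    # Horizontal neighbors carry the other exposure in the checkerboard
    neighbor = np.roll(raw, 1, axis=1) / np.roll(exposure, 1, axis=1)
    return np.where(raw < 0.98 * full_well, est, neighbor)

radiance = np.random.rand(8, 8)                # scene radiance in [0, 1)
raw, exp_map = sve_capture(radiance)           # long-exposure pixels may clip
recovered = sve_fuse(raw, exp_map)             # extended-range estimate
```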
In addition to challenges imposed by natural environments, real-world vision systems are susceptible to adversarial attacks in the form of artificially added digital content. Therefore, this thesis presents a comprehensive pipeline for constructing a robust and scalable system to counter such attacks.
|
10 |
Improving the Robustness of Deep Neural Networks against Adversarial Examples via Adversarial Training with Maximal Coding Rate Reduction - Chu, Hsiang-Yu, January 2022
Deep learning is one of the hottest scientific topics at the moment. Deep convolutional networks can solve various complex tasks in the field of image processing. However, adversarial attacks have been shown to be capable of fooling deep learning models. An adversarial attack is accomplished by applying specially designed perturbations to the input image of a deep learning model. The perturbations are almost visually indistinguishable to human eyes but can fool classifiers into making wrong predictions. In this thesis, adversarial attacks and methods to improve deep learning models' robustness against adversarial samples were studied. Five different adversarial attack algorithms were implemented. These attack algorithms included white-box and black-box attacks, targeted and non-targeted attacks, and image-specific and universal attacks. The adversarial attacks generated adversarial examples that resulted in a significant drop in classification accuracy. Adversarial training is one commonly used strategy to improve the robustness of deep learning models against adversarial examples. It has been shown that adversarial training can provide an additional regularization benefit beyond that provided by using dropout. Adversarial training is performed by incorporating adversarial examples into the training process. Traditionally, cross-entropy loss is used as the loss function during this process. In order to improve the robustness of deep learning models against adversarial examples, in this thesis we propose two new methods of adversarial training by applying the principle of Maximal Coding Rate Reduction. The Maximal Coding Rate Reduction loss function maximizes the coding rate difference between the whole data set and the sum of each individual class. We evaluated the performance of different adversarial training methods by comparing the clean accuracy, adversarial accuracy, and local Lipschitzness. It was shown that adversarial training with the Maximal Coding Rate Reduction loss function yields a more robust network than the traditional adversarial training method.
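For concreteness, the Maximal Coding Rate Reduction objective measures the coding rate R(Z) = (1/2) log det(I + (d / (m * eps^2)) Z^T Z) of a feature batch Z with m samples of dimension d, and maximizes the gap between the whole batch's rate and the weighted per-class rates. A minimal PyTorch sketch follows; eps and the feature interface are illustrative choices, not the thesis's exact training setup:

```python
# Sketch: the Maximal Coding Rate Reduction (MCR^2) objective as a training
# loss. eps and the feature interface are illustrative.
import torch

def coding_rate(Z, eps=0.5):
    # Z: (m, d) feature batch; R = 1/2 logdet(I + d/(m eps^2) Z^T Z)
    m, d = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (m * eps ** 2)) * Z.T @ Z)

def mcr2_loss(Z, labels, eps=0.5):
    # Negative rate reduction, so that minimizing the loss maximizes the gap
    # between the whole batch's rate and the per-class rates
    m = Z.shape[0]
    expand = coding_rate(Z, eps)
    compress = 0.0
    for c in labels.unique():
        Zc = Z[labels == c]
        compress = compress + (Zc.shape[0] / m) * coding_rate(Zc, eps)
    return -(expand - compress)

# Usage: loss = mcr2_loss(backbone(x), y); for adversarial training, x mixes
# clean inputs with adversarial examples generated on the fly.
```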
|