11 |
A Graybox Defense Through Bootstrapping Deep Neural Network. Kirsen L Sullivan (14105763), 11 November 2022
Building a robust deep neural network (DNN) framework turns out to be very difficult, as adaptive attacks are developed that break each robust DNN strategy. In this work we first study the bootstrap distribution of DNN weights and biases. We bootstrap three DNN models: a simple three-layer convolutional neural network (CNN), VGG16 with 13 convolutional layers and 3 fully connected layers, and Inception v3 with 42 layers. Both VGG16 and Inception v3 are trained on CIFAR10 so that the bootstrapped networks converge. We then compare the bootstrap DNN parameter distributions with those obtained from training the DNN with different random initial seeds. We find that the bootstrap DNN parameter distributions change as the DNN model size increases, and that they are very close to the distributions obtained from training with different random initial seeds. The bootstrap DNN parameter distributions are used to create a graybox defense strategy: we randomize a certain percentage of the weights of the first convolutional layers of a DNN model to create a random ensemble of DNNs. Based on one trained DNN, we can draw infinitely many random DNN ensembles, so adaptive attacks lose their target. A random DNN ensemble is resilient to adversarial attacks and maintains performance on clean data.
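As a rough illustration of the ensemble construction described above, the sketch below resamples a fraction of the first convolutional layer's weights around their trained values. The Gaussian noise scale sigma, the fraction frac, and the small CNN architecture are assumptions standing in for the bootstrap-estimated parameter distributions and the models used in the thesis.

    # Hypothetical sketch: build a random ensemble by re-sampling a fraction of the
    # first convolutional layer's weights around their trained values.
    import copy
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
            self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
            self.head = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = torch.relu(self.conv1(x))
            x = torch.max_pool2d(torch.relu(self.conv2(x)), 4)
            return self.head(x.flatten(1))

    def randomized_copy(model: SmallCNN, frac: float = 0.2, sigma: float = 0.05) -> SmallCNN:
        """Return a copy with a fraction `frac` of conv1 weights perturbed by Gaussian noise."""
        member = copy.deepcopy(model)
        with torch.no_grad():
            w = member.conv1.weight
            mask = (torch.rand_like(w) < frac).float()   # which weights to randomize
            w.add_(mask * sigma * torch.randn_like(w))   # resample the selected weights
        return member

    def ensemble_predict(model: SmallCNN, x: torch.Tensor, n_members: int = 5) -> torch.Tensor:
        """Average softmax outputs over a freshly drawn random ensemble."""
        with torch.no_grad():
            probs = [torch.softmax(randomized_copy(model)(x), dim=1) for _ in range(n_members)]
        return torch.stack(probs).mean(0)

    if __name__ == "__main__":
        model = SmallCNN().eval()
        x = torch.randn(2, 3, 32, 32)            # stand-in for CIFAR-10 inputs
        print(ensemble_predict(model, x).shape)  # torch.Size([2, 10])

Because each call to randomized_copy draws a fresh member, the ensemble an adaptive attacker optimizes against is never the one used at prediction time, which is the sense in which the attack "loses the target".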
|
12 |
Adversarial Attacks On Graph Convolutional Transformer With EHR Data. Siddhartha Pothukuchi (18437181), 28 April 2024
<p dir="ltr">This research explores adversarial attacks on Graph Convolutional Transformer (GCT) models that utilize Electronic Health Record (EHR) data. As deep learning models become increasingly integral to healthcare, securing their robustness against adversarial threats is critical. This research assesses the susceptibility of GCT models to specific adversarial attacks, namely the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA). It examines their effect on the model’s prediction of mortality and readmission. Through experiments conducted with the MIMIC-III and eICU datasets, the study finds that although the GCT model exhibits superior performance in processing EHR data under normal conditions, its accuracy drops when subjected to adversarial conditions—from an accuracy of 86% with test data to about 57% and an area under the curve (AUC) from 0.86 to 0.51. These findings averaged across both datasets and attack methods, underscore the urgent need for effective adversarial defense mechanisms in AI systems used in healthcare. This thesis contributes to the field by identifying vulnerabilities and suggesting various strategies to enhance the resilience of GCT models against adversarial manipulations.</p>
|
13 |
Trojan Attacks and Defenses on Deep Neural Networks. Yingqi Liu (13943811), 13 October 2022
With the fast spread of machine learning techniques, sharing and adopting public deep neural networks have become very popular. Because deep neural networks are not intuitive for humans to understand, malicious behaviors can be injected into them undetected. We call this a trojan attack, or backdoor attack, on neural networks. Trojaned models operate normally when regular inputs are provided, but misclassify to a specific output label when the input is stamped with a special pattern called a trojan trigger. Deploying trojaned models can cause severe consequences, including endangering human lives (in applications such as autonomous driving). Trojan attacks on deep neural networks introduce two challenges. From the attacker's perspective, the training data and training process are usually not accessible, so the attack must be carried out without access to training data. From the user's perspective, publicly shared deep neural networks must be scanned quickly to detect trojaned models.
This dissertation addresses these challenges. For trojan attacks without access to training data, we propose to invert the neural network to generate a general trojan trigger, and then retrain the model with reverse-engineered training data to inject malicious behaviors into the model. The malicious behaviors are activated only by inputs stamped with the trojan trigger. To scan for and detect trojaned models, we develop a novel technique that analyzes inner neuron behaviors by determining how output activations change when different levels of stimulation are introduced to a neuron. A trojan trigger is then reverse-engineered through an optimization procedure using the stimulation analysis results, to confirm that a neuron is truly compromised. Furthermore, for complex trojan attacks, we propose a novel complex trigger detection method that leverages symmetric feature differencing to distinguish the features of injected complex triggers from natural features. For trojan attacks on NLP models, we propose a novel backdoor scanning technique. It transforms a subject model to an equivalent but differentiable form, inverts a distribution of words denoting their likelihood of appearing in the trigger, and applies a word discriminativity analysis to determine whether the subject model is particularly discriminative for the presence of the likely trigger words.
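A minimal sketch of the kind of trigger reverse-engineering described above, with details assumed rather than taken from the dissertation: a masked pattern is optimized to maximize the activation of one suspected inner neuron of a stand-in classifier. The 6x6 stamp location, the architecture, and the chosen neuron index are all illustrative.

    # Optimize a masked trigger pattern that maximally excites one candidate neuron.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
                          nn.Linear(8 * 32 * 32, 10))            # stand-in classifier
    neuron_idx = 5                                                # hypothetical candidate neuron

    def neuron_activation(imgs: torch.Tensor) -> torch.Tensor:
        """Spatially averaged activation of the candidate neuron after the first ReLU."""
        return model[1](model[0](imgs)).mean(dim=(2, 3))[:, neuron_idx]

    clean = torch.rand(4, 3, 32, 32)                              # stand-in clean images
    mask = torch.zeros(1, 3, 32, 32)
    mask[:, :, :6, :6] = 1.0                                      # stamp only a 6x6 corner patch
    pattern = torch.zeros(1, 3, 32, 32, requires_grad=True)       # trigger pattern to optimize

    opt = torch.optim.Adam([pattern], lr=0.1)
    for _ in range(200):
        stamped = clean * (1 - mask) + mask * torch.sigmoid(pattern)  # keep pixels in [0, 1]
        loss = -neuron_activation(stamped).mean()                     # maximize the neuron's output
        opt.zero_grad()
        loss.backward()
        opt.step()

In a scanner, a stamp that flips many clean inputs to a single output label would then serve as evidence that the chosen neuron is compromised, in the spirit of the stimulation analysis described above.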
|
14 |
Robust Large Margin Approaches for Machine Learning in Adversarial Settings. Torkamani, MohamadAli, 21 November 2016
Machine learning algorithms are designed to learn from data and to use data for prediction and analysis. Many agencies now use machine learning algorithms to provide services and to perform tasks that used to be done by humans, including making high-stakes decisions. Reaching the right decision relies strongly on the correctness of the input data, which gives criminals a tempting incentive to deceive machine learning algorithms by manipulating the data fed to them. Yet traditional machine learning algorithms are not designed to be safe when confronted with unexpected inputs.
In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data.
Many complex questions to which a machine learning system must respond have complex answers. Such outputs can have internal structure, with exponentially many possible values. Adversarial machine learning is more challenging when the output to be predicted has a complex structure itself. A significant focus of this dissertation is therefore adversarial machine learning for predicting structured outputs.
In this thesis, we first develop a new algorithm that reliably performs collective classification: it jointly assigns labels to the nodes of graph-structured data and is robust to malicious changes that an adversary can make to the properties of different nodes. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but also generalizes better to future inputs even when there is no adversary.
While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate parameter learning for robust structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes with little additional computational cost. We prove that robustness to adversarial manipulation of data is equivalent to a form of regularization for large-margin structured prediction, and vice versa, which confirms previous results for simpler problems.
In practice, an ordinary adversary often lacks either the computational power to design the optimal attack or sufficient information about the learner's model to do so, and therefore tries many random changes to the input in the hope of making a breakthrough. This implies that if we minimize the expected loss function under adversarial noise, we obtain robustness against such mediocre adversaries. Dropout training resembles this noise injection scenario. Dropout was initially proposed as a regularization technique for neural networks; the procedure is simple: at each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout, which computes the expected loss under all possible dropout patterns and results in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several directions: we define dropout for the input space, the feature space, and input dimensions, and we introduce methods for approximate marginalization over the feature space, even when it is infinite-dimensional.
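As a small numerical illustration of marginalizing a loss over dropout, the sketch below uses a quadratic surrogate loss rather than the large-margin loss treated in the dissertation. The closed form it checks is standard for i.i.d. Bernoulli feature dropout and is only meant to show how the expected loss splits into a clean loss plus a data-dependent regularizer.

    # Expected quadratic loss under feature dropout: closed form vs. Monte Carlo check.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=20)
    x = rng.normal(size=20)
    y, p = 1.0, 0.3                                   # label and dropout probability

    def expected_sq_loss(w, x, y, p):
        """E_m[(y - w.(m*x))^2] with m_i ~ Bernoulli(1-p), in closed form."""
        mean_score = (1 - p) * w @ x
        var_score = p * (1 - p) * np.sum((w * x) ** 2)
        return (y - mean_score) ** 2 + var_score      # clean loss + data-dependent regularizer

    masks = rng.random((200_000, 20)) > p             # Monte Carlo check of the closed form
    mc = np.mean((y - (masks * x) @ w) ** 2)
    print(expected_sq_loss(w, x, y, p), mc)           # the two values should agree closely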
Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
|
15 |
Adversarial Machine Learning: A Comparative Study on Contemporary Intrusion Detection Datasets. Pacheco Monasterios, Yulexis D., January 2020
No description available.
|
16 |
Bridging the gap between human and computer vision in machine learning, adversarial and manifold learning for high-dimensional data. Jungeum Kim (12957389), 01 July 2022
In this dissertation, we study three important problems in modern deep learning: adversarial robustness, visualization, and partially monotonic function modeling. In the first part, we study the trade-off between robustness and standard accuracy in deep neural network (DNN) classifiers. We introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of standard natural accuracy and robustness. Specifically, we define a sensible adversary which is useful for learning a robust model while keeping high natural accuracy. We theoretically establish that the Bayes classifier is the most robust multi-class classifier with the 0-1 loss under sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation. Our experiments demonstrate that our method is effective in promoting robustness against various attacks and keeping high natural accuracy.
In the second part, we study nonlinear dimensional reduction with the manifold assumption, often called manifold learning. Despite the recent advances in manifold learning, current state-of-the-art techniques focus on preserving only local or global structure information of the data. Moreover, they are transductive; the dimensional reduction results cannot be generalized to unseen data. We propose iGLoMAP, a novel inductive manifold learning method for dimensional reduction and high-dimensional data visualization. iGLoMAP preserves both local and global structure information in the same algorithm by preserving geodesic distance between data points. We establish the consistency property of our geodesic distance estimators. iGLoMAP can provide the lower-dimensional embedding for an unseen, novel point without any additional optimization. We successfully apply iGLoMAP to the simulated and real-data settings with competitive experiments against state-of-the-art methods.
In the third part, we study partially monotonic DNNs. We model such a function by using the fundamental theorem for line integrals, where the gradient is parametrized by DNNs. For the validity of the model formulation, we develop a symmetric penalty for gradient modeling. Unlike existing methods, our method allows partially monotonic modeling for general DNN architectures and monotonic constraints on multiple variables. We empirically show the necessity of the symmetric penalty on a simulated dataset.
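For the third part, one way to write the model form described above uses the fundamental theorem for line integrals along the straight path from a base point $x_0$ to $x$. The notation below, with the gradient field parametrized by a DNN $g_\theta$, is ours and is only meant to illustrate the construction, not to reproduce the dissertation's exact formulation:

\[
  f_\theta(x) \;=\; f_\theta(x_0) \;+\; \int_0^1 g_\theta\bigl(x_0 + t\,(x - x_0)\bigr)\cdot (x - x_0)\,dt .
\]

When $g_\theta$ behaves as a true gradient field, which is what a symmetric penalty on its Jacobian encourages, constraining the $j$-th component of $g_\theta$ to be nonnegative makes $f_\theta$ non-decreasing in the $j$-th input, giving partial monotonicity on the chosen variables.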
|
17 |
Securing Connected and Automated Surveillance Systems Against Network Intrusions and Adversarial Attacks. Siddiqui, Abdul Jabbar, 30 June 2021
In recent years, connected surveillance systems have witnessed an unprecedented evolution owing to advancements in internet of things and deep learning technologies. However, vulnerabilities to various kinds of attacks, both at the cyber network-level and at the physical world-level, are also rising. This poses danger not only to the devices but also to human life and property. The goal of this thesis is to enhance the security of internet of things systems, focusing on connected video-based surveillance systems, by proposing multiple novel solutions that address security issues at the cyber network-level and defend such systems at the physical world-level.
To enhance security at the cyber network-level, this thesis designs and develops solutions to detect network intrusions in internet of things devices such as surveillance cameras. The first solution is TempoCode, a novel method for network flow feature transformation that introduces a temporal codebook-based encoding of flow features, capturing the key patterns of benign traffic in a learnt temporal codebook. The second solution takes an unsupervised learning-based approach and proposes four methods to build efficient and adaptive ensembles of neural network-based autoencoders for intrusion detection in internet of things devices such as surveillance cameras.
To address physical world-level attacks, this thesis studies, for the first time to the best of our knowledge, adversarial patch-based attacks against a convolutional neural network (CNN)-based surveillance system designed for vehicle make and model recognition (VMMR). Connected video-based surveillance systems built on deep learning models such as CNNs are highly vulnerable to adversarial machine learning attacks that could trick and fool them. In addition, this thesis proposes and evaluates a lightweight defense solution called SIHFR to mitigate the impact of such adversarial patches on CNN-based VMMR systems, leveraging the symmetry in vehicles' face images.
Experimental evaluations on recent realistic intrusion detection datasets prove the effectiveness of the developed solutions, in comparison to the state of the art, in detecting intrusions of various types and for different devices. Moreover, using a real-world surveillance dataset, we demonstrate the effectiveness of the SIHFR defense method, which does not require re-training of the target VMMR model and adds only minimal overhead. The solutions designed and developed in this thesis shall pave the way for future studies to develop efficient intrusion detection systems and adversarial attack mitigation methods for connected surveillance systems such as VMMR.
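The sketch below is a generic building block for the kind of unsupervised intrusion detection described above, not the thesis' TempoCode encoding or its adaptive ensemble methods: a single autoencoder is fitted to benign flow features, and flows whose reconstruction error exceeds a simple threshold (an assumed mean-plus-three-standard-deviations rule) are flagged.

    # Autoencoder-based anomaly flagging on network-flow features (illustrative only).
    import torch
    import torch.nn as nn

    ae = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))  # tiny autoencoder
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

    benign = torch.randn(1024, 32)                    # stand-in for benign flow features
    for _ in range(50):                               # fit on benign traffic only
        loss = nn.functional.mse_loss(ae(benign), benign)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        errors = ((ae(benign) - benign) ** 2).mean(dim=1)
        threshold = errors.mean() + 3 * errors.std()  # simple assumed threshold rule
        new_flow = torch.randn(1, 32) * 5             # an out-of-distribution flow
        is_intrusion = ((ae(new_flow) - new_flow) ** 2).mean() > threshold
        print(bool(is_intrusion))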
|
18 |
NON-INTRUSIVE WIRELESS SENSING WITH MACHINE LEARNING. Yucheng Xie (16558152), 30 August 2023
This dissertation explores non-intrusive wireless sensing for diet and fitness activity monitoring, in addition to assessing security risks in human activity recognition (HAR). It delves into the use of WiFi and millimeter wave (mmWave) signals for monitoring eating behaviors, discerning intricate eating activities, and observing fitness movements. The proposed systems harness variations in wireless signal propagation to record human behavior while providing exhaustive details on dietary and exercise habits. Significant contributions encompass unsupervised learning methodologies for detecting dietary and fitness activities, soft-decision and deep neural networks for assorted activity recognition, tiny-motion mechanisms for recovering subtle mouth muscle movements, space-time-velocity features for multi-person tracking, and generative adversarial networks and domain adaptation structures that reduce training effort and enable cross-domain deployments. A series of comprehensive tests validates the efficacy and precision of the proposed non-intrusive wireless sensing systems. Additionally, the dissertation probes security vulnerabilities in mmWave-based HAR systems and puts forth various sophisticated adversarial attacks: targeted, untargeted, universal, and black-box. It designs adversarial perturbations that aim to deceive the HAR models while striving to minimize detectability. The research offers insights into the issues surrounding non-intrusive sensing tasks, and efficient solutions to them, as well as the security challenges linked with wireless sensing technologies.
|
19 |
Towards Real-World Adversarial Examples in AI-Driven Cybersecurity. Liu, Hao, January 2022
No description available.
|
20 |
Deep Neural Network Structural Vulnerabilities And Remedial Measures. Yitao Li (9148706), 02 December 2023
<p dir="ltr">In the realm of deep learning and neural networks, there has been substantial advancement, but the persistent DNN vulnerability to adversarial attacks has prompted the search for more efficient defense strategies. Unfortunately, this becomes an arms race. Stronger attacks are being develops, while more sophisticated defense strategies are being proposed, which either require modifying the model's structure or incurring significant computational costs during training. The first part of the work makes a significant progress towards breaking this arms race. Let’s consider natural images, where all the feature values are discrete. Our proposed metrics are able to discover all the vulnerabilities surrounding a given natural image. Given sufficient computation resource, we are able to discover all the adversarial examples given one clean natural image, eliminating the need to develop new attacks. For remedial measures, our approach is to introduce a random factor into DNN classification process. Furthermore, our approach can be combined with existing defense strategy, such as adversarial training, to further improve performance.</p>
|