31 |
Rattvisematta / Justice Rug. Cheyne, Bethany Rose, January 2018 (has links)
No description available.
|
32 |
Perceptions of Growth in Depression: An Interpretative Phenomenological Analysis. January 2014 (has links)
abstract: It is not a new idea that there may be a "silver lining" in depression for some people; that grappling with this condition has the potential to make them stronger or more capable in some way. Over the past three decades, research on growth associated with adversity has proliferated, spanning life-threatening illness, natural disasters, the death of a loved one, physical abuse, and numerous other forms of trauma. However, very little empirical attention has been paid to the topic of growth resulting from the process of working through psychological distress. Rather, the extant literature tends to consider conditions like depression and anxiety as unsuccessful outcomes, or failed attempts at coping. Furthermore, evidence suggests there is considerable variability in the types of growth perceived by individuals experiencing different forms of adversity. Using interpretative phenomenological analysis (IPA), a qualitative research method, the current study elucidates the experience of growth associated with depression among six individuals from diverse backgrounds. The superordinate themes that emerged from the analysis include: depression as a catalyst for personal development (creative, spiritual, and intellectual); social support and connection; greater presence or engagement in life; a more adaptive and realized sense of self; feelings of gratitude and appreciation; and a recognition of the timing of depression. Each of these themes is examined in relation to participants' processes of meaning making in their experience of growth. The findings of the current study are broadly compatible with, yet qualitatively distinct from, previously identified models of adversarial growth. Implications for future research and clinical practice are discussed. / Dissertation/Thesis / Ph.D. Counseling Psychology 2014
|
33 |
Robustness of a neural network used for image classification: The effect of applying distortions on adversarial examples. Östberg, Rasmus, January 2018 (has links)
Powerful classifiers such as neural networks have long been used to recognise images; these images might depict objects like animals, people, or plain text. Distortions, for example those introduced by the camera, affect a neural network's ability to recognise images. Camera-related distortions, and how they affect accuracy, have previously been explored. Recently, it has been shown that images can be intentionally made harder to recognise, an effect that lasts even after they have been photographed. Such images are known as adversarial examples. The purpose of this thesis is to evaluate how well a neural network can recognise adversarial examples which are also distorted. To evaluate the network, the adversarial examples are distorted in different ways and then fed to the neural network. Different kinds of distortions (rotation, blur, contrast and skew) were used to distort the examples, and for each type and strength of distortion the network's ability to classify was measured. It is shown that all distortions influenced the neural network's ability to recognise images. It is concluded that the type and strength of a distortion are important factors when classifying distorted adversarial examples, but also that some distortions, rotation and skew, retain their characteristic influence on the accuracy even when combined with other distortions.
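A minimal sketch of the kind of evaluation loop described above, assuming Pillow for the distortions; the classifier, dataset, and distortion strengths are hypothetical placeholders rather than the thesis setup:

```python
# Apply one distortion at a chosen strength to each adversarial example and
# record how often a (placeholder) classifier still assigns the correct label.
from PIL import Image, ImageFilter, ImageEnhance

def distort(img: Image.Image, kind: str, strength: float) -> Image.Image:
    if kind == "rotation":
        return img.rotate(strength)                            # degrees
    if kind == "blur":
        return img.filter(ImageFilter.GaussianBlur(strength))  # radius in pixels
    if kind == "contrast":
        return ImageEnhance.Contrast(img).enhance(strength)    # 1.0 = unchanged
    if kind == "skew":
        # horizontal shear by `strength` (affine coefficients a, b, c, d, e, f)
        return img.transform(img.size, Image.AFFINE, (1, strength, 0, 0, 1, 0))
    raise ValueError(kind)

def accuracy_under_distortion(examples, labels, kind, strength, classify):
    correct = 0
    for img, label in zip(examples, labels):
        if classify(distort(img, kind, strength)) == label:
            correct += 1
    return correct / len(examples)
```

Sweeping `strength` over a range for each `kind` would reproduce the type-versus-strength comparison the abstract describes.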
|
34 |
Adversarial Decision Making in Counterterrorism Applications. Mazicioglu, Dogucan, 01 January 2017 (has links)
Our main objective is to improve decision making in counterterrorism applications by implementing expected utility for prescriptive decision making and prospect theory for descriptive modeling. The areas we aim to improve are the behavioral modeling of adversaries with multiple objectives in counterterrorism applications and the incorporation of decision makers' risk attitudes into risk matrices when assessing risk within an adversarial counterterrorism framework. Traditionally, counterterrorism applications have been approached on a single-attribute basis. We utilize a multi-attribute prospect theory approach to more realistically model the attacker's behavior, while using expected utility theory to prescribe the appropriate actions to the defender. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. Next, we consider the use of risk matrices (a method widely used for assessing risk given a consequence and probability pairing for a potential threat) in an adversarial framework, modeling the attacker and defender risk matrices using utility theory and linking the matrices with the Luce model. A shortcoming of modeling the attacker and defender risk matrices with utility theory is that utility theory fails to account for decision makers' deviations from rational behavior, as observed in the experimental literature. To overcome this shortcoming, we consider an adversarial risk matrix framework that models the attacker's risk matrix using prospect theory, while using expected utility theory to prescribe actions to the defender.
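As a hedged illustration of the descriptive-versus-prescriptive distinction above, the sketch below evaluates a hypothetical two-attribute attacker prospect with a standard Tversky-Kahneman value function and probability weighting; the attribute weights, outcomes, and parameters are illustrative assumptions, not the dissertation's elicited values:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Gains show diminishing sensitivity; losses loom larger (lam > 1).
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    # Inverse-S probability weighting: small probabilities are overweighted.
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# Hypothetical attacker prospect: two attributes expressed as gains/losses
# relative to a reference point, combined with additive (assumed) weights.
attr_weights = {"damage": 0.6, "publicity": 0.4}
outcomes = {"damage": 10.0, "publicity": -4.0}   # attack partially succeeds
p_success = 0.3

prospect_value = weight(p_success) * sum(
    w * value(outcomes[a]) for a, w in attr_weights.items()
)
expected_value = p_success * sum(w * outcomes[a] for a, w in attr_weights.items())
print(prospect_value, expected_value)
```

The gap between the two printed numbers is the point of the descriptive model: the attacker's perceived attractiveness of an option can differ markedly from its expected value, which changes the defense the defender should prescribe.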
|
35 |
Adversarial Deep Learning Against Intrusion Detection Classifiers. Rigaki, Maria, January 2017 (has links)
Traditional approaches in network intrusion detection follow a signature-based approach; however, anomaly detection approaches based on machine learning techniques have been studied heavily for the past twenty years. The continuous change in the way attacks appear, the volume of attacks, as well as the improvements in the big data analytics space, make machine learning approaches more alluring than ever. The intention of this thesis is to show that using machine learning in the intrusion detection domain should be accompanied by an evaluation of its robustness against adversaries. Several adversarial techniques have emerged lately from deep learning research, largely in the area of image classification. These techniques are based on the idea of introducing small changes in the original input data in order to make a machine learning model misclassify it. This thesis follows a big data analytics methodology and explores adversarial machine learning techniques that have emerged from the deep learning domain, against machine learning classifiers used for network intrusion detection. The study looks at several well-known classifiers and studies their performance under attack over several metrics, such as accuracy, F1-score and receiver operating characteristic. The approach used assumes no knowledge of the original classifier and examines both general and targeted misclassification. The results show that, using relatively simple methods for generating adversarial samples, it is possible to lower the detection accuracy of intrusion detection classifiers by 5% to 28%. The performance degradation is achieved using a methodology that is simpler than previous approaches, and it requires only a 6.25% change between the original and the adversarial sample, making it a candidate for a practical adversarial approach.
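A self-contained sketch of the transfer-attack idea assumed here: perturbations are crafted on a surrogate model the attacker controls and then evaluated against a target classifier treated as a black box. The synthetic data, model choices, and epsilon are placeholders, not the thesis's experimental setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=32, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

target = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)   # "unknown" IDS model
surrogate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)     # attacker's own model

# FGSM-style step on the surrogate: move each sample in the direction that
# increases its loss (for logistic regression the input gradient is analytic).
eps = 0.5
p = surrogate.predict_proba(X_te)[:, 1]
grad = (p - y_te)[:, None] * surrogate.coef_       # d(log-loss)/d(x)
X_adv = X_te + eps * np.sign(grad)

print("clean accuracy:      ", target.score(X_te, y_te))
print("adversarial accuracy:", target.score(X_adv, y_te))
```

The drop between the two printed accuracies is the kind of degradation the thesis measures, here with a deliberately simplified surrogate instead of a deep network.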
|
36 |
Defending Against Adversarial Attacks Using Denoising Autoencoders. Rehana Mahfuz (8617635), 24 April 2020 (has links)
Gradient-based adversarial attacks on neural networks threaten extremely critical applications such as medical diagnosis and biometric authentication. These attacks use the gradient of the neural network to craft imperceptible perturbations that are added to the test data in an attempt to decrease the accuracy of the network. We propose a defense to combat such attacks, which can be modified to reduce the training time of the network by as much as 71%, and can be further modified to reduce the training time of the defense by as much as 19%. Further, we address the threat of uncertain behavior on the part of the attacker, a threat previously overlooked in the literature, which mostly considers white-box scenarios. To combat uncertainty on the attacker's part, we train our defense with an ensemble of attacks, each generated with a different attack algorithm and using gradients of distinct architecture types. Finally, we discuss how we can prevent the attacker from breaking the defense by estimating the gradient of the defense transformation.
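A condensed sketch of the defense described above, with assumed layer sizes and a toy training loop; in practice the perturbed inputs would be the ensemble of adversarial examples generated with different attack algorithms and architectures, as the abstract describes:

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_defense(dae, clean, perturbed, epochs=10, lr=1e-3):
    # Train the autoencoder to map perturbed inputs back to their clean versions.
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dae(perturbed), clean)
        loss.backward()
        opt.step()
    return dae

def defended_predict(dae, classifier, x):
    # Denoise first, then classify the reconstruction.
    return classifier(dae(x)).argmax(dim=1)

# Toy usage with random tensors standing in for real (clean, attacked) pairs.
clean = torch.rand(256, 784)
perturbed = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
dae = train_defense(DenoisingAE(), clean, perturbed)
```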
|
37 |
Image Transfer Between Magnetic Resonance Images and Speech Diagrams. Wang, Kang, 03 December 2020 (has links)
Real-time Magnetic Resonance Imaging (MRI) is a method used for human anatomical study. MRI provides exceptionally detailed information about soft-tissue structures, such as the tongue, that other current imaging techniques cannot capture. However, the process requires special equipment and is expensive, and is therefore not suitable for all patients.
Speech diagrams show the side-view positions of organs such as the tongue, throat, and lips of a speaking or singing person. Producing a speech diagram resembles semantic segmentation of an MRI, focusing on selected edge structures. Speech diagrams are easy to understand, as they clearly depict the tongue and the structure of the inside of the mouth. However, producing them often requires manual annotation of the MRI by an expert in the field.
Using machine learning methods, we transfer images between MRIs and speech diagrams in both directions. We first matched videos of speech diagrams and tongue MRIs. We then used various image processing and data augmentation methods to make the paired images easier to train on. We built our network model inspired by different cross-domain image transfer methods and applied reference-based super-resolution methods to generate high-resolution images. Thus, the transfer can be performed by our network instead of manually. In addition, the generated speech diagram can serve as an intermediate representation to be transferred to other medical images such as computerized tomography (CT), since it is simpler in structure than an MRI.
We conducted experiments using both the data from our database and other MRI video sources. We evaluate with multiple methods, and comparisons with several related methods show the superiority of our approach.
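A highly simplified sketch of paired MRI-to-speech-diagram translation; the architecture, shapes, and loss below are assumptions for illustration, and the actual network additionally incorporates reference-based super-resolution:

```python
import torch
import torch.nn as nn

class Translator(nn.Module):
    # Small convolutional encoder-decoder mapping one image domain to the other.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, opt, mri_batch, diagram_batch):
    # Paired supervision: each MRI frame is matched to its speech diagram.
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(mri_batch), diagram_batch)
    loss.backward()
    opt.step()
    return loss.item()

model = Translator()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
mri = torch.rand(4, 1, 64, 64)       # placeholder paired batch
diagram = torch.rand(4, 1, 64, 64)
print(train_step(model, opt, mri, diagram))
```

A second model trained with the roles of the two domains swapped would give the reverse direction mentioned in the abstract.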
|
38 |
Efficient and Secure Deep Learning Inference System: A Software and Hardware Co-design Perspective. January 2020 (has links)
abstract: The advances of Deep Learning (DL) achieved in recent years have demonstrated its great potential to surpass, or come close to, human-level performance across multiple domains. Consequently, there is a rising demand to deploy state-of-the-art DL algorithms, e.g., Deep Neural Networks (DNNs), in real-world applications to relieve people of repetitive work. On the one hand, the impressive performance achieved by DNNs is normally accompanied by intensive memory and power usage, due to enormous model size and high computation workload, which significantly hampers their deployment on resource-limited cyber-physical systems or edge devices. Thus, the urgent demand for enhancing the inference efficiency of DNNs has attracted great research interest across various communities. On the other hand, scientists and engineers still have insufficient knowledge of the principles of DNNs, which means they are mostly treated as black boxes. Under such circumstances, the DNN is like "the sword of Damocles", and its security and fault-tolerance capability are essential concerns that cannot be circumvented.
Motivated by the aforementioned concerns, this dissertation comprehensively investigates the emerging efficiency and security issues of DNNs from both software and hardware design perspectives. From the efficiency perspective, model compression via quantization is elaborated as the foundation technique for efficient inference of the target DNN. To maximize the inference performance boost, the deployment of the quantized DNN on a revolutionary Computing-in-Memory based neural accelerator is presented in a cross-layer (device/circuit/system) fashion. From the security perspective, the well-known adversarial attack is investigated, spanning from its original input-attack form (i.e., adversarial example generation) to its parameter-attack variant. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
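As a hedged illustration of the quantization foundation mentioned above, the sketch below applies uniform, symmetric post-training quantization to a single weight tensor; the bit-width and per-tensor scaling are illustrative choices, not the dissertation's scheme:

```python
import numpy as np

def quantize_weights(w: np.ndarray, bits: int = 4):
    # Map floating-point weights to signed integer levels sharing one scale
    # factor, so that the tensor can be stored and computed at low precision.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_weights(w, bits=4)
w_hat = q.astype(np.float32) * scale   # dequantized approximation
print("mean abs quantization error:", np.abs(w - w_hat).mean())
```

The memory saving (4 bits versus 32 bits per weight here) is what makes deployment on resource-limited edge devices or Computing-in-Memory accelerators feasible, at the cost of the reconstruction error printed above.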
|
39 |
To Encourage or to Restrict: the Label Dependency in Multi-Label Learning. Yang, Zhuo, 06 1900 (has links)
Multi-label learning addresses the problem that one instance can be associated with multiple labels simultaneously. Understanding and exploiting the Label Dependency (LD) is well accepted as the key to building high-performance multi-label classifiers, i.e., classifiers that, among other abilities, generalize well on clean data and are robust under evasion attack.
From the perspective of generalization on clean data, previous works have demonstrated the advantage of exploiting LD in multi-label classification. To further verify the positive role of LD in multi-label classification and address previous limitations, we propose an approach named Prototypical Networks for Multi-Label Learning (PNML). Specifically, PNML addresses multi-label classification from the angle of estimating the positive and negative class distribution of each label in a shared nonlinear embedding space. PNML achieves state-of-the-art (SOTA) classification performance on clean data.
From the perspective of robustness under evasion attack, we are the first to define the attackability of a multi-label classifier as the expected maximum number of decision outputs that can be flipped by injecting budgeted perturbations into the feature distribution of the data. Denoting the attackability of a multi-label classifier as C∗, its empirical evaluation is an NP-hard problem. We thus develop a method named Greedy Attack Space Exploration (GASE) to estimate C∗ efficiently. More interestingly, we derive an information-theoretic upper bound on the adversarial risk faced by multi-label classifiers. The bound unveils the key factors determining the attackability of multi-label classifiers and points out the negative role of LD in multi-label classifiers' adversarial robustness: LD facilitates the transfer of attacks across labels, which makes multi-label classifiers more attackable. Going one step further, inspired by the derived bound, we propose a Soft Attackability Estimator (SAE) and develop Adversarial Robust Multi-label learning with regularized SAE (ARM-SAE) to improve the adversarial robustness of multi-label classifiers.
This work provides a more comprehensive understanding of LD in multi-label learning. Exploiting LD should be encouraged because of its positive role in models' generalization on clean data, but restricted because of its negative role in models' adversarial robustness.
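A back-of-the-envelope sketch of the attackability notion for a linear multi-label scorer, with synthetic weights and label interactions ignored for simplicity; this is an illustration of the definition, not the GASE or SAE method itself:

```python
import numpy as np

def flips_within_budget(x, W, b, budget):
    # W: (n_labels, n_features); decision k is sign(W[k] @ x + b[k]).
    margins = W @ x + b
    # Minimal L2 perturbation needed to flip label k in isolation.
    costs = np.abs(margins) / np.linalg.norm(W, axis=1)
    flipped, spent = 0, 0.0
    for c in np.sort(costs):          # greedily take the cheapest flips first
        if spent + c > budget:
            break
        spent += c
        flipped += 1
    return flipped

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(10, 20)), rng.normal(size=10), rng.normal(size=20)
print(flips_within_budget(x, W, b, budget=1.5))
```

Because one shared perturbation can move several correlated labels at once, the true attackability can exceed this count, which is exactly the transfer-across-labels effect the derived bound attributes to LD.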
|
40 |
Deconfounding and Generating Embeddings of Drug-Induced Gene Expression Profiles Using Deep Learning for Drug Repositioning Applications. Alsulami, Reem A., 24 April 2022 (has links)
Drug-induced gene expression profiles are rich information sources that can help to measure the effect of a drug on the transcriptional state of cells. However, the available experimental data only covers a limited set of conditions such as treatment time, dosages, and cell lines. This poses a challenge for neural network models to learn embeddings that can be generalized to new experimental conditions. In this project, we focus on the cell line as the confounder variable and train an Adversarial Neural Network to extract transcriptional effects that are conserved across multiple cell lines, and can thus be more confidently generalized to the biological setting of interest. Additionally, we investigate several methods to test whether our approach can simultaneously learn biologically valid embeddings and deconfound the effect of cell lines on the data distribution.
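A compact sketch of the adversarial deconfounding idea, assuming a gradient-reversal formulation and placeholder dimensions; the project's actual architecture and training objective may differ:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the backward pass,
    # so the encoder is trained to *remove* cell-line information.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DeconfoundingNet(nn.Module):
    def __init__(self, n_genes=978, emb=64, n_cell_lines=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                     nn.Linear(256, emb))
        self.cell_line_head = nn.Linear(emb, n_cell_lines)

    def forward(self, x, lam=1.0):
        z = self.encoder(x)                            # drug-effect embedding
        cl_logits = self.cell_line_head(GradReverse.apply(z, lam))
        return z, cl_logits

model = DeconfoundingNet()
profiles = torch.randn(32, 978)                        # placeholder expression profiles
cell_lines = torch.randint(0, 10, (32,))
z, cl_logits = model(profiles)
adv_loss = nn.functional.cross_entropy(cl_logits, cell_lines)
adv_loss.backward()                                    # encoder receives reversed gradients
```

In a full training setup this adversarial term would be combined with a task loss (e.g., reconstructing or predicting the drug effect) so that the embedding stays informative while becoming cell-line-agnostic.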
|