21. TOWARDS SECURE AND ROBUST 3D PERCEPTION IN THE REAL WORLD: AN ADVERSARIAL APPROACH
Zhiyuan Cheng (19104104), 11 July 2024
The advent of advanced machine learning and computer vision techniques has made 3D perception feasible in the real world, including but not limited to monocular depth estimation (MDE), 3D object detection, semantic scene completion, and optical flow estimation (OFE). Given the 3D nature of our physical world, these techniques have enabled real-world applications such as autonomous driving (AD), unmanned aerial vehicles (UAVs), virtual/augmented reality (VR/AR), and video composition, revolutionizing transportation and entertainment. However, it is well documented that Deep Neural Network (DNN) models can be susceptible to adversarial attacks: minimal perturbations that can precipitate substantial malfunctions. Since 3D perception techniques are crucial for security-sensitive real-world applications such as autonomous driving systems (ADS), adversarial attacks on these systems represent significant threats. The goal of my research is therefore to build secure and robust real-world 3D perception systems. By examining the vulnerabilities of 3D perception techniques under such attacks, this dissertation aims to expose and mitigate these weaknesses. Specifically, I propose stealthy physical-world attacks against MDE, a fundamental component in ADS and AR/VR that facilitates the projection from 2D to 3D. I advance the stealth of the patch attack by minimizing the patch size and disguising the adversarial pattern, striking an optimal balance between stealth and efficacy. Moreover, I develop single-modal attacks against camera-LiDAR fusion models for 3D object detection using adversarial patches, underscoring that the mere fusion of sensors does not assure robustness against adversarial attacks. Additionally, I study black-box attacks against MDE and OFE models, which are more practical and impactful since no model details are required and the models can be compromised through queries alone. In parallel, I devise a self-supervised adversarial training method that hardens MDE models without requiring ground-truth depth labels; the hardened models withstand a range of adversarial attacks, including those in the physical world. Through these designs for both attack and defense, this research contributes to the development of more secure and robust 3D perception systems for real-world applications.
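To make the patch-attack idea concrete, here is a minimal sketch of optimizing an adversarial patch that inflates the depth a monocular model predicts in the patched region. Everything here is an assumption for illustration: `depth_model` stands in for any differentiable MDE network, the fixed top-left placement is a simplification, and the dissertation's stealth constraints (small patch size, disguised pattern) are not reproduced.

```python
# Hedged sketch: adversarial patch against a monocular depth estimator.
# `depth_model` is a placeholder for any differentiable MDE network that
# maps (B, 3, H, W) images to (B, 1, H, W) depth maps.
import torch

def optimize_patch(depth_model, images, patch_size=64, steps=200, lr=0.01):
    """Optimize a square patch that inflates predicted depth where pasted."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = images.clone()
        # Paste at a fixed location for brevity; random placement and
        # expectation-over-transformation would improve physical robustness.
        x[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
        depth = depth_model(x)
        target_region = depth[:, :, :patch_size, :patch_size]
        loss = -target_region.mean()  # maximize predicted depth in the region
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```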
22. Bridging the gap between human and computer vision in machine learning, adversarial and manifold learning for high-dimensional data
Jungeum Kim (12957389), 01 July 2022
In this dissertation, we study three important problems in modern deep learning: adversarial robustness, visualization, and partially monotonic function modeling. In the first part, we study the trade-off between robustness and standard accuracy in deep neural network (DNN) classifiers. We introduce sensible adversarial learning and demonstrate the synergistic effect between pursuits of standard natural accuracy and robustness. Specifically, we define a sensible adversary which is useful for learning a robust model while keeping high natural accuracy. We theoretically establish that the Bayes classifier is the most robust multi-class classifier with the 0-1 loss under sensible adversarial learning. We propose a novel and efficient algorithm that trains a robust model using implicit loss truncation. Our experiments demonstrate that our method is effective in promoting robustness against various attacks and keeping high natural accuracy.
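As a rough illustration of adversarial training with a truncated loss, the sketch below pairs a standard PGD inner attack with a crude per-sample loss cap. The cap is only a stand-in for the dissertation's implicit truncation under a sensible adversary, and all names and hyperparameters are illustrative.

```python
# Hedged sketch: PGD adversarial training with per-sample loss truncation.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD attack used to generate training adversaries."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

def train_step(model, opt, x, y, trunc=4.0):
    x_adv = pgd(model, x, y)
    per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    # Cap implausibly large adversarial losses; a crude proxy for restricting
    # the adversary to "sensible" perturbations.
    loss = per_sample.clamp(max=trunc).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```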
In the second part, we study nonlinear dimensional reduction with the manifold assumption, often called manifold learning. Despite the recent advances in manifold learning, current state-of-the-art techniques focus on preserving only local or global structure information of the data. Moreover, they are transductive; the dimensional reduction results cannot be generalized to unseen data. We propose iGLoMAP, a novel inductive manifold learning method for dimensional reduction and high-dimensional data visualization. iGLoMAP preserves both local and global structure information in the same algorithm by preserving geodesic distance between data points. We establish the consistency property of our geodesic distance estimators. iGLoMAP can provide the lower-dimensional embedding for an unseen, novel point without any additional optimization. We successfully apply iGLoMAP to the simulated and real-data settings with competitive experiments against state-of-the-art methods.
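The geodesic distances that iGLoMAP preserves can be estimated in the standard Isomap fashion: shortest paths through a k-nearest-neighbor graph. The snippet below shows that baseline estimator only; it is not iGLoMAP itself, and the toy data is made up.

```python
# Hedged sketch: Isomap-style geodesic distance estimation on a kNN graph.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(X, k=10):
    """Shortest-path distances over a kNN graph approximate manifold geodesics."""
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance")
    return shortest_path(graph, method="D", directed=False)  # Dijkstra

X = np.random.randn(200, 50)      # toy high-dimensional data
D = geodesic_distances(X, k=10)   # (200, 200) matrix of geodesic estimates
```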
In the third part, we study partially monotonic DNNs. We model such a function by using the fundamental theorem for line integrals, where the gradient is parametrized by DNNs. For the validity of the model formulation, we develop a symmetric penalty for gradient modeling. Unlike existing methods, our method allows partially monotonic modeling for general DNN architectures and monotonic constraints on multiple variables. We empirically show the necessity of the symmetric penalty on a simulated dataset.
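A toy version of this construction writes f(x) = f(0) + ∫₀¹ g(tx)·x dt with the gradient field g parametrized by a network whose outputs are forced nonnegative on the monotone coordinates. Note the caveat in the comments: without the dissertation's symmetric (Jacobian) penalty, which is omitted here, g need not be a true gradient field, so this sketch only conveys the shape of the idea.

```python
# Hedged sketch: partially monotonic model via a line-integral parametrization.
# Architecture, quadrature, and mask handling are all illustrative choices.
import torch
import torch.nn as nn

class MonotonicNet(nn.Module):
    def __init__(self, dim, mono_mask, hidden=64, n_quad=16):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim))
        self.register_buffer("mask", mono_mask.float())  # 1 = monotone coord
        self.f0 = nn.Parameter(torch.zeros(1))
        self.n_quad = n_quad

    def grad_field(self, z):
        raw = self.g(z)
        # Softplus forces nonnegative components on monotone coordinates.
        # If g is a valid gradient field (enforced in the dissertation by the
        # symmetric penalty, omitted here), then grad f = g and f is
        # nondecreasing in the masked coordinates.
        return self.mask * nn.functional.softplus(raw) + (1 - self.mask) * raw

    def forward(self, x):
        ts = torch.linspace(0, 1, self.n_quad, device=x.device)
        integral = 0.0
        for t in ts:  # crude Riemann-sum quadrature of the line integral
            integral = integral + (self.grad_field(t * x) * x).sum(-1) / self.n_quad
        return self.f0 + integral
```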
23. NON-INTRUSIVE WIRELESS SENSING WITH MACHINE LEARNING
Yucheng Xie (16558152), 30 August 2023
This dissertation explores non-intrusive wireless sensing for diet and fitness activity monitoring, and assesses security risks in human activity recognition (HAR). It investigates the use of WiFi and millimeter-wave (mmWave) signals for monitoring eating behaviors, discerning intricate eating activities, and observing fitness movements. The proposed systems harness variations in wireless signal propagation to record human behavior while providing detailed accounts of dietary and exercise habits. Significant contributions include unsupervised learning methods for detecting dietary and fitness activities, soft-decision and deep neural networks for varied activity recognition, tiny-motion mechanisms for recovering subtle mouth muscle movements, space-time-velocity features for multi-person tracking, and generative adversarial networks and domain adaptation structures that reduce training effort and enable cross-domain deployment. A series of comprehensive tests validates the efficacy and precision of the proposed non-intrusive wireless sensing systems. Additionally, the dissertation probes security vulnerabilities in mmWave-based HAR systems and puts forth several sophisticated adversarial attacks: targeted, untargeted, universal, and black-box. It designs adversarial perturbations that deceive HAR models while minimizing detectability. The research offers powerful insights into the challenges of non-intrusive sensing tasks and efficient solutions to the security issues of wireless sensing technologies.
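Of the attack classes listed, the universal attack is easy to sketch: a single perturbation optimized so that it misclassifies many inputs at once. The mmWave feature pipeline, model, and hyperparameters below are placeholders, not the dissertation's setup.

```python
# Hedged sketch: universal adversarial perturbation against a HAR classifier.
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=0.05, epochs=5, alpha=0.01):
    """One perturbation, reused across all inputs, that degrades accuracy."""
    delta = None
    for _ in range(epochs):
        for x, y in loader:  # batches of HAR feature tensors and labels
            if delta is None:
                delta = torch.zeros_like(x[:1], requires_grad=True)
            loss = -F.cross_entropy(model(x + delta), y)  # push away from truth
            loss.backward()
            delta.data = (delta - alpha * delta.grad.sign()).clamp(-eps, eps)
            delta.grad.zero_()
    return delta.detach()
```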
24. Towards Real-World Adversarial Examples in AI-Driven Cybersecurity
Liu, Hao, January 2022
No description available.
25. EXPLORING ENSEMBLE MODELS AND GAN-BASED APPROACHES FOR AUTOMATED DETECTION OF MACHINE-GENERATED TEXT
Surbhi Sharma (18437877), 29 April 2024
Automated detection of machine-generated text has become increasingly crucial in fields such as cybersecurity, journalism, and content moderation due to the proliferation of generated content, including fake news, spam, and bot-generated comments. Traditional methods for detecting such content often rely on rule-based systems or supervised learning approaches, which may struggle to adapt to evolving generation techniques and sophisticated manipulations. In this thesis, we explore the use of ensemble models and Generative Adversarial Networks (GANs) for the automated detection of machine-generated text.

Ensemble models combine the strengths of different approaches, such as rule-based systems and machine learning algorithms, to enhance detection accuracy and robustness. We investigate the integration of linguistic features, syntactic patterns, and semantic cues into machine learning pipelines, leveraging Natural Language Processing (NLP) techniques. By combining multiple modalities of information, ensemble models can effectively capture the subtle characteristics and nuances of machine-generated text, improving detection performance.

In my latest experiments, I examined the performance of a Random Forest classifier trained on TF-IDF representations, combined with RoBERTa embeddings, to calculate probabilities for machine-generated text detection. Test1 showed promising accuracy rates, indicating the effectiveness of combining TF-IDF with RoBERTa probabilities, and Test2 further validated these findings, demonstrating improved detection performance over standalone approaches. These results suggest that pairing a Random Forest over TF-IDF representations with RoBERTa-derived probabilities can enhance detection accuracy.

Furthermore, we delve into GAN-RoBERTa, a class of deep learning models comprising a generator and a discriminator trained adversarially, for generating and detecting machine-generated text. GANs have demonstrated remarkable capabilities in generating realistic text, making them a potential tool for adversaries producing deceptive content. The same adversarial dynamic, however, can be harnessed for detection, with the discriminator trained to distinguish genuine from machine-generated text.

Overall, our findings suggest that ensemble models and GAN-RoBERTa architectures hold significant promise for the automated detection of machine-generated text. Through a combination of diverse approaches and adversarial training techniques, we demonstrate improved detection accuracy and robustness, addressing the challenges posed by the proliferation of generated content across domains. Further research and refinement of these approaches will be essential to stay ahead of evolving generation techniques and to ensure the integrity and trustworthiness of textual content in the digital landscape.
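The hybrid detector described in the third paragraph can be sketched as below. The fusion rule (simple probability averaging) and all parameter choices are assumptions; the RoBERTa probabilities are taken as precomputed by a separately fine-tuned classifier rather than reproduced here.

```python
# Hedged sketch: Random Forest over TF-IDF fused with RoBERTa probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

def fused_scores(train_texts, train_labels, test_texts, roberta_probs):
    """Average RF TF-IDF probabilities with precomputed RoBERTa probabilities."""
    vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    rf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
    rf.fit(vec.fit_transform(train_texts), train_labels)
    rf_probs = rf.predict_proba(vec.transform(test_texts))[:, 1]
    # Equal-weight fusion is an illustrative choice, not the thesis's rule.
    return 0.5 * rf_probs + 0.5 * np.asarray(roberta_probs)
```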
26. A Comprehensive Approach to Evaluating Usability and Hyperparameter Selection for Synthetic Data Generation
Adriana Louise Watson (19180771), 20 July 2024
Data is the key component of every machine learning algorithm; without sufficient quantities of quality data, the vast majority of machine learning algorithms fail to perform. Acquiring the necessary data, however, is a universal challenge. Recently, synthetic data production methods have become increasingly relevant as a way to address a variety of data issues: synthetic data allows researchers to produce supplemental data from an existing dataset, and it anonymizes data without losing functionality. To advance the field of synthetic data production, however, measuring the quality of produced synthetic data is an essential step. Although methods exist for evaluating synthetic data quality, they tend to address only isolated aspects of that quality, and evaluation practices vary immensely from one study to another, further complicating quality comparison. Finally, although tools exist to tune hyperparameters automatically, they focus on traditional machine learning applications, so identifying ideal hyperparameters for individual synthetic data generation use cases remains an ongoing challenge.
27. Adversarial Anomaly Detection
Radhika Bhargava (7036556), 02 August 2019
Considerable attention has been given to the vulnerability of machine learning to adversarial samples. This is particularly critical in anomaly detection; uses such as detecting fraud, intrusion, and malware must assume a malicious adversary. We specifically address poisoning attacks, where the adversary injects carefully crafted benign samples into the data, causing concept drift that leads the anomaly detector to misclassify the actual attack as benign. Our goal is to estimate the vulnerability of an anomaly detection method to an unknown attack, in particular the expected minimum number of poison samples the adversary would need to succeed. Such an estimate is a necessary step in risk analysis: do we expect the anomaly detection to be sufficiently robust to be useful in the face of attacks? We analyze DBSCAN, LOF, and one-class SVM as anomaly detection methods and derive estimates of their robustness to poisoning attacks. The analytical estimates are validated against the number of poison samples needed for the actual anomalies in standard anomaly detection test datasets. We then develop a defense mechanism, based on the concept drift caused by the poisonous points, to identify that an attack is underway. We show that while it is possible to detect the attacks, doing so degrades the performance of the anomaly detection method. Finally, we investigate whether adversarial samples generated for one anomaly detection method transfer to another.
28. Robustness of Neural Networks for Discrete Input: An Adversarial Perspective
Ebrahimi, Javid, 30 April 2019
In the past few years, evaluation on adversarial examples has become a standard procedure for measuring the robustness of deep learning models. The literature on adversarial examples for neural nets has largely focused on image data, which are represented as points in continuous space. However, a vast proportion of machine learning models operate on discrete input and thus demand similar rigor in understanding their vulnerabilities and robustness. We study the robustness of neural network architectures for textual and graph inputs through the lens of adversarial input perturbations. We cover methods for both attack and defense, focusing on 1) addressing challenges in optimization for creating adversarial perturbations for discrete data; 2) evaluating and contrasting white-box and black-box adversarial examples; and 3) proposing efficient methods to make the models robust against adversarial attacks.
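The core optimization trick for discrete perturbations, in the spirit of gradient-based flip attacks such as HotFlip, estimates the loss change of every possible token swap at once from a single gradient. The sketch below assumes a HuggingFace-style model interface that accepts `inputs_embeds` and `labels`; the model and tokenizer are placeholders.

```python
# Hedged sketch: first-order estimate of the best single token flip.
import torch

def best_flip(model, embed_matrix, input_ids, labels):
    """Return (position, new_token_id) with the largest estimated loss increase."""
    emb = embed_matrix[input_ids].clone().requires_grad_(True)  # (T, d)
    loss = model(inputs_embeds=emb.unsqueeze(0), labels=labels).loss
    loss.backward()
    grad = emb.grad  # (T, d), gradient of the loss w.r.t. each token embedding
    # Estimated loss change of replacing token t with vocab word v:
    # (e_v - e_t) . grad_t, computed for all (t, v) pairs at once -> (T, V).
    scores = grad @ embed_matrix.T - (grad * emb).sum(-1, keepdim=True)
    scores.scatter_(1, input_ids.unsqueeze(1), float("-inf"))  # skip no-ops
    flat = scores.argmax()
    return divmod(flat.item(), scores.size(1))  # (position, token id)
```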
29. An Introduction to Generative Adversarial Networks
Paget, Bryan, 11 September 2019
This thesis is a survey of the mathematical theory of Generative Adversarial Networks (GANs). The relevant theories discussed are game theory, information theory, and optimal transport theory.
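The object tying these theories together is the standard GAN minimax game of Goodfellow et al., which such a survey would presumably center on:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

At the discriminator's optimum this value function reduces to the Jensen-Shannon divergence between the data distribution and the generator's distribution, which is where information theory enters; optimal transport appears when that divergence is replaced by the Wasserstein distance, as in Wasserstein GANs.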
30. Adversarial planning by strategy switching in a real-time strategy game
King, Brian D. (Brian David), 12 June 2012
We consider the problem of strategic adversarial planning in a Real-Time Strategy (RTS) game. Strategic adversarial planning is the generation of a network of high-level tasks to satisfy goals while anticipating an adversary's actions. In this thesis we describe an abstract state and action space used for planning in an RTS game, an algorithm for generating strategic plans, and a modular architecture for controllers that generate and execute plans. We describe in detail planners that evaluate plans by simulation and select a plan by game-theoretic criteria, as well as a low-level module of the hierarchy, the combat module. We examine a theoretical performance guarantee for policy switching in Markov games, and show that policy-switching agents can underperform fixed-strategy agents. Finally, we present results for strategy-switching planners playing against single-strategy planners and the game engine's scripted player. The results show that our strategy-switching planners outperform single-strategy planners in simulation and outperform the game engine's scripted AI.
Graduation date: 2013
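One simple game-theoretic selection criterion of the kind described, a maximin choice over simulated outcomes, can be sketched as follows. The simulator and strategy sets are placeholders; the thesis's actual planners and criteria may differ.

```python
# Hedged sketch: pick the strategy with the best worst-case simulated payoff.
import numpy as np

def select_strategy(simulate, ours, theirs):
    """Fill a payoff matrix by simulating every pairing, then take maximin."""
    payoff = np.array([[simulate(a, b) for b in theirs] for a in ours])
    return ours[int(payoff.min(axis=1).argmax())]  # best worst-case row
```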