  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

NON-INTRUSIVE WIRELESS SENSING WITH MACHINE LEARNING

YUCHENG XIE (16558152) 30 August 2023 (has links)
<p>This dissertation explores the world of non-intrusive wireless sensing for diet and fitness activity monitoring, in addition to assessing security risks in human activity recognition (HAR). It delves into the use of WiFi and millimeter wave (mmWave) signals for monitoring eating behaviors, discerning intricate eating activities, and observing fitness movements. The proposed systems harness variations in wireless signal propagation to record human behavior while providing exhaustive details on dietary and exercise habits. Significant contributions encompass unsupervised learning methodologies for detecting dietary and fitness activities, implementing soft-decision and deep neural networks for assorted activity recognition, constructing tiny motion mechanisms for subtle mouth muscle movement recovery, employing space-time-velocity features for multi-person tracking, as well as utilizing generative adversarial networks and domain adaptation structures to enable less cumbersome training efforts and cross-domain deployments. A series of comprehensive tests validate the efficacy and precision of the proposed non-intrusive wireless sensing systems. Additionally, the dissertation probes the security vulnerabilities in mmWave-based HAR systems and puts forth various sophisticated adversarial attacks - targeted, untargeted, universal, and black-box. It designs adversarial perturbations aiming to deceive the HAR models whilst striving to minimize detectability. The research offers powerful insights into issues and efficient solutions relative to non-intrusive sensing tasks and security challenges linked with wireless sensing technologies.</p>
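The abstract's core idea, that human activity perturbs wireless signal propagation in a measurable way, can be illustrated with a toy windowed-variance detector. This is a hypothetical sketch, not the dissertation's method: the actual systems operate on WiFi CSI and mmWave returns with learned models, while here a plain 1-D amplitude series and an arbitrary threshold stand in for them.

```python
# Toy illustration of non-intrusive sensing via signal variation:
# flag windows of a received-signal amplitude series whose variance
# exceeds a threshold, as a crude stand-in for activity detection.
# All names, window sizes, and thresholds are invented for illustration.

def active_windows(amplitudes, win=4, threshold=0.5):
    """Return (start_index, variance) for each window whose variance
    exceeds the threshold, i.e. windows likely to contain motion."""
    flagged = []
    for start in range(0, len(amplitudes) - win + 1, win):
        w = amplitudes[start:start + win]
        mean = sum(w) / win
        var = sum((x - mean) ** 2 for x in w) / win
        if var > threshold:
            flagged.append((start, var))
    return flagged

# Quiet channel, a burst of motion-induced fluctuation, then quiet again.
signal = [1.0, 1.0, 1.1, 1.0,  1.0, 3.0, 0.2, 2.8,  1.0, 1.0, 1.0, 1.1]
print(active_windows(signal))
```

Only the middle window, where the signal fluctuates, is flagged; the quiet windows fall below the threshold.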
22

Towards Real-World Adversarial Examples in AI-Driven Cybersecurity

Liu, Hao January 2022 (has links)
No description available.
23

<b>EXPLORING ENSEMBLE MODELS AND GAN-BASED APPROACHES FOR AUTOMATED DETECTION OF MACHINE-GENERATED TEXT</b>

Surbhi Sharma (18437877) 29 April 2024 (has links)
<p dir="ltr">Automated detection of machine-generated text has become increasingly crucial in fields such as cybersecurity, journalism, and content moderation due to the proliferation of generated content, including fake news, spam, and bot-generated comments. Traditional detection methods often rely on rule-based systems or supervised learning approaches, which may struggle to adapt to evolving generation techniques and sophisticated manipulations. In this thesis, we explore the use of ensemble models and Generative Adversarial Networks (GANs) for the automated detection of machine-generated text.</p><p dir="ltr">Ensemble models combine the strengths of different approaches, such as rule-based systems and machine learning algorithms, to enhance detection accuracy and robustness. We investigate the integration of linguistic features, syntactic patterns, and semantic cues into machine learning pipelines, leveraging Natural Language Processing (NLP) techniques. By combining multiple modalities of information, ensemble models can capture the subtle characteristics and nuances inherent in machine-generated text, improving detection performance.</p><p dir="ltr">In our latest experiments, we examined the performance of a Random Forest classifier trained on TF-IDF representations, combined with RoBERTa embeddings, to calculate probabilities for machine-generated text detection. Test1 results showed promising accuracy rates, indicating the effectiveness of combining TF-IDF with RoBERTa probabilities. Test2 further validated these findings, demonstrating improved detection performance compared to standalone approaches.</p><p dir="ltr">These results suggest that pairing a Random Forest over TF-IDF representations with RoBERTa-derived probabilities can enhance the detection accuracy of machine-generated text.</p><p dir="ltr">Furthermore, we delve into the application of GAN-RoBERTa, a class of deep learning models comprising a generator and a discriminator trained adversarially, for generating and detecting machine-generated text. GANs have demonstrated remarkable capabilities in generating realistic text, making them a potential tool for adversaries to produce deceptive content. However, this same adversarial nature can be harnessed for detection, with the discriminator trained to distinguish between genuine and machine-generated text.</p><p dir="ltr">Overall, our findings suggest that ensemble models and GAN-RoBERTa architectures hold significant promise for the automated detection of machine-generated text. Through a combination of diverse approaches and adversarial training techniques, we demonstrate improved detection accuracy and robustness, addressing the challenges posed by the proliferation of generated content across various domains. Further research and refinement of these approaches will be essential to keep pace with evolving generation techniques and to ensure the integrity and trustworthiness of textual content in the digital landscape.</p>
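The soft-voting idea behind the ensemble described above can be sketched in a few lines. This is a hedged illustration only: in the thesis the two component scores come from a TF-IDF Random Forest and RoBERTa, whereas here two deliberately crude scoring functions stand in for them, and all names and weights are hypothetical.

```python
# Hypothetical sketch of soft voting for machine-generated-text detection:
# average the "machine-generated" probability from two independent
# detectors. The two scorers below are toy proxies, not real models.

def tfidf_rf_score(text):
    # Stand-in for a Random Forest over TF-IDF features:
    # crude proxy that treats heavy word repetition as suspicious.
    words = text.lower().split()
    return 1.0 - len(set(words)) / len(words)

def roberta_score(text):
    # Stand-in for a RoBERTa classifier probability:
    # crude proxy that treats very uniform word lengths as suspicious.
    lengths = [len(w) for w in text.split()]
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return 1.0 / (1.0 + var)

def ensemble_prob(text, w1=0.5, w2=0.5):
    """Weighted average of the two detectors' probabilities."""
    return w1 * tfidf_rf_score(text) + w2 * roberta_score(text)

print(ensemble_prob("the the the cat cat sat sat sat"))   # repetitive text
print(ensemble_prob("a remarkably intricate question emerged yesterday"))
```

The repetitive input scores markedly higher than the varied one, which is the behavior the weighted combination is meant to amplify.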
24

Adversarial Anomaly Detection

Radhika Bhargava (7036556) 02 August 2019 (has links)
<p>Considerable attention has been given to the vulnerability of machine learning to adversarial samples. This is particularly critical in anomaly detection; applications such as detecting fraud, intrusion, and malware must assume a malicious adversary. We specifically address poisoning attacks, where the adversary injects carefully crafted benign samples into the data, leading to concept drift that causes the anomaly detector to misclassify the actual attack as benign. Our goal is to estimate the vulnerability of an anomaly detection method to an unknown attack, in particular the expected minimum number of poison samples the adversary would need to succeed. Such an estimate is a necessary step in risk analysis: do we expect the anomaly detection to be sufficiently robust to be useful in the face of attacks? We analyze DBSCAN, LOF, and one-class SVM as anomaly detection methods, and derive estimates of their robustness to poisoning attacks. The analytical estimates are validated against the number of poison samples needed for the actual anomalies in standard anomaly detection test datasets. We then develop a defense mechanism, based on the concept drift caused by the poisonous points, to identify that an attack is underway. We show that while it is possible to detect the attacks, doing so leads to a degradation in the performance of the anomaly detection method. Finally, we investigate whether adversarial samples generated for one anomaly detection method transfer to another.</p>
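The poisoning scenario in this abstract can be made concrete with a deliberately small model. This is a hedged toy, not the thesis's DBSCAN/LOF/one-class-SVM analysis: a 1-D detector flags points more than k standard deviations from the mean, and the adversary injects barely-benign points that drag the boundary toward a target attack point, counting how many injections are needed.

```python
# Toy poisoning attack on a 1-D mean/std anomaly detector. All data,
# thresholds, and the detector itself are illustrative assumptions.

def is_anomalous(data, x, k=3.0):
    """Flag x if it lies more than k standard deviations from the mean."""
    mean = sum(data) / len(data)
    std = (sum((v - mean) ** 2 for v in data) / len(data)) ** 0.5
    return abs(x - mean) > k * std

def min_poison_count(data, target, k=3.0, max_iters=1000):
    """Greedily inject points just inside the decision boundary (on the
    target's side) until the target is classified benign; return how
    many poison points that took, or None if the budget is exhausted."""
    data = list(data)
    for n in range(max_iters):
        if not is_anomalous(data, target, k):
            return n
        mean = sum(data) / len(data)
        std = (sum((v - mean) ** 2 for v in data) / len(data)) ** 0.5
        step = 1 if target > mean else -1
        data.append(mean + step * 0.99 * k * std)  # barely-benign poison
    return None

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1, 9.9]
print(min_poison_count(baseline, target=14.0))
```

Each injected point widens the detector's standard deviation, so the boundary creeps outward until the attack point looks benign; the returned count is exactly the quantity the abstract says the analysis estimates.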
25

Robustness of Neural Networks for Discrete Input: An Adversarial Perspective

Ebrahimi, Javid 30 April 2019 (has links)
In the past few years, evaluating on adversarial examples has become a standard procedure to measure robustness of deep learning models. Literature on adversarial examples for neural nets has largely focused on image data, which are represented as points in continuous space. However, a vast proportion of machine learning models operate on discrete input, and thus demand a similar rigor in understanding their vulnerabilities and robustness. We study robustness of neural network architectures for textual and graph inputs, through the lens of adversarial input perturbations. We will cover methods for both attacks and defense; we will focus on 1) addressing challenges in optimization for creating adversarial perturbations for discrete data; 2) evaluating and contrasting white-box and black-box adversarial examples; and 3) proposing efficient methods to make the models robust against adversarial attacks.
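The optimization challenge this abstract names, searching a discrete input space for an adversarial perturbation, can be sketched with a greedy character-flip loop. This is an illustrative toy in the spirit of character-flip attacks on text classifiers, not the thesis's method: the "classifier" is a keyword scorer rather than a neural network, and gradient-based candidate ranking is replaced by exhaustive scoring.

```python
# Greedy discrete adversarial perturbation against a toy text classifier.
# The lexicon, budget, and scorer are invented for illustration.

POSITIVE_WORDS = {"good", "great", "fine"}

def score_positive(text):
    """Toy classifier: fraction of words found in the positive lexicon."""
    words = text.split()
    return sum(w in POSITIVE_WORDS for w in words) / len(words)

def greedy_char_attack(text, budget=2):
    """Flip up to `budget` characters, each time taking the single
    substitution that lowers the positive score the most."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    for _ in range(budget):
        best = (score_positive(text), text)
        for i, ch in enumerate(text):
            if not ch.isalpha():
                continue
            for sub in alphabet:
                if sub == ch:
                    continue
                candidate = text[:i] + sub + text[i + 1:]
                s = score_positive(candidate)
                if s < best[0]:
                    best = (s, candidate)
        text = best[1]
    return text

adv = greedy_char_attack("good movie great fun")
print(adv, score_positive(adv))
```

Two single-character flips are enough to knock both lexicon words out of the classifier's vocabulary, driving the score to zero while leaving the text visually close to the original; white-box attacks replace the inner exhaustive loop with a gradient-guided ranking of candidate flips.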
26

An Introduction to Generative Adversarial Networks

Paget, Bryan 11 September 2019 (has links)
This thesis is a survey of the mathematical theory of Generative Adversarial Networks (GANs). The relevant theories discussed are game theory, information theory and optimal transport theory.
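For context, the object such a survey centers on is the standard GAN minimax game (quoted here from the common formulation, not taken from the thesis itself):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

The three theories named above each attach to this objective: the min-max structure is a two-player zero-sum game; at the optimal discriminator the objective reduces to a shifted, scaled Jensen-Shannon divergence between the data and generator distributions (information theory); and Wasserstein GANs replace that divergence with an optimal-transport distance.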
27

Adversarial planning by strategy switching in a real-time strategy game

King, Brian D. (Brian David) 12 June 2012 (has links)
We consider the problem of strategic adversarial planning in a Real-Time Strategy (RTS) game. Strategic adversarial planning is the generation of a network of high-level tasks to satisfy goals while anticipating an adversary's actions. In this thesis we describe an abstract state and action space used for planning in an RTS game, an algorithm for generating strategic plans, and a modular architecture for controllers that generate and execute plans. We describe in detail planners that evaluate plans by simulation and select a plan by Game Theoretic criteria. We describe the details of a low-level module of the hierarchy, the combat module. We examine a theoretical performance guarantee for policy switching in Markov Games, and show that policy switching agents can underperform fixed strategy agents. Finally, we present results for strategy switching planners playing against single strategy planners and the game engine's scripted player. The results show that our strategy switching planners outperform single strategy planners in simulation and outperform the game engine's scripted AI. / Graduation date: 2013
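The strategy-switching selection described above can be sketched as simulation plus a game-theoretic choice rule. This is a hedged illustration with an invented payoff table: the thesis evaluates plans by simulating them in an RTS engine, whereas here a lookup table stands in for the simulator, and the switcher picks the strategy with the best worst-case simulated payoff.

```python
# Strategy switching by simulation: pick our strategy with the maximin
# (best worst-case) simulated payoff against the opponent's strategy set.
# The strategy names and payoff values are hypothetical.

def switch_strategy(our_strategies, their_strategies, sim):
    """Return our strategy maximizing the minimum simulated payoff."""
    return max(
        our_strategies,
        key=lambda ours: min(sim(ours, theirs) for theirs in their_strategies),
    )

# Toy payoff cycle: rush beats boom, boom beats turtle, turtle beats rush.
PAYOFF = {
    ("rush", "rush"): 0, ("rush", "boom"): 1, ("rush", "turtle"): -1,
    ("boom", "rush"): -1, ("boom", "boom"): 0, ("boom", "turtle"): 1,
    ("turtle", "rush"): 1, ("turtle", "boom"): -1, ("turtle", "turtle"): 0,
}

best = switch_strategy(["rush", "boom", "turtle"], ["rush", "boom"],
                       lambda a, b: PAYOFF[(a, b)])
print(best)
```

Against an opponent restricted to rush or boom, "rush" has the best worst case (it never loses to either), so the switcher selects it; the thesis's negative result, that switching can underperform a fixed strategy, arises when such worst-case reasoning is applied to inaccurate simulated payoffs.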
28

Rättvisematta / Justice Rug

Cheyne, Bethany Rose January 2018 (has links)
No description available.
29

Perceptions of Growth in Depression: An Interpretative Phenomenological Analysis

January 2014 (has links)
abstract: It is not a new idea that there may be a "silver lining" in depression for some people; that grappling with this condition has the potential to make them stronger or more capable in some way. Over the past three decades, research has proliferated on growth associated with adversity; from life-threatening illness to natural disasters, the death of a loved one, physical abuse, and numerous other forms of trauma. However, very little empirical attention has been paid to the topic of growth resulting from the process of working through psychological distress. Rather, the extant literature tends to consider conditions like depression and anxiety as unsuccessful outcomes, or failed attempts at coping. Furthermore, evidence suggests there is considerable variability in the types of growth perceived by individuals experiencing different forms of adversity. Using interpretative phenomenological analysis (IPA), a qualitative research method, the current study elucidates the experience of growth associated with depression among six individuals from diverse backgrounds. The superordinate themes that emerged from the analysis include: depression as a catalyst for personal development (creative, spiritual, and intellectual); social support and connection; greater presence or engagement in life; a more adaptive and realized sense of self; feelings of gratitude and appreciation; and a recognition of the timing of depression. Each of these themes is examined in relation to participants' processes of meaning making in their experience of growth. The findings of the current study are broadly compatible with, yet qualitatively distinct from, previously identified models of adversarial growth. Implications for future research and clinical practice are discussed. / Dissertation/Thesis / Ph.D. Counseling Psychology 2014
30

Robustness of a neural network used for image classification : The effect of applying distortions on adversarial examples

Östberg, Rasmus January 2018 (has links)
Powerful classifiers such as neural networks have long been used to recognise images; these images might depict objects like animals, people, or plain text. Distortions, for instance those introduced by the camera, affect a neural network's ability to recognise images, and camera-related distortions and their effect on accuracy have previously been explored. Recently, it has been shown that images can be intentionally made harder to recognise, an effect that lasts even after they have been photographed. Such images are known as adversarial examples. The purpose of this thesis is to evaluate how well a neural network can recognise adversarial examples that are also distorted. To evaluate the network, the adversarial examples are distorted in different ways and then fed to the neural network. Different kinds of distortions (rotation, blur, contrast, and skew) were used, and for each type and strength of distortion the network's classification accuracy was measured. It is shown that all distortions influenced the neural network's ability to recognise images. It is concluded that the type and strength of a distortion are important factors when classifying distorted adversarial examples, but also that some distortions, rotation and skew, retain their characteristic influence on the accuracy even when combined with other distortions.
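The distortion pipeline this thesis evaluates can be sketched for two of the four distortion types. This is a minimal illustration under stated assumptions: the image is a nested list of grayscale values in [0, 1], the contrast factor is hypothetical, and the neural network that would consume the distorted example is out of scope.

```python
# Two of the thesis's distortion types applied to a tiny grayscale
# "adversarial example" before classification. Parameter values and the
# 2x2 image are invented for illustration; blur and skew are omitted.

def adjust_contrast(image, factor):
    """Scale each pixel's distance from mid-gray by `factor`, clamped to [0, 1]."""
    return [[min(1.0, max(0.0, 0.5 + factor * (p - 0.5))) for p in row]
            for row in image]

def rotate90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

adversarial_example = [
    [0.1, 0.9],
    [0.4, 0.6],
]
distorted = rotate90(adjust_contrast(adversarial_example, 2.0))
print(distorted)
```

The thesis's experiment then amounts to sweeping the distortion type and strength (here, the contrast factor, or the number of rotations) and recording the classifier's accuracy on the resulting images.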
