111

Tvorba nepřátelských vzorů hlubokými generativními modely / Adversarial examples design by deep generative models

Čermák, Vojtěch January 2021 (has links)
In the thesis, we explore the prospects of creating adversarial examples using various generative models. We design two algorithms to create unrestricted adversarial examples by perturbing the vectors of latent representation and exploiting the target classifier's decision boundary properties. The first algorithm uses linear interpolation combined with bisection to extract candidate samples near the decision boundary of the targeted classifier. The second algorithm applies the idea behind the FGSM algorithm on vectors of latent representation and uses additional information from gradients to obtain better candidate samples. In an empirical study on MNIST, SVHN and CIFAR10 datasets, we show that the candidate samples contain adversarial examples, samples that look like some class to humans but are classified as a different class by machines. Additionally, we show that standard defence techniques are vulnerable to our attacks.
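To make the first of the two boundary-search algorithms concrete, here is a minimal, hypothetical sketch: linear interpolation between two latent vectors combined with bisection to extract a candidate sample near the classifier's decision boundary. The `generator` and `classifier` below are toy stand-ins for illustration only, not the models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))          # toy "generator" weights: latent (2-d) -> sample (8-d)

def generator(z: np.ndarray) -> np.ndarray:
    return np.tanh(z @ W)            # decode a latent vector into a sample

def classifier(x: np.ndarray) -> int:
    return int(x.sum() > 0)          # toy binary classifier

def boundary_candidate(z_a, z_b, steps: int = 30):
    """Bisect along the latent segment until the predicted class flips."""
    c_a = classifier(generator(z_a))
    assert classifier(generator(z_b)) != c_a, "endpoints must differ in class"
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        z_mid = (1 - mid) * z_a + mid * z_b   # linear interpolation in latent space
        if classifier(generator(z_mid)) == c_a:
            lo = mid                           # still on the z_a side
        else:
            hi = mid                           # crossed the decision boundary
    return generator((1 - hi) * z_a + hi * z_b)

z0, z1 = rng.normal(size=2), rng.normal(size=2)
while classifier(generator(z0)) == classifier(generator(z1)):
    z1 = rng.normal(size=2)                    # resample until the classes differ
print(boundary_candidate(z0, z1))              # a candidate sample near the boundary
```

The thesis's second algorithm would replace the interpolation direction with a signed-gradient (FGSM-style) step in latent space.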
112

Defending against Adversarial Attacks in Speaker Verification Systems

Li-Chi Chang (11178210) 26 July 2021 (has links)
With the advance of Internet of Things technologies, smart devices and virtual personal assistants in the home, such as Google Assistant, Apple Siri, and Amazon Alexa, are widely used to control and access different objects, such as door locks, light bulbs, air conditioners, and even bank accounts, making our lives more convenient. Because of its ease of operation, voice control has become a main interface between users and these smart devices. To make voice control more secure, speaker verification systems have been researched that use the human voice as a biometric to accurately identify legitimate users and prevent illegal access. Recent studies, however, have shown that speaker verification systems are vulnerable to different security attacks such as replay, voice cloning, and adversarial attacks. Among these, adversarial attacks are the most dangerous and the most challenging to defend against. Currently, there is no known method that can effectively defend against such attacks in speaker verification systems.

The goal of this project is to design and implement a defense system that is simple, lightweight, and effective against adversarial attacks on speaker verification. To achieve this goal, we study the audio samples produced by adversarial attacks in both the time domain and the Mel spectrogram, and find that the generated adversarial audio is simply clean illegal audio with small perturbations that resemble white noise but are well designed to fool speaker verification. Our intuition is that if these perturbations can be removed or modified, adversarial attacks may lose their attacking ability. We therefore propose adding a plugin-function module that preprocesses the input audio before it is fed into the verification system. As a first attempt, we study two opposite plugin functions: denoising, which attempts to remove or reduce the perturbations, and noise-adding, which adds small Gaussian noise to the input audio. We show through experiments that both methods can significantly degrade the performance of a state-of-the-art adversarial attack. Specifically, denoising and noise-adding reduce the targeted success rate of the attack from 100% to only 56% and 5.2%, respectively. Moreover, noise-adding slows the attack down by a factor of 25 and has only a minor effect on the normal operation of a speaker verification system. We therefore believe that noise-adding can be applied to any speaker verification system as a defense against adversarial attacks. To the best of our knowledge, this is the first attempt to apply the noise-adding method to defend against adversarial attacks in speaker verification systems.
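As an illustration of the noise-adding plugin function described above, a minimal sketch follows; the noise level `sigma` is an assumed illustrative value, not the one tuned in the thesis.

```python
import numpy as np

def add_defensive_noise(waveform: np.ndarray, sigma: float = 0.002) -> np.ndarray:
    """Perturb the audio with zero-mean Gaussian noise to disrupt
    the carefully crafted adversarial perturbation."""
    noisy = waveform + np.random.normal(0.0, sigma, size=waveform.shape)
    return np.clip(noisy, -1.0, 1.0)   # keep samples in the valid amplitude range

# Usage: preprocess before verification, e.g. score = verifier(add_defensive_noise(audio))
audio = np.random.uniform(-1, 1, 16000).astype(np.float32)  # stand-in: 1 s at 16 kHz
defended = add_defensive_noise(audio)
```

The denoising counterpart would apply a denoising filter or a trained denoiser at the same point in the pipeline instead of adding noise.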
113

Adversarial Example Transferability to Quantized Models

Kratzert, Ludvig January 2021 (has links)
Deep learning has proven to be a major leap in machine learning, allowing completely new problems to be solved. While flexible and powerful, neural networks have the disadvantage of being large and demanding high performance from the devices on which they run. In order to deploy neural networks on more, and simpler, devices, techniques such as quantization, sparsification and tensor decomposition have been developed. These techniques have shown promising results, but their effects on model robustness against attacks remain largely unexplored. In this thesis, Universal Adversarial Perturbations (UAP) and the Fast Gradient Sign Method (FGSM) are tested against VGG-19 as well as versions of it compressed using 8-bit quantization, TensorFlow's float16 quantization, and the 8-bit and 4-bit single layer quantization (SLQ) introduced in this thesis. The results show that UAP transfers well to all quantized models, while FGSM transfers well to the float16 quantized model and the 4-bit SLQ model but less well to the 8-bit models. We suggest that this disparity arises because the universal adversarial perturbations were trained on multiple examples rather than just one, which has previously been shown to increase transferability. The results also show that quantizing a single layer, in this case the first layer, can have a disproportionate impact on transferability. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
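For reference, here is a minimal sketch of the FGSM attack evaluated in the thesis, written against a toy PyTorch model rather than VGG-19; the epsilon value is an illustrative assumption.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    """One-step attack: perturb x in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # single signed-gradient step
    return x_adv.clamp(0, 1).detach()      # stay in the valid pixel range

x = torch.rand(1, 3, 32, 32)               # a random stand-in "image"
y = torch.tensor([3])                      # its (arbitrary) true label
x_adv = fgsm(x, y)                         # adversarial version of x
```

Transferability is then measured by feeding `x_adv`, crafted on the full-precision model, to each quantized model and recording how often the misclassification carries over.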
114

Data augmentation for attack detection on IoT Telehealth Systems

Khan, Zaid A. 11 March 2022 (has links)
Telehealth is an online health care system that has been used extensively during the current pandemic. We propose a fog computing-based attack detection architecture to protect IoT telehealth networks. In such networks, the sensor/actuator edge devices are considered the weakest link in the IoT system and are obvious targets of attacks such as botnet attacks. In this thesis, we introduce a novel framework that employs several machine learning and data analysis techniques to detect those attacks. We evaluate the effectiveness of the proposed framework using two publicly available datasets from real-world scenarios. These datasets contain a variety of attacks with different characteristics. The robustness of the proposed framework, and its ability to detect and distinguish between the existing IoT attacks, is tested by combining the two datasets for cross-evaluation. This combination is based on a novel technique for generating supplementary data instances, which employs generative adversarial networks (GANs) for data augmentation and ensures that the numbers of samples and features are balanced. / Graduate
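A hedged sketch of the balancing step follows: a trained generator (the `generator` below is a stand-in, not the thesis model) is sampled until every attack class has the same number of instances as the majority class.

```python
import numpy as np

def generator(label: int, n: int, dim: int = 20) -> np.ndarray:
    """Stand-in for a trained class-conditional GAN generator."""
    return np.random.normal(loc=label, scale=1.0, size=(n, dim))

def balance_with_gan(X: np.ndarray, y: np.ndarray):
    """Augment each under-represented class up to the majority-class count."""
    counts = np.bincount(y)
    target = counts.max()
    X_aug, y_aug = [X], [y]
    for label, count in enumerate(counts):
        if count < target:
            fake = generator(label, target - count, X.shape[1])
            X_aug.append(fake)
            y_aug.append(np.full(target - count, label))
    return np.vstack(X_aug), np.concatenate(y_aug)

X = np.random.randn(130, 20)
y = np.array([0] * 100 + [1] * 30)          # imbalanced: 100 vs 30 samples
Xb, yb = balance_with_gan(X, y)
print(np.bincount(yb))                      # -> [100 100]
```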
115

Uncertainty, Edge, and Reverse-Attention Guided Generative Adversarial Network for Automatic Building Detection in Remotely Sensed Images

Somrita Chattopadhyay (12210671) 18 April 2022 (has links)
Despite recent advances in deep-learning based semantic segmentation, automatic building detection from remotely sensed imagery is still a challenging problem owing to the large variability in the appearance of buildings across the globe. Errors occur mostly around the boundaries of building footprints, in shadow areas, and when detecting buildings whose exterior surfaces have reflectivity properties very similar to those of the surrounding regions. To overcome these problems, we propose a generative adversarial network based segmentation framework with an uncertainty attention unit and a refinement module embedded in the generator. The refinement module, composed of edge and reverse attention units, is designed to refine the predicted building map. The edge attention enhances boundary features to estimate building boundaries with greater precision, and the reverse attention allows the network to explore features missing from the previously estimated regions. The uncertainty attention unit assists the network in resolving uncertainties in classification. As a measure of the power of our approach, as of January 5, 2022, it ranks second on DeepGlobe's public leaderboard, despite the fact that the main focus of our approach, refinement of the building edges, does not align exactly with the metrics used for the leaderboard rankings. Our overall F1-score on DeepGlobe's challenging dataset is 0.745. We also report improvements on the previous best results for the challenging INRIA Validation Dataset, for which our network achieves an overall IoU of 81.28% and an overall accuracy of 97.03%. Along the same lines, for the official INRIA Test Dataset, our network scores 77.86% and 96.41% in overall IoU and accuracy. We have also improved upon the previous best results on two other datasets: for the WHU Building Dataset, our network achieves 92.27% IoU, 96.73% precision, 95.24% recall and a 95.98% F1-score. Finally, for the Massachusetts Buildings Dataset, our network achieves a 96.19% relaxed IoU score and a 98.03% relaxed F1-score over the previous best scores of 91.55% and 96.78% respectively; in terms of non-relaxed F1 and IoU scores, our network outperforms the previous best scores by 2.77% and 3.89% respectively.
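A hedged PyTorch sketch of a reverse-attention unit in the spirit of the refinement module described above: features are re-weighted by the complement of the current prediction so the network attends to regions it has not yet explained. The layer sizes are illustrative assumptions, not the thesis configuration.

```python
import torch
import torch.nn as nn

class ReverseAttention(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, pred: torch.Tensor) -> torch.Tensor:
        # 1 - sigmoid(pred): high weight where the building map is uncertain or missed
        rev = 1.0 - torch.sigmoid(pred)
        refined = feat * rev               # suppress confidently predicted regions
        return pred + self.conv(refined)   # residual update of the prediction

feat = torch.randn(1, 64, 128, 128)        # backbone features
pred = torch.randn(1, 1, 128, 128)         # coarse building map (logits)
new_pred = ReverseAttention()(feat, pred)  # refined building map
```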
116

Generation of Synthetic Images with Generative Adversarial Networks

Zeid Baker, Mousa January 2018 (has links)
Machine learning is a fast-growing area that revolutionizes computer programs by providing systems with the ability to automatically learn and improve from experience. In most cases, the training process begins with extracting patterns from data. Data is a key factor for machine learning algorithms; without data the algorithms will not work. Thus, having sufficient and relevant data is crucial for performance. In this thesis, the researcher tackles the problem of not having a sufficiently large dataset, in terms of the number of training examples, for an image classification task. The idea is to use Generative Adversarial Networks to generate synthetic images similar to the ground truth, and in this way expand a dataset. Two types of experiments were conducted: the first was used to fine-tune a Deep Convolutional Generative Adversarial Network for a specific dataset, while the second was used to analyze how synthetic data examples affect the accuracy of a Convolutional Neural Network on a classification task. Three well-known datasets were used in the first experiment, namely MNIST, Fashion-MNIST and Flower photos, while two datasets were used in the second experiment: MNIST and Fashion-MNIST. The generated MNIST and Fashion-MNIST images had good overall quality: some classes had clear visual errors while others were indistinguishable from ground truth examples. The generated Flower photos, by contrast, suffered from poor visual quality, and one can easily tell the synthetic images from the real ones. One reason for the poor performance is the large amount of noise in the Flower photos dataset, which made it difficult for the model to spot the important features of the flowers. The results from the second experiment show that accuracy does not increase when the two datasets, MNIST and Fashion-MNIST, are expanded with synthetic images. This is not because the generated images had bad visual quality, but because accuracy turned out not to be highly dependent on the number of training examples. It can be concluded that Deep Convolutional Generative Adversarial Networks are capable of generating synthetic images similar to the ground truth and can thus be used to expand a dataset. However, this approach does not completely solve the initial problem of inadequate datasets, because Deep Convolutional Generative Adversarial Networks may themselves require, depending on the dataset, a large quantity of training examples.
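For illustration, a minimal DCGAN-style generator for 28x28 grayscale images (MNIST / Fashion-MNIST scale) might look like the sketch below; the layer widths are illustrative assumptions, not the exact configuration tuned in the thesis.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            # z: (N, 100, 1, 1) -> (N, 128, 7, 7)
            nn.ConvTranspose2d(z_dim, 128, kernel_size=7),
            nn.BatchNorm2d(128), nn.ReLU(True),
            # -> (N, 64, 14, 14)
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(True),
            # -> (N, 1, 28, 28), pixel values in [-1, 1]
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

fake = Generator()(torch.randn(16, 100, 1, 1))   # a batch of synthetic images
print(fake.shape)                                # torch.Size([16, 1, 28, 28])
```

After adversarial training against a convolutional discriminator, samples from such a generator would be appended to the real training set to expand it.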
117

Disocclusion Inpainting using Generative Adversarial Networks

Aftab, Nadeem January 2020 (has links)
The older methods used for image inpainting in the Depth Image Based Rendering (DIBR) process are inefficient at producing high-quality virtual views from captured data. From the viewpoint of the original image, the structure of the generated data appears less distorted in a virtual view obtained by translation, but when the virtual view involves rotation, gaps and missing spaces become visible in the DIBR-generated data. The typical approaches for filling these disocclusions tend to be slow, inefficient, and inaccurate. In this project, a modern technique, the Generative Adversarial Network (GAN), is used to fill the disocclusions. A GAN consists of two or more neural networks that are trained by competing against each other. The results of this study show that a GAN can inpaint the disocclusions while keeping the structure consistent. Additionally, another method (Filling) is used to enhance the quality of the GAN and DIBR images. The statistical evaluation of the results shows that the GAN and the filling method enhance the quality of DIBR images.
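A hedged sketch of the disocclusion-filling step: the generator's output is used only where the DIBR warp left holes, so already-valid pixels remain untouched. The `generated` array below is a stand-in for the trained inpainting GAN's output.

```python
import numpy as np

def fill_disocclusions(warped: np.ndarray, hole_mask: np.ndarray,
                       generated: np.ndarray) -> np.ndarray:
    """Composite: keep warped pixels where valid, use GAN output inside holes."""
    return np.where(hole_mask, generated, warped)

warped = np.random.rand(64, 64, 3)          # DIBR-warped virtual view
hole_mask = np.zeros((64, 64, 1), bool)
hole_mask[20:30, 40:55] = True              # a disoccluded gap left by the warp
generated = np.random.rand(64, 64, 3)       # stand-in GAN inpainting result
result = fill_disocclusions(warped, hole_mask, generated)
```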
118

Imitation Learning based on Generative Adversarial Networks for Robot Path Planning

Yi, Xianyong 24 November 2020 (has links)
Robot path planning with dynamic obstacle avoidance is the problem of planning a feasible path from a given starting point to a destination point in a nonlinear dynamic environment, while safely bypassing dynamic obstacles with minimal deviation from the trajectory. Path planning is a typical sequential decision-making problem, and a dynamic, locally observable environment requires a real-time, adaptive decision-making system. Letting the robot learn its policy directly from demonstration trajectories allows it to adapt to similar state spaces that may appear in the future. We aim to develop a method for learning navigation behavior directly from demonstration trajectories, without defining environment and attention models, by combining the concepts of Generative Adversarial Imitation Learning (GAIL) and the Sequence Generative Adversarial Network (SeqGAN). The proposed SeqGAIL model in this thesis allows the robot to reproduce the desired behavior in different situations. In this model, an adversarial network is established, and the reduction of feature-count errors is used as the forcing objective for the generator; a refinement measure is taken to address the instability problem. In addition, we propose using the Rapidly-exploring Random Tree* (RRT*) with pre-trained weights to generate adequate demonstration trajectories in a dynamic environment as training data, an idea that effectively overcomes the difficulty of acquiring large amounts of training data.
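For context, here is a compact 2-D sketch of the vanilla RRT (the base form of the RRT* planner mentioned above), used only to illustrate how demonstration trajectories could be generated; obstacle checks and the rewiring that makes it RRT* are omitted, and the workspace bounds are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, step=0.5, iters=2000, goal_tol=0.5):
    """Grow a random tree from start; return a path once the goal region is reached."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))  # random workspace point
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]                                        # nearest existing node
        d = math.dist((nx, ny), sample)
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:                      # reached the goal region
            path, j = [], len(nodes) - 1
            while j is not None:                                 # walk back to the root
                path.append(nodes[j]); j = parent[j]
            return path[::-1]
    return None

print(rrt((0.0, 0.0), (9.0, 9.0)))
```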
119

Question-response sequences in the House of Commons : A conversation analytic study of adversarial questioning in the British parliament / Fråga-svar sekvenser i House of Commons : En konversationsanalytisk studie om motstridigt utfrågande i det brittiska parlamentet

Blick, Adam January 2020 (has links)
Using the method of conversation analysis, this study examines the level of adverseness in questions between members of parliament from different parties. The data consist of question-response sequences drawn from a ministerial statement by the prime minister in the House of Commons. The study finds that, in question-response sequences between opposition members of parliament and the prime minister, adversarial presuppositions in questions can be used as a strategy to project negative traits onto the respondent. The adversarial dimensions of hostility, assertiveness and directness can also be found in adversarial questions. In these instances, the respondent may adjust their answer to match the questioner's level of adverseness through the use of certain lexis, creating counter-sequences. Adversarial questions are the most common type of question from members of the opposition party, and different adversarial strategies are used. Questions from members of the governing party do not make use of adversarial strategies and should not be described as adversarial.
120

Time Series Prediction for Stock Price and Opioid Incident Location

January 2019 (has links)
Time series forecasting is the prediction of future data after analyzing past data for temporal trends. This work investigates two time series forecasting problems: stock data prediction and opioid incident prediction. The stock data prediction problem investigates methods for predicting trends in the NYSE and NASDAQ stock markets for ten different companies, nine of which are part of the Dow Jones Industrial Average (DJIA). A novel deep learning model based on a Generative Adversarial Network (GAN) is used to predict future data, and the results are compared with existing regression techniques such as linear, Huber, and Ridge regression and with neural network models such as Long Short-Term Memory (LSTM) models. The opioid incident prediction problem investigates methods for predicting the locations of future opioid overdose incidents from data on past incidents. A similar deep learning model is used to predict the locations of future overdose incidents given two datasets of past incidents (the Connecticut and Cincinnati opioid incidence datasets) and is compared with existing neural network models such as Convolutional LSTMs, attention-based Convolutional LSTMs, and encoder-decoder frameworks. Experimental results on the above-mentioned datasets for both problems show the superiority of the proposed architectures over standard statistical models. / Dissertation/Thesis / Masters Thesis Computer Science 2019
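A hedged sketch of the kind of LSTM baseline the thesis compares against: predicting the next value of a univariate series from a sliding window. The window size, layer width, and synthetic sine-wave "price" series are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NextStepLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)           # (N, T, hidden)
        return self.head(out[:, -1])    # predict from the last time step

series = torch.sin(torch.linspace(0, 20, 200))   # stand-in "price" series
window = 20
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:].unsqueeze(1)                 # next value after each window
model = NextStepLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):                              # brief training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X.unsqueeze(-1)), y)
    loss.backward(); opt.step()
print(loss.item())                               # final training MSE
```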
