111

Adversarial Example Transferability to Quantized Models

Kratzert, Ludvig January 2021 (has links)
Deep learning has proven to be a major leap in machine learning, allowing completely new problems to be solved. While flexible and powerful, neural networks have the disadvantage of being large and demanding high performance from the devices on which they are run. In order to deploy neural networks on more, and simpler, devices, techniques such as quantization, sparsification and tensor decomposition have been developed. These techniques have shown promising results, but their effects on model robustness against attacks remain largely unexplored. In this thesis, Universal Adversarial Perturbations (UAP) and the Fast Gradient Sign Method (FGSM) are tested against VGG-19 as well as versions of it compressed using 8-bit quantization, TensorFlow's float16 quantization, and the 8-bit and 4-bit single layer quantization (SLQ) introduced in this thesis. The results show that UAP transfers well to all quantized models, while FGSM transfers well to the float16 quantized model, less well to the 8-bit models, and well to the 4-bit SLQ model. We suggest that this disparity arises from the universal adversarial perturbations having been trained on multiple examples rather than just one, which has previously been shown to increase transferability. The results also show that quantizing a single layer, in this case the first layer, can have a disproportionate impact on transferability. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
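As context for the attack studied in this entry, below is a minimal sketch of the single-step FGSM attack in PyTorch, assuming a classifier with inputs scaled to [0, 1]; the epsilon value and model are illustrative and do not reproduce the thesis's VGG-19 or quantization pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """One-step FGSM: move each input in the direction of the sign of the
    loss gradient, then clip back to the valid image range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()                       # populates x.grad
    x_adv = x + epsilon * x.grad.sign()   # adversarial step
    return x_adv.clamp(0.0, 1.0).detach()
```

Transferability, as studied above, would then be measured by feeding `x_adv` crafted against the full-precision model into each quantized variant and comparing accuracy drops.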
112

Data augmentation for attack detection on IoT Telehealth Systems

Khan, Zaid A. 11 March 2022 (has links)
Telehealth is an online health care system that has been used extensively during the current pandemic. We propose a fog computing-based attack detection architecture to protect IoT telehealth networks. In such networks, the sensor/actuator edge devices are the weakest link in the IoT system and are obvious targets of attacks such as botnet attacks. In this thesis, we introduce a novel framework that employs several machine learning and data analysis techniques to detect those attacks. We evaluate the effectiveness of the proposed framework using two publicly available datasets from real-world scenarios, containing a variety of attacks with different characteristics. The robustness of the proposed framework and its ability to detect and distinguish between the existing IoT attacks are tested by combining the two datasets for cross-evaluation. This combination is based on a novel technique for generating supplementary data instances, which employs generative adversarial networks (GANs) for data augmentation and ensures that the number of samples and features is balanced. / Graduate
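A minimal sketch of GAN-based augmentation of minority-class attack samples, in the spirit of the data-balancing step described above; the feature dimension, network sizes and hyperparameters are illustrative assumptions, not the thesis's actual pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 20 network-flow features, 16-dimensional noise.
N_FEATURES, NOISE_DIM = 20, 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def augmentation_step(real_minority_batch):
    """One GAN update on real minority-class samples; returns synthetic samples
    that can be appended to the training set to balance the classes."""
    batch = real_minority_batch.size(0)
    fake = generator(torch.randn(batch, NOISE_DIM))

    # Discriminator: real samples -> 1, generated samples -> 0.
    d_loss = bce(discriminator(real_minority_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return fake.detach()
```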
113

Uncertainty, Edge, and Reverse-Attention Guided Generative Adversarial Network for Automatic Building Detection in Remotely Sensed Images

Chattopadhyay, Somrita 18 April 2022 (has links)
Despite recent advances in deep-learning-based semantic segmentation, automatic building detection from remotely sensed imagery is still a challenging problem owing to large variability in the appearance of buildings across the globe. The errors occur mostly around the boundaries of the building footprints, in shadow areas, and when detecting buildings whose exterior surfaces have reflectivity properties very similar to those of the surrounding regions. To overcome these problems, we propose a generative adversarial network based segmentation framework with an uncertainty attention unit and a refinement module embedded in the generator. The refinement module, composed of edge and reverse attention units, is designed to refine the predicted building map. The edge attention enhances boundary features to estimate building boundaries with greater precision, and the reverse attention allows the network to explore features missing in the previously estimated regions. The uncertainty attention unit assists the network in resolving uncertainties in classification. As a measure of the power of our approach, as of January 5, 2022, it ranks second on DeepGlobe's public leaderboard, even though the main focus of our approach, refinement of the building edges, does not align exactly with the metrics used for leaderboard rankings. Our overall F1-score on DeepGlobe's challenging dataset is 0.745. We also report improvements on the previous best results for the challenging INRIA Validation Dataset, for which our network achieves an overall IoU of 81.28% and an overall accuracy of 97.03%. Along the same lines, for the official INRIA Test Dataset, our network scores 77.86% and 96.41% in overall IoU and accuracy. We have also improved upon the previous best results on two other datasets: for the WHU Building Dataset, our network achieves 92.27% IoU, 96.73% precision, 95.24% recall and 95.98% F1-score; and for the Massachusetts Buildings Dataset, our network achieves a 96.19% relaxed IoU score and a 98.03% relaxed F1-score over the previous best scores of 91.55% and 96.78% respectively, and in terms of non-relaxed F1 and IoU scores, it outperforms the previous best scores by 2.77% and 3.89% respectively.
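To illustrate the reverse-attention idea described above (re-weighting features by the complement of the current prediction so the network revisits regions it has not yet claimed as building), here is a small PyTorch sketch; the layer sizes and placement are assumptions, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class ReverseAttention(nn.Module):
    """Illustrative reverse-attention unit: multiply encoder features by
    (1 - sigmoid(coarse prediction)) and predict a residual correction."""
    def __init__(self, in_channels):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 1, 1),   # residual correction to the map
        )

    def forward(self, features, coarse_logits):
        # Probability of 'building' per pixel, then its complement.
        reverse = 1.0 - torch.sigmoid(coarse_logits)    # (B, 1, H, W)
        attended = features * reverse                   # focus on missed regions
        return coarse_logits + self.refine(attended)    # refined building map
```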
114

Generation of Synthetic Images with Generative Adversarial Networks

Zeid Baker, Mousa January 2018 (has links)
Machine learning is a fast-growing area that revolutionizes computer programs by providing systems with the ability to automatically learn and improve from experience. In most cases, the training process begins with extracting patterns from data. The data is a key factor for machine learning algorithms; without data the algorithms will not work. Thus, having sufficient and relevant data is crucial for performance. In this thesis, the researcher tackles the problem of not having a sufficiently large dataset, in terms of the number of training examples, for an image classification task. The idea is to use Generative Adversarial Networks to generate synthetic images similar to the ground truth and in this way expand a dataset. Two types of experiments were conducted: the first fine-tuned a Deep Convolutional Generative Adversarial Network (DCGAN) for a specific dataset, while the second analyzed how synthetic data examples affect the accuracy of a Convolutional Neural Network in a classification task. Three well-known datasets were used in the first experiment, namely MNIST, Fashion-MNIST and Flower photos, while two datasets were used in the second experiment: MNIST and Fashion-MNIST. The generated MNIST and Fashion-MNIST images had good overall quality; some classes had clear visual errors while others were indistinguishable from ground-truth examples. For the Flower photos, the generated images suffered from poor visual quality, and one can easily tell the synthetic images from the real ones. One reason for the poor performance is the large quantity of noise in the Flower photos dataset, which made it difficult for the model to spot the important features of the flowers. The results from the second experiment show that accuracy does not increase when the two datasets, MNIST and Fashion-MNIST, are expanded with synthetic images. This is not because the generated images had bad visual quality, but because the accuracy turned out not to be highly dependent on the number of training examples. It can be concluded that Deep Convolutional Generative Adversarial Networks are capable of generating synthetic images similar to the ground truth and thus can be used to expand a dataset. However, this approach does not completely solve the initial problem of not having adequate datasets, because Deep Convolutional Generative Adversarial Networks may themselves require, depending on the dataset, a large quantity of training examples.
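A minimal sketch of a DCGAN-style generator at MNIST / Fashion-MNIST resolution, the kind of generator the first experiment above fine-tunes; the filter counts and noise dimension are illustrative assumptions rather than the thesis's exact configuration.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Sketch of a DCGAN-style generator for 28x28 grayscale images."""
    def __init__(self, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # noise_dim x 1 x 1 -> 128 x 7 x 7
            nn.ConvTranspose2d(noise_dim, 128, kernel_size=7, stride=1, padding=0),
            nn.BatchNorm2d(128), nn.ReLU(True),
            # 128 x 7 x 7 -> 64 x 14 x 14
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(True),
            # 64 x 14 x 14 -> 1 x 28 x 28
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # synthetic images in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Usage: sample a batch of synthetic digits to append to the training set.
generator = DCGANGenerator()
synthetic = generator(torch.randn(64, 100))   # (64, 1, 28, 28)
```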
115

Disocclusion Inpainting using Generative Adversarial Networks

Aftab, Nadeem January 2020 (has links)
The older methods used for image inpainting in the Depth Image Based Rendering (DIBR) process are inefficient at producing high-quality virtual views from captured data. Seen from the viewpoint of the original image, the structure of the generated data appears only slightly distorted in a virtual view obtained by translation, but when the virtual view involves rotation, gaps and missing regions become visible in the DIBR-generated data. The typical approaches for filling these disocclusions tend to be slow, inefficient, and inaccurate. In this project, a modern technique, the Generative Adversarial Network (GAN), is used to fill the disocclusions. A GAN consists of two or more neural networks that compete against each other during training. The results of this study show that a GAN can inpaint disocclusions while keeping the structure consistent. Additionally, another method (filling) is used to enhance the quality of the GAN and DIBR images. The statistical evaluation of the results shows that the GAN and the filling method enhance the quality of DIBR images.
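A hedged sketch of a generator objective for hole-only inpainting, assuming a binary mask that marks the DIBR disocclusions; the loss composition and weighting are illustrative, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def inpainting_generator_loss(generated, target, hole_mask, d_fake_logits,
                              adv_weight=0.01):
    """Illustrative generator objective for disocclusion inpainting:
    an L1 reconstruction term restricted to the disoccluded region, plus an
    adversarial term that encourages globally consistent structure.
    `hole_mask` is 1 where DIBR left gaps and 0 elsewhere (an assumption)."""
    recon = F.l1_loss(generated * hole_mask, target * hole_mask)
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return recon + adv_weight * adv
```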
116

Imitation Learning based on Generative Adversarial Networks for Robot Path Planning

Yi, Xianyong 24 November 2020 (has links)
Robot path planning and dynamic obstacle avoidance are defined as the problem of a robot planning a feasible path from a given starting point to a destination point in a nonlinear dynamic environment, safely bypassing dynamic obstacles and reaching the destination with minimal deviation from the trajectory. Path planning is a typical sequential decision-making problem. A dynamic, locally observable environment requires a real-time, adaptive decision-making system. Learning the policy directly from demonstration trajectories, so that the robot can adapt to similar state spaces that may appear in the future, is an innovation. We aim to develop a method for directly learning navigation behavior from demonstration trajectories, without defining environment and attention models, by using the concepts of Generative Adversarial Imitation Learning (GAIL) and the Sequence Generative Adversarial Network (SeqGAN). The proposed SeqGAIL model in this thesis allows the robot to reproduce the desired behavior in different situations. In this model, an adversarial net is established, and the reduction of feature-count errors is utilized as the forcing objective for the generator. A refinement measure is taken to solve the instability problem. In addition, we propose to use the Rapidly-exploring Random Tree* (RRT*) with pre-trained weights to generate adequate demonstration trajectories in a dynamic environment as the training data; this idea effectively overcomes the difficulty of acquiring large amounts of training data.
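An illustrative sketch of the GAIL-style adversarial component referred to above: a discriminator over (state, action) pairs is trained to separate expert demonstrations (here, RRT*-generated paths) from policy rollouts, and its output serves as a surrogate reward. The dimensions and reward form are assumptions, not the thesis's SeqGAIL model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GAILDiscriminator(nn.Module):
    """Minimal GAIL-style discriminator over (state, action) pairs."""
    def __init__(self, state_dim=4, action_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        # Logit: positive if the transition looks like an expert demonstration.
        return self.net(torch.cat([state, action], dim=-1))

def gail_reward(disc, state, action):
    """Surrogate reward for the policy: higher when the discriminator
    believes the transition came from the expert demonstrations."""
    with torch.no_grad():
        return F.logsigmoid(disc(state, action))
```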
117

Question-response sequences in the House of Commons: A conversation analytic study of adversarial questioning in the British parliament

Blick, Adam January 2020 (has links)
Using the method of conversation analysis, this study examines the level of adverseness in questions between members of parliament from different parties. The data consists of question-response sequences taken from a ministerial statement by the prime minister in the House of Commons. This study finds that, in question-response sequences between opposition members of parliament and the prime minister, adversarial presuppositions in questions can be used as a strategy to project negative traits onto the respondent. Adversarial dimensions of hostility, assertiveness and directness can also be found in adversarial questions. In these instances, the respondent may adjust their answer to match the questioner's level of adverseness through the use of certain lexis, creating counter sequences. Adversarial questions are the most common type of question from members of the opposition party, and different adversarial strategies are used. Questions from members of the governing party do not make use of adversarial strategies and should not be described as adversarial.
118

Time Series Prediction for Stock Price and Opioid Incident Location

January 2019 (has links)
Time series forecasting is the prediction of future data after analyzing past data for temporal trends. This work investigates two fields of time series forecasting: stock data prediction and opioid incident prediction. The stock data prediction problem investigates methods for predicting trends in the NYSE and NASDAQ stock markets for ten different companies, nine of which are part of the Dow Jones Industrial Average (DJIA). A novel deep learning model based on a Generative Adversarial Network (GAN) is used to predict future data, and the results are compared with existing regression techniques such as linear, Huber, and Ridge regression and with neural network models such as Long Short-Term Memory (LSTM) models. The opioid incident prediction problem investigates methods for predicting the location of future opioid overdose incidents using past incident data. A similar deep learning model is used to predict the location of future overdose incidents given two datasets of past incidents (the Connecticut and Cincinnati opioid incident datasets) and is compared with existing neural network models such as convolutional LSTMs, attention-based convolutional LSTMs, and encoder-decoder frameworks. Experimental results on the above datasets for both problems show the superiority of the proposed architectures over standard statistical models. / Dissertation/Thesis / Masters Thesis Computer Science 2019
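A minimal sketch of the generator side of a GAN-based time-series forecaster, assuming an LSTM generator that predicts the next value from a window of past prices while a discriminator (not shown) judges real versus predicted continuations; all dimensions are illustrative, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class SeriesGenerator(nn.Module):
    """Illustrative generator for GAN-based time-series forecasting:
    encode a window of past values with an LSTM, predict the next step."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, window):            # window: (batch, time, n_features)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])      # next-step prediction: (batch, n_features)

# Usage: predict the next price from a 30-step window of one feature.
gen = SeriesGenerator()
next_value = gen(torch.randn(8, 30, 1))   # (8, 1)
```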
119

Adversarial Framework with Temperature as a Regularizer for Semantic Segmentation

Kim, Chanho 14 January 2022 (has links)
Semantic segmentation processes RGB scenes and classifies pixels collectively into object classes. Recent deep learning methods have shown promising results in both the accuracy and the speed of semantic segmentation. However, because of their data-centric nature, deep learning models inevitably tend to overfit the data used in training. Numerous regularization methods exist to overcome overfitting, such as data augmentation, additional loss terms such as Euclidean or least-squares terms, and structural methods that add or modify layers, like Dropout and DropConnect. Among these methods, penalizing a model via an additional loss or a weight constraint does not require an increase in memory. With this in mind, our work aims to improve a given segmentation model through temperatures and a lightweight discriminator. Temperatures generate different versions of the probability maps through the division applied in the softmax calculation. On top of the probability maps produced with temperatures, we attach a simple discriminator to the segmentation network so that ground-truth feature maps and modified feature maps compete. We pass the additional loss calculated from those probability maps back into the principal network. Our contribution consists of two parts. First, we use the adversarial loss as the regularization loss in the segmentation networks and validate that it can substitute for the L2 regularization loss with better validation results. Second, we apply temperatures to the segmentation probability maps to provide different information without using additional convolutional layers. The experiments indicate that spiking the temperature in the generator while keeping the original probability map in the discriminator improves the model in terms of pixel accuracy and mean Intersection-over-Union (mIoU). Our framework shows that the segmentation model can be improved with a small increase in training time and the number of parameters.
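A short sketch of temperature-scaled softmax over segmentation logits, illustrating the mechanism described above in which dividing by a temperature yields a different version of the same probability map; the class count is an assumed example, not the thesis's setup.

```python
import torch
import torch.nn.functional as F

def temperature_probability_map(logits, temperature=2.0):
    """Temperature-scaled softmax over class logits of shape (B, C, H, W).
    T > 1 softens the probability map; T = 1 recovers the ordinary map."""
    return F.softmax(logits / temperature, dim=1)

# Example: a sharp and a softened map for the same prediction.
logits = torch.randn(1, 19, 8, 8)          # e.g. 19 classes (assumed)
p_sharp = temperature_probability_map(logits, temperature=1.0)
p_soft = temperature_probability_map(logits, temperature=4.0)
```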
120

Abusive and Hate Speech Tweets Detection with Text Generation

Nalamothu, Abhishek 06 September 2019 (has links)
No description available.
