41. Deep Learning for Crack-Like Object Detection. Zhang, Kaige, 01 August 2019.
Cracks are common defects on the surfaces of man-made structures such as pavements, bridges, walls of nuclear power plants, and ceilings of tunnels. Timely discovery and repair of cracks is of great importance for keeping infrastructure healthy and preventing further damage. Traditionally, crack inspection was conducted manually, which was labor-intensive, time-consuming, and costly. For example, statistics from the Central Intelligence Agency show that the world's road network has reached 64,285,009 km in length, of which the United States accounts for 6,586,610 km. Maintaining and upgrading such an immense road network is hugely costly. Thus, fully automatic crack detection has received increasing attention.
With the development of artificial intelligence (AI), deep learning has achieved great success and is viewed as the most promising approach to crack detection. Based on deep learning, this research solves four important issues in crack-like object detection. First, the noise caused by textured backgrounds is handled by using a deep classification network to remove non-crack regions before conducting crack detection. Second, computational efficiency is greatly improved. Third, crack localization accuracy is improved. Fourth, the proposed model is very stable and can handle a wide range of crack detection tasks. In addition, this research presents a preliminary study of a future AI system, offering a concept with the potential to realize fully automatic crack detection without human intervention.
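The two-stage idea described above (first filter out non-crack regions with a classifier, then detect cracks only within the surviving regions) can be sketched with a simple darkness heuristic standing in for the trained networks; the thresholds and the synthetic image below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

def is_crack_candidate(patch, dark_ratio=0.05):
    """Stand-in for the deep classification network: keep a patch only if
    enough of its pixels are much darker than the rest of the patch."""
    dark = patch < patch.mean() - 2 * patch.std()
    return dark.mean() > dark_ratio

def detect_cracks(image, patch=32):
    """Two-stage pipeline: stage 1 discards background-only patches,
    stage 2 localizes dark pixels inside the retained patches only."""
    h, w = image.shape
    mask = np.zeros_like(image, dtype=bool)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = image[i:i + patch, j:j + patch]
            if is_crack_candidate(p):                                  # stage 1
                mask[i:i + patch, j:j + patch] = p < p.mean() - 2 * p.std()  # stage 2
    return mask

# Synthetic pavement: bright textured background crossed by a dark crack.
rng = np.random.default_rng(0)
img = rng.normal(0.8, 0.02, size=(64, 64))
img[29:32, :] = 0.1                 # a 3-pixel-wide horizontal "crack"
mask = detect_cracks(img)
print(int(mask[30].sum()), int(mask.sum()))
```

Patches containing only textured background fail the stage-1 ratio test and are never scanned, which is the efficiency and noise-suppression argument made above.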
42. A Discrete Wavelet Transform GAN for Non-Homogeneous Dehazing. Fu, Minghan, January 2021.
Hazy images often suffer from color distortion, blurring, and other visible quality degradation. Some existing CNN-based methods perform well at removing homogeneous haze, but they are not robust in the non-homogeneous case, for two reasons. First, owing to the complicated haze distribution, texture details are easily lost during the dehazing process. Second, since training pairs are hard to collect, training on limited data can easily lead to over-fitting. To tackle these two issues, we introduce a novel dehazing network using the 2D discrete wavelet transform, namely DW-GAN. Specifically, we propose a two-branch network to deal with the aforementioned problems. By utilizing the wavelet transform in the DWT branch, our method retains more high-frequency information in feature maps. To prevent over-fitting, an ImageNet-pretrained Res2Net is adopted in the knowledge-adaptation branch. Owing to the robust feature representations learned from ImageNet pre-training, the generalization ability of our network improves dramatically. Finally, a patch-based discriminator is used to reduce artifacts in the restored images. Extensive experimental results demonstrate that the proposed method outperforms the state of the art both quantitatively and qualitatively. / Thesis / Master of Applied Science (MASc)
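The high-frequency separation that motivates the DWT branch can be illustrated with a minimal one-level 2D Haar transform in NumPy; this is a generic sketch of the transform itself, not DW-GAN's implementation:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT: split an even-sized image into a low-frequency
    approximation LL and three high-frequency detail bands LH, HL, HH."""
    # Horizontal pass: average / difference of adjacent pixel pairs.
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Vertical pass: the same split applied to both horizontal outputs.
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

# A flat image has no detail: all high-frequency bands come out zero.
flat = np.full((8, 8), 3.0)
ll, lh, hl, hh = haar_dwt2(flat)
print(ll.shape, float(np.abs(lh).max() + np.abs(hl).max() + np.abs(hh).max()))
```

The intuition is that the LH/HL/HH bands carry exactly the edge and texture information that is easily lost during dehazing, so keeping them alongside the CNN feature maps preserves detail.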
43. Fibromyalgia Impact and Depressive Symptoms: Can Perceiving a Silver Lining Make a Difference? Hirsch, Jameson K., Treaster, Morgan K., Kaniuka, Andrea R., Brooks, Byron D., Sirois, Fuschia M., Kohls, Niko, Nöfer, Eberhard, Toussaint, Loren L., Offenbächer, Martin, 01 August 2020.
Individuals with fibromyalgia are at greater risk for depressive symptoms than the general population, and this may be partially attributable to physical symptoms that impair day-to-day functioning. However, individual-level protective characteristics may buffer risk for psychopathology. For instance, the ability to perceive a “silver lining” in one’s illness may be related to better mental and physical health. We examined perceived silver lining as a potential moderator of the relation between fibromyalgia impact and depressive symptoms. Our sample of persons with fibromyalgia (N = 401) completed self-report measures including the Fibromyalgia Impact Questionnaire-Revised, Depression Anxiety Stress Scales, and the Silver Lining Questionnaire. Moderation analyses covaried age, sex, and ethnicity. Supporting hypotheses, increasing impact of disease was related to greater depressive symptoms, and perceptions of a silver lining attenuated that association. Despite the linkage between impairment and depressive symptoms, identifying positive aspects or outcomes of illness may reduce risk for psychopathology. Therapeutically promoting perception of a silver lining, perhaps via signature strengths exercises or a blessings journal, and encouraging cognitive reframing of the illness experience, perhaps via Motivational Interviewing or Cognitive Behavioral Therapy, may reduce depressive symptoms in persons with fibromyalgia.
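Moderation of this kind is commonly tested as an ordinary-least-squares regression with an interaction term; the sketch below uses synthetic, standardized data (not the study's sample) to show how a negative impact-by-silver-lining interaction reproduces the attenuation described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
impact = rng.normal(0, 1, n)    # fibromyalgia impact (standardized)
silver = rng.normal(0, 1, n)    # perceived silver lining (standardized)
# Synthetic ground truth: impact raises depressive symptoms, and silver
# lining weakens that slope (negative interaction), mirroring buffering.
depress = 0.5 * impact - 0.2 * silver - 0.3 * impact * silver + rng.normal(0, 0.5, n)

# Moderation = OLS with an interaction column: [1, X, M, X*M].
X = np.column_stack([np.ones(n), impact, silver, impact * silver])
beta, *_ = np.linalg.lstsq(X, depress, rcond=None)
b0, b_impact, b_silver, b_inter = beta
print(round(b_impact, 2), round(b_inter, 2))
```

A negative interaction coefficient means the impact-to-depression slope flattens as silver-lining scores rise, which is the statistical form of the attenuation reported in the abstract.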
44. On Depth and Complexity of Generative Adversarial Networks. Yamazaki, Hiroyuki Vincent, January 2017.
Although generative adversarial networks (GANs) have achieved state-of-the-art results in generating realistic-looking images, they are often parameterized by neural networks with relatively few learnable weights compared to those used for discriminative tasks. We argue that this is suboptimal in a generative setting, where data is often entangled in high-dimensional space and models are expected to benefit from high expressive power. Additionally, in a generative setting a model often needs to extrapolate missing information from a low-dimensional latent space when generating data samples, while in a typical discriminative task the model only needs to extract lower-dimensional features from a high-dimensional space. We evaluate GAN architectures with varying model capacities, using shortcut connections, in order to study the impact of capacity on training stability and sample quality. We show that while training tends to oscillate and not benefit from the additional capacity of naively stacked layers, GANs are capable of generating higher-quality samples, specifically images of higher visual fidelity, given proper regularization and careful balancing. / Although generative adversarial networks (GANs) have succeeded in generating realistic images, they still consist of neural networks parameterized with relatively few trainable weights compared to those used for classification. We believe such a model is suboptimal for generating high-dimensional and complicated data, and argue that models with higher capacity should give better estimates. Moreover, in a generative task a model is expected to extrapolate information from lower to higher dimensions, whereas in a classification task the model only needs to extract low-dimensional information from high-dimensional data.
We evaluate a number of GANs with varying capacities, using shortcut connections, to study how capacity affects training stability and the quality of the generated samples. The results show that training becomes less stable for models given higher capacity through naively added layers, but also that sample quality can increase, specifically for images, yielding images of high visual fidelity. This is achieved with the help of regularization and careful balancing.
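The shortcut connections used here to vary capacity can be sketched as a residual block: with small initial weights the block starts near the identity, which is one reason naively stacked layers without shortcuts are harder to optimize. A minimal NumPy illustration (not the thesis's architecture):

```python
import numpy as np

rng = np.random.default_rng(2)

def residual_block(x, w1, w2):
    """y = x + W2 * relu(W1 * x): the identity shortcut lets an added block
    behave like a no-op when its weights are small, so extra depth does not
    have to disrupt an already-working generator."""
    h = np.maximum(w1 @ x, 0.0)
    return x + w2 @ h

d = 16
x = rng.normal(size=d)
# Near-zero init: the block barely perturbs its input at the start of training.
w1 = 0.01 * rng.normal(size=(d, d))
w2 = 0.01 * rng.normal(size=(d, d))
y = residual_block(x, w1, w2)
print(float(np.linalg.norm(y - x) / np.linalg.norm(x)))
```

The printed relative change is tiny, showing that capacity added through shortcut blocks starts out harmless, unlike a naively stacked layer whose random initial transform replaces the signal entirely.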
45. A Model Extraction Attack on Deep Neural Networks Running on GPUs. O'Brien Weiss, Jonah G, 09 August 2023.
Deep Neural Networks (DNNs) have become ubiquitous due to their performance on prediction and classification problems. However, they face a variety of threats as their usage spreads. Model extraction attacks, which steal DNN models, endanger intellectual property, data privacy, and security. Previous research has shown that system-level side channels can be used to leak the architecture of a victim DNN, exacerbating these risks. We propose a novel DNN architecture extraction attack, called EZClone, which uses aggregate rather than time-series GPU profiles as a side channel to predict DNN architecture. This approach is not only simpler but also requires less adversary capability than earlier works. We investigate the effectiveness of EZClone under various scenarios, including reduced attack complexity, pruned victim models, and GPUs with varied resources. We find that EZClone correctly predicts DNN architectures for the entire set of PyTorch vision architectures with 100% accuracy. No other work has shown this degree of architecture-prediction accuracy under the same adversarial constraints or using aggregate side-channel information. Prior work has shown that, once a DNN has been successfully cloned, further attacks such as model evasion or model inversion can be accelerated significantly. Finally, we evaluate several mitigation techniques against EZClone, showing that carefully inserted dummy computation reduces the success rate of the attack.
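The core EZClone idea, matching an observed aggregate GPU profile against per-architecture signatures, can be sketched as a nearest-signature classifier; the feature set and numbers below are invented for illustration and are not the paper's actual profiles or classifier:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical aggregate side-channel signatures: total time attributed to a
# few kernel categories (e.g. conv, gemm, memcpy, misc) per architecture.
signatures = {
    "resnet18":  np.array([120.0, 40.0, 10.0, 5.0]),
    "vgg16":     np.array([300.0, 15.0, 25.0, 8.0]),
    "mobilenet": np.array([60.0, 10.0, 5.0, 2.0]),
}

def predict_arch(profile):
    """Nearest-signature match: pick the architecture whose aggregate profile
    is closest to the observed one (a sketch of the EZClone idea only)."""
    return min(signatures, key=lambda a: np.linalg.norm(signatures[a] - profile))

# A noisy observation of a vgg16 run should still match vgg16, because
# aggregate profiles separate architectures far more than run-to-run noise.
observed = signatures["vgg16"] + rng.normal(0, 2.0, size=4)
print(predict_arch(observed))
```

The point of using aggregates is visible here: no per-kernel timing order is needed, only totals, which is why the attack demands less adversary capability than time-series approaches.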
46. Detecting Irregular Network Activity with Adversarial Learning and Expert Feedback. Rathinavel, Gopikrishna, 15 June 2022.
Anomaly detection is a ubiquitous and challenging task relevant across many disciplines. Given the vital role communication networks play in our daily lives, the security of these networks is imperative for the smooth functioning of society. This thesis proposes a novel self-supervised deep learning framework, CAAD, for anomaly detection in wireless communication systems. Specifically, CAAD employs powerful adversarial learning and contrastive learning techniques to learn effective representations of normal and anomalous behavior in wireless networks. Rigorous performance comparisons of CAAD with several state-of-the-art anomaly detection techniques have been conducted, verifying that CAAD yields a mean performance improvement of 92.84%. Additionally, CAAD is augmented with the ability to systematically incorporate expert feedback through a novel contrastive learning feedback loop to improve the learned representations and thereby reduce prediction uncertainty (CAAD-EF). CAAD-EF is a novel, holistic, and widely applicable solution to anomaly detection. / Master of Science / Anomaly detection is a technique that can be used to detect abnormal behavior in data. It is a ubiquitous and challenging task relevant across many disciplines. Given the vital role communication networks play in our daily lives, the security of these networks is imperative for the smooth functioning of society. Anomaly detection in such communication networks is essential for ensuring security. This thesis proposes a novel framework, CAAD, for anomaly detection in wireless communication systems. Rigorous performance comparisons of CAAD with several state-of-the-art anomaly detection techniques have been conducted, verifying that CAAD yields a mean performance improvement of 92.84% over state-of-the-art anomaly detection models.
Additionally, CAAD is augmented with the ability to incorporate feedback from experts about whether a sample is normal or anomalous through a novel feedback loop (CAAD-EF). CAAD-EF is a novel, holistic, and widely applicable solution to anomaly detection.
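The contrastive ingredient of CAAD can be sketched with an NT-Xent-style loss that pulls two views of the same sample together and pushes other samples away in embedding space; this is a generic sketch of the loss, not CAAD's training loop or feedback mechanism:

```python
import numpy as np

rng = np.random.default_rng(4)

def nt_xent_pair(z1, z2, negatives, tau=0.5):
    """InfoNCE/NT-Xent-style loss for one positive pair against a set of
    negatives, on L2-normalized embeddings. Lower loss = the pair is easier
    to tell apart from the negatives."""
    z1, z2 = z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2)
    negs = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    pos = np.exp(z1 @ z2 / tau)                 # similarity to the positive view
    neg = np.exp(negs @ z1 / tau).sum()         # similarity to all negatives
    return -np.log(pos / (pos + neg))

anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)   # augmented view: nearby
negatives = rng.normal(size=(16, 8))            # unrelated samples: far
loss_aligned = nt_xent_pair(anchor, positive, negatives)
loss_shuffled = nt_xent_pair(anchor, rng.normal(size=8), negatives)
print(bool(loss_aligned < loss_shuffled))
```

Expert feedback fits this template naturally: a sample an expert labels anomalous can be moved into the negative set for normal anchors, sharpening the learned representation, which is the intuition behind the CAAD-EF loop.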
47. Adversarial Learning based Framework for Anomaly Detection in the Context of Unmanned Aerial Systems. Bhaskar, Sandhya, 18 June 2020.
Anomaly detection aims to identify the data samples that do not conform to a known normal (regular) behavior. As the definition of an anomaly is often ambiguous, unsupervised and semi-supervised deep learning (DL) algorithms, which primarily use unlabeled datasets to model normal (regular) behavior, are popularly studied in this context. An unmanned aerial system (UAS) can use contextual anomaly detection algorithms to identify objects of concern in applications such as search and rescue, disaster management, and public security. This thesis presents a novel multi-stage framework that supports detection of frames containing unknown anomalies, localization of anomalies in the detected frames, and validation of detected frames for incremental semi-supervised learning, with the help of a human operator. The proposed architecture is tested on two new datasets collected for a UAV-based system. In order to detect and localize anomalies, it is important both to model the normal data distribution accurately and to formulate powerful discriminant (anomaly scoring) techniques. We implement a generative adversarial network (GAN)-based anomaly detection architecture to study the effect of loss terms and regularization on the modeling of normal (regular) data, and arrive at the most effective anomaly scoring method for the given application. Following this, we use incremental semi-supervised learning techniques that combine a small set of labeled data (obtained through validation from a human operator) with large unlabeled datasets to improve the knowledge base of the anomaly detection system. / Master of Science / Anomaly detection aims to identify the data samples that do not conform to a known normal (regular) behavior. As the definition of an anomaly is often ambiguous, most techniques use unlabeled datasets to model normal (regular) behaviors.
The availability of large unlabeled datasets, combined with novel applications in various domains, has led to increasing interest in the study of anomaly detection. In particular, an unmanned aerial system (UAS) can use contextual anomaly detection algorithms to identify objects of concern in applications such as search and rescue (SAR), disaster management, and public security. This thesis presents a novel multi-stage framework that supports detection and localization of unknown anomalies, as well as validation of detected anomalies for incremental learning, with the help of a human operator. The proposed architecture is tested on two new datasets collected for a UAV-based system. In order to detect and localize anomalies, it is important both to model the normal data distribution accurately and to formulate powerful discriminant (anomaly scoring) techniques. To this end, we study state-of-the-art generative adversarial network (GAN)-based anomaly detection algorithms for modeling normal (regular) behavior and formulate effective anomaly detection scores. We also propose techniques to incrementally learn new normal data as well as anomalies, using the validation provided by a human operator. This framework is introduced with the aim of supporting temporally critical applications that involve human search and rescue, particularly in disaster management.
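A reconstruction-based anomaly score of the kind studied here can be sketched with PCA standing in for the learned generator: normal data is modeled by a low-dimensional subspace, and the score is the part of a sample the model cannot reconstruct. The data, dimensions, and thresholds below are illustrative assumptions, not the thesis's GAN or datasets:

```python
import numpy as np

rng = np.random.default_rng(5)

# "Normal" data lives near a 2-D subspace of 10-D space; anomalies do not.
basis = rng.normal(size=(10, 2))
normal = rng.normal(size=(500, 2)) @ basis.T + 0.05 * rng.normal(size=(500, 10))

# Stand-in for the trained generator: the top principal components of the
# normal data. Reconstruction through them plays the role of G(z*).
mu = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mu, full_matrices=False)
components = vt[:2]

def anomaly_score(x):
    """Reconstruction-error score: project onto the normal subspace and
    measure what the model of normality cannot explain."""
    recon = mu + (x - mu) @ components.T @ components
    return np.linalg.norm(x - recon)

normal_sample = rng.normal(size=2) @ basis.T    # lies in the normal subspace
anomaly = 3.0 * rng.normal(size=10)             # generic off-subspace point
print(bool(anomaly_score(normal_sample) < anomaly_score(anomaly)))
```

In the GAN setting the same logic applies with a learned generator instead of a linear subspace, and the discriminator's features can be folded into the score as well, which is exactly the design space the loss-term study above explores.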
48. Attack Strategies in Federated Learning for Regression Models: A Comparative Analysis with Classification Models. Leksell, Sofia, January 2024.
Federated Learning (FL) has emerged as a promising approach for decentralized model training across multiple devices while still preserving data privacy. Previous research has predominantly concentrated on classification tasks in FL settings, leaving a noticeable gap in FL research on regression models. This thesis addresses this gap by examining the vulnerabilities of Deep Neural Network (DNN) regression models within FL, with a specific emphasis on adversarial attacks. The primary objective is to examine the impact on model performance of two distinct adversarial attacks: output-flipping and random-weights attacks. The investigation involves training FL models on three distinct datasets, engaging eight clients in the training process. The study varies the number of malicious clients to understand how adversarial attacks influence model performance. Results indicate that the output-flipping attack significantly decreases model performance when at least two malicious clients are involved, while the random-weights attack causes a substantial decrease with just one malicious client out of the eight. It is crucial to note that this study operates at a theoretical level and does not explicitly account for real-world settings such as non-identically-distributed (non-IID) data, extensive datasets, or larger numbers of clients. In conclusion, this study contributes to the understanding of adversarial attacks in FL, specifically for DNN regression models. The results highlight the importance of defending FL models against adversarial attacks and the significance of future research in this domain.
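Both attacks can be sketched in a toy federated-averaging loop for linear regression; the learning rate, client count, and shared toy dataset are illustrative assumptions, not the thesis's DNN experimental setup:

```python
import numpy as np

rng = np.random.default_rng(6)

true_w = np.array([1.0, -2.0])
X = rng.normal(size=(64, 2))
y = X @ true_w

def local_update(w, lr=0.1, flip=False):
    """One gradient step of least-squares regression on a client.
    An output-flipping attacker trains on negated targets."""
    targets = -y if flip else y                    # output-flipping attack
    grad = 2 * X.T @ (X @ w - targets) / len(targets)
    return w - lr * grad

def fed_round(w, attackers):
    updates = []
    for client in range(8):
        if client in attackers["random"]:
            updates.append(rng.normal(size=2))     # random-weights attack
        else:
            updates.append(local_update(w, flip=client in attackers["flip"]))
    return np.mean(updates, axis=0)                # FedAvg aggregation

def final_error(attackers):
    w = np.zeros(2)
    for _ in range(200):
        w = fed_round(w, attackers)
    return np.linalg.norm(w - true_w)

clean = final_error({"flip": set(), "random": set()})
flipped = final_error({"flip": {0, 1}, "random": set()})
randomized = final_error({"flip": set(), "random": {0}})
print(clean, flipped, randomized)
```

Even in this linear toy, two output-flipping clients drag the average toward a scaled-down model, and a single random-weights client injects noise every round that the honest majority can never fully average out, mirroring the relative severity reported above.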
49. Generative Adversarial Network for Point Cloud Upsampling. Widell Delgado, Edison, January 2024.
Point clouds are a widely used representation for the collection and application of 3D data, but the data gathered are often too sparse to be used reliably in any application. This thesis therefore presents a GAN-based upsampling method within a patch-based approach, together with a GCN-based feature extractor, in an attempt to enhance the density and reliability of point cloud data. Our approach is rigorously compared with existing methods to assess its performance. The thesis also draws correlations between input sizes and how the quality of the inputs affects the upsampled result. The GAN is also applied to real-world data to assess the viability of its current state, and to test how it is affected by the interference that occurs in an unsupervised scenario.
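The patch-based interface can be sketched geometrically, with midpoint insertion toward each point's nearest neighbor standing in for the learned GAN generator; this shows only the patch-in, denser-patch-out contract, not the thesis's network:

```python
import numpy as np

rng = np.random.default_rng(7)

def upsample_patch(points):
    """Naive 2x patch upsampler: for each point, emit a new point halfway to
    its nearest neighbor. A GAN-based upsampler replaces this geometric rule
    with a generator trained to place points on the underlying surface."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    nn = points[d.argmin(axis=1)]        # nearest neighbor of each point
    return np.concatenate([points, (points + nn) / 2.0], axis=0)

# A sparse 128-point patch becomes a 256-point patch; full clouds are
# processed patch by patch and the results merged.
patch = rng.normal(size=(128, 3))
dense = upsample_patch(patch)
print(patch.shape, dense.shape)
```

Working per patch keeps the input size to the generator fixed regardless of cloud size, which is also what makes the input-size and input-quality correlations studied above meaningful.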