121 |
Adversarial Framework with Temperature as a Regularizer for Semantic Segmentation. Kim, Chanho, 14 January 2022
Semantic segmentation processes RGB scenes and classifies groups of pixels as objects. Recent deep learning methods have shown promising results in both the accuracy and the speed of semantic segmentation. However, because these models are data-centric, they inevitably tend to overfit the training data.
Numerous regularization methods have been proposed to counter overfitting, including data augmentation, additional loss terms such as Euclidean or least-squares penalties, and structural methods that add or modify layers, such as Dropout and DropConnect. Among these, penalizing a model through an additional loss or a weight constraint requires no increase in memory.
With this in mind, our work aims to improve a given segmentation model through temperatures and a lightweight discriminator. Temperatures generate different versions of the probability maps by dividing the logits inside the softmax calculation. On top of the temperature-scaled probability maps, we attach a simple discriminator after the segmentation network so that ground-truth feature maps compete against the modified feature maps. The additional loss computed from those probability maps is passed back into the principal network.
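As a rough illustration (not the thesis's implementation), the sketch below shows how a temperature T reshapes a per-pixel softmax probability map; the logit array and the temperature values are hypothetical.

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Temperature-scaled softmax over the class axis of a (C, H, W) logit map."""
    z = logits / T                        # dividing by T sharpens (T < 1) or flattens (T > 1) the map
    z = z - z.max(axis=0, keepdims=True)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

# Hypothetical per-pixel logits for 3 classes on a 2x2 image.
logits = np.random.randn(3, 2, 2)
for T in (0.5, 1.0, 2.0):                 # different temperatures give different probability maps
    probs = softmax_with_temperature(logits, T)
    print(T, probs[:, 0, 0])              # class distribution at one pixel
```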
Our contribution consists of two parts. First, we use the adversarial loss as the regularization loss in segmentation networks and validate that it can substitute for the L2 regularization loss while yielding better validation results. Second, we apply temperatures to the segmentation probability maps to provide different information without adding convolutional layers.
The experiments indicate that spiking the temperature in the generator while keeping the original probability map in the discriminator improves the model in terms of pixel accuracy and mean Intersection-over-Union (mIoU). Our framework shows that the segmentation model can be improved with only a small increase in training time and in the number of parameters.
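A minimal sketch of the idea of swapping an L2 weight penalty for an adversarial regularization term; the weighting factor and function names below are hypothetical and not taken from the thesis.

```python
import numpy as np

def total_loss(seg_loss, weights, disc_score_on_fake, mode="adversarial", lam=0.01):
    """Combine the main segmentation loss with one of two regularizers.

    seg_loss: scalar cross-entropy of the segmentation network
    weights: flat array of network parameters (used only by the L2 variant)
    disc_score_on_fake: discriminator output on the temperature-modified map, in (0, 1)
    """
    if mode == "l2":
        reg = np.sum(weights ** 2)                  # classic weight-decay style penalty
    else:
        reg = -np.log(disc_score_on_fake + 1e-8)    # generator-style adversarial term
    return seg_loss + lam * reg
```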
|
122 |
Abusive and Hate Speech Tweets Detection with Text Generation. Nalamothu, Abhishek, 06 September 2019
No description available.
|
123 |
Text-Based Speech Video Synthesis from a Single Face Image. Zheng, Yilin, January 2019
No description available.
|
124 |
Semi Supervised Learning for Accurate Segmentation of Roughly Labeled Data. Rajan, Rachel, 01 September 2020
No description available.
|
125 |
Generating a synthetic dataset for kidney transplantation using generative adversarial networks and categorical logit encoding. Bartocci, John Timothy, 24 May 2021
No description available.
|
126 |
Vytváření matoucích vzorů ve strojovém učení / Creating Adversarial Examples in Machine Learning. Kumová, Věra, January 2021
This thesis examines adversarial examples in machine learning, specifically in the image classification domain. State-of-the-art deep learning models are able to recognize patterns better than humans. However, we can significantly reduce the model's accuracy by adding imperceptible, yet intentionally harmful noise. This work investigates various methods of creating adversarial images as well as techniques that aim to defend deep learning models against these malicious inputs. We choose one of the contemporary defenses and design an attack that utilizes evolutionary algorithms to deceive it. Our experiments show an interesting difference between adversarial images created by evolution and images created with the knowledge of gradients. Last but not least, we test the transferability of our created samples between various deep learning models.
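To make the evolutionary idea concrete, here is a minimal, hypothetical sketch of a black-box attack that evolves a bounded noise pattern until a classifier's confidence in the true label drops; the `predict_proba` function, image shape, and hyperparameters are assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def evolve_adversarial_noise(image, true_label, predict_proba,
                             pop_size=20, generations=100, eps=0.03, sigma=0.01):
    """Evolve an additive perturbation (clipped to +/- eps) that lowers the
    classifier's confidence in `true_label`, using only the model's outputs."""
    rng = np.random.default_rng(0)
    population = rng.uniform(-eps, eps, size=(pop_size,) + image.shape)
    for _ in range(generations):
        # Fitness: lower confidence in the true label is better.
        scores = np.array([predict_proba(np.clip(image + p, 0, 1))[true_label]
                           for p in population])
        elite = population[np.argsort(scores)[:pop_size // 2]]      # keep the best half
        children = elite + rng.normal(0, sigma, size=elite.shape)   # mutate the survivors
        population = np.clip(np.concatenate([elite, children]), -eps, eps)
    best = population[np.argmin([predict_proba(np.clip(image + p, 0, 1))[true_label]
                                 for p in population])]
    return np.clip(image + best, 0, 1)
```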
|
127 |
Adversarial Attacks and Defense Mechanisms to Improve Robustness of Deep Temporal Point Processes. Khorshidi, Samira, 08 1900
Indiana University-Purdue University Indianapolis (IUPUI)
Temporal point processes (TPP) are mathematical approaches for modeling asynchronous event sequences by considering the temporal dependency of each event on past events and its instantaneous rate. Temporal point processes can model various problems, from earthquake aftershocks, trade orders, gang violence, and reported crime patterns, to network analysis, infectious disease transmissions, and virus spread forecasting. In each of these cases, the entity's behavior with the corresponding information is noted over time as an asynchronous event sequence, and the analysis is done using temporal point processes, which provides a means to define the generative mechanism of the sequence of events and ultimately predict events and investigate causality.
Among point processes, the Hawkes process, as a stochastic point process, is able to model a wide range of contagious and self-exciting patterns. One of the Hawkes process's well-known applications is predicting the evolution of viral processes on networks, which is an important problem in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used to predict viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks. Recent attempts have been made to use assortativity to address this shortcoming. This thesis illustrates how the evolution of such a viral process is sensitive to the underlying network's structure.
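For readers unfamiliar with the model, the sketch below evaluates a standard exponential-kernel Hawkes conditional intensity, lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)); the parameter values and event times are hypothetical, and the branching ratio alpha/beta is the usual criterion separating sub-critical from super-critical regimes.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.5):
    """Conditional intensity of a Hawkes process with exponential kernel:
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    past = np.asarray([ti for ti in history if ti < t])
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

events = [0.5, 1.1, 1.3, 2.0]             # hypothetical event times
print(hawkes_intensity(2.5, events))       # self-excitation raises the rate right after bursts
print("branching ratio:", 0.8 / 1.5)       # < 1: sub-critical (stable); > 1: super-critical
```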
In Chapter 3, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution combined with assortativity to explain variations in the evolution of viral processes across networks with identical degree distribution. Using a data-driven approach, by coupling predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results highlight the importance of graphlets and identify a small collection of graphlets that may have the most significant influence over the viral processes on a network.
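As an illustration of the data-driven approach described above (not the thesis's exact pipeline), the sketch below fits an ordinary least-squares model that predicts a virality score from graphlet frequencies plus assortativity and reports the explained variance; the feature matrix and target values are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical features per network: frequencies of 5 graphlets plus assortativity.
X = rng.random((50, 6))                          # 50 networks, 6 features
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5, 3.0])
y = X @ true_w + rng.normal(0, 0.1, 50)          # hypothetical virality measure

X1 = np.hstack([X, np.ones((50, 1))])            # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)       # simple regression, as in the abstract
pred = X1 @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"explained variance (R^2): {r2:.3f}")
```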
Due to the flexibility and expressiveness of deep learning techniques, several neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on possible adversarial attacks against such models and on their robustness to adversarial attacks and natural shocks to systems. Furthermore, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown.
In Chapter 4, we propose several white-box and black-box adversarial attacks against deep temporal point processes. Additionally, we investigate the transferability of white-box adversarial attacks against point processes modeled by deep neural networks, which poses a more elevated risk. Extensive experiments confirm that neural point processes are vulnerable to adversarial attacks. This vulnerability is illustrated both in terms of predictive metrics and in the effect of attacks on the underlying point process's parameters. Specifically, adversarial attacks successfully shift the temporal Hawkes process regime from sub-critical to super-critical and manipulate the modeled parameters, which is a particular risk for parametric modeling approaches. Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using crime and Covid-19 pandemic data as examples.
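The following is a minimal, hypothetical sketch of the kind of black-box attack described above: it perturbs event times by small random shifts and keeps the perturbation that most degrades the model's fit; `neg_log_likelihood` stands in for a trained deep point process model and is an assumption, not the thesis's attack.

```python
import numpy as np

def black_box_attack(event_times, neg_log_likelihood, eps=0.05, trials=200, seed=0):
    """Search random time perturbations within +/- eps and return the sequence
    that maximizes the model's negative log-likelihood (i.e., hurts it most)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(event_times, dtype=float)
    best_seq, best_loss = t, neg_log_likelihood(t)
    for _ in range(trials):
        cand = np.sort(np.clip(t + rng.uniform(-eps, eps, size=t.shape), 0, None))
        loss = neg_log_likelihood(cand)
        if loss > best_loss:                      # higher NLL = worse fit for the model
            best_seq, best_loss = cand, loss
    return best_seq
```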
Despite the success of deep learning techniques in modeling temporal point processes, the vulnerability of deep-learning models, including deep temporal point processes, to adversarial attacks makes it essential to ensure the robustness of the deployed algorithms.
In Chapter 5, we study the robustness of deep temporal point processes against several proposed adversarial attacks from the adversarial defense viewpoint. Specifically, we investigate the effectiveness of adversarial training using universal adversarial samples in improving the robustness of deep point processes. Additionally, we propose a general point process domain-adopted (GPDA) regularization, applicable strictly to temporal point processes, to reduce the effect of adversarial attacks and obtain an empirically robust model. Unlike other, computationally expensive approaches, this approach requires no additional back-propagation in the training step and no further network. Ultimately, we propose an adversarial detection framework that is trained in the Generative Adversarial Network (GAN) manner and solely on clean training data.
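A hedged sketch of adversarial training with universal adversarial samples, in the spirit of the defense described above; the model interface, the universal perturbation, and the mixing ratio are assumptions rather than the thesis's GPDA formulation.

```python
import numpy as np

def adversarial_training_epoch(sequences, universal_delta, train_step, mix=0.5, seed=0):
    """One epoch of adversarial training: each clean event sequence is, with
    probability `mix`, replaced by its universally perturbed version before the
    ordinary gradient update `train_step` is applied."""
    rng = np.random.default_rng(seed)
    for seq in sequences:
        seq = np.asarray(seq, dtype=float)
        if rng.random() < mix:
            # `universal_delta` is assumed to be at least as long as the longest sequence.
            seq = np.sort(np.clip(seq + universal_delta[: len(seq)], 0, None))
        train_step(seq)   # assumed to update the deep point process on one sequence
```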
Finally, in Chapter 6, we discuss the implications of the research and future research directions.
|
128 |
Analysis of Artifact Formation and Removal in GAN Training. Hackney, Daniel, 05 June 2023
No description available.
|
129 |
Improving Unreal Engine Imagery using Generative Adversarial Networks / Förbättring av Unreal Engine-renderingar med hjälp av Generativa Motståndarnätverk. Jareman, Erik; Knast, Ludvig, January 2023
Game engines such as Unreal Engine 5 are widely used to create photo-realistic renderings. To run these renderings at high quality without experiencing any performance issues, high-performance hardware is often required. In situations where the hardware is lacking, users may be forced to lower the quality and resolution of renderings to maintain good performance. While this may be acceptable in some situations, it limits the benefit that a powerful tool like Unreal Engine 5 can provide. This thesis aims to explore the possibility of using a Real-ESRGAN, fine-tuned on a custom data set, to increase both the resolution and quality of screenshots taken in Unreal Engine 5. By doing this, users can lower the resolution and quality of their Unreal Engine 5 rendering while still being able to generate high quality screenshots similar to those produced when running the rendering at higher resolution and higher quality.
To accomplish this, a custom data set was created by randomizing camera positions and capturing screenshots in an Unreal Engine 5 rendering. This data set was used to fine-tune a pre-trained Real-ESRGAN model. The fine-tuned model could then generate images from low resolution and low quality screenshots taken in Unreal Engine 5. The resulting images were analyzed and evaluated using both quantitative and qualitative methods.
The conclusions drawn from this thesis indicate that images generated using the fine-tuned weights are of high quality. This conclusion is supported by quantitative measurements, demonstrating that the generated images and the ground truth images are similar. Furthermore, visual inspection conducted by the authors confirms that the generated images are similar to the reference images, despite occasional artifacts.
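As a concrete example of the kind of quantitative comparison mentioned above, the sketch below computes PSNR between an upscaled screenshot and its ground-truth counterpart; the image arrays are hypothetical placeholders, and PSNR is only one of several similarity metrics the thesis could have used.

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak signal-to-noise ratio between two same-shaped images (higher = more similar)."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(max_val ** 2 / mse)

# Hypothetical 8-bit RGB images standing in for the ground truth and the GAN output.
ground_truth = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
noise = np.random.randint(-5, 6, ground_truth.shape)
upscaled = np.clip(ground_truth.astype(int) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ground_truth, upscaled):.2f} dB")
```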
|
130 |
Musikgenerering med Generativa motståndsnätverk / Music Generation with Generative Adversarial Networks. Li, Yupeng; Linberg, Jonatan, January 2023
At present, state-of-the-art deep learning music generation systems require a lot of time and hardware resources to develop. This means that they are almost exclusively available to large companies. In order to reduce these requirements, more efficient techniques and methods need to be utilised. This project aims to investigate various approaches by developing a music generation system using generative adversarial networks, comparing different techniques and their effect on the system's performance. Our results show the difficulties in generating music in a more resource-constrained environment. We find that structuring the input space with conditional model constraints improves the system's ability to conform to musical standards. The results also indicate the importance of a patch-based discriminator for evaluating the texture of the generated music. Finally, we propose a similarity loss as a way of reducing mode collapse in the generator, thus stabilising the training process.
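To illustrate the similarity-loss idea mentioned above, the sketch below penalizes high pairwise cosine similarity within a batch of generated samples, encouraging diversity; this is a hypothetical formulation for illustration, not the thesis's exact loss.

```python
import numpy as np

def similarity_loss(batch):
    """Mean pairwise cosine similarity of flattened generated samples.
    Adding this term to the generator loss discourages the generator from
    collapsing to near-identical outputs (mode collapse)."""
    flat = batch.reshape(len(batch), -1)
    unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T                          # pairwise cosine similarities
    off_diag = sim[~np.eye(len(batch), dtype=bool)]  # drop self-similarity on the diagonal
    return off_diag.mean()

fake_batch = np.random.randn(8, 4, 128)          # hypothetical batch of generated piano rolls
print(similarity_loss(fake_batch))               # near 0 for diverse samples, near 1 when collapsed
```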
|