In this thesis, we address two major problems in Generative Adversarial Networks (GANs), an important subfield of deep learning. The first is the instability of the training process that arises in many real-world problems; the second is the lack of a good evaluation metric for the performance of GAN algorithms. To understand and address the first problem, we develop three approaches: we introduce randomness into the training process, we investigate various normalization methods, and, most importantly, we develop a better parameter initialization strategy to help stabilize training. In the randomness part of the thesis, we develop two techniques, namely the addition of gradient noise and the batch-wise random flipping of the outputs of the discriminator section of a GAN. In the normalization part of the thesis, we compare the performance of the z-score transform, min-max normalization, affine transformations, and batch normalization. In the most novel and important part of this thesis, we develop techniques to initialize the generator section of a GAN with parameters that produce a uniform distribution on the range of the training data. As far as we are aware, this seemingly simple idea has not yet appeared in the extant literature, and the empirical results we obtain on 2-dimensional synthetic data show marked improvement. As for better evaluation metrics, we demonstrate a simple yet effective way to evaluate the generator using a novel "overlap loss".
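The abstract summarizes several concrete techniques; the hedged sketches below illustrate two of them. First, the randomness techniques: a minimal PyTorch sketch of a discriminator update that adds Gaussian noise to the gradients and randomly flips a fraction of each batch. The abstract's "flipping of the results from the discriminator section" is read here as label flipping; `noise_std`, `flip_prob`, and the `latent_dim` attribute on the generator are illustrative assumptions, not details from the thesis.

```python
import torch

def discriminator_step(D, G, real_batch, opt_D,
                       noise_std=0.01, flip_prob=0.05):
    """One discriminator update with the two randomness techniques:
    additive gradient noise and batch-wise random label flipping.
    D is assumed to return logits of shape (N, 1)."""
    opt_D.zero_grad()

    # Generate a fake batch; G.latent_dim is an assumed attribute.
    z = torch.randn(real_batch.size(0), G.latent_dim)
    fake_batch = G(z).detach()

    # Standard real/fake labels ...
    labels_real = torch.ones(real_batch.size(0), 1)
    labels_fake = torch.zeros(real_batch.size(0), 1)

    # ... with a random subset of each batch flipped.
    labels_real[torch.rand_like(labels_real) < flip_prob] = 0.0
    labels_fake[torch.rand_like(labels_fake) < flip_prob] = 1.0

    criterion = torch.nn.BCEWithLogitsLoss()
    loss = (criterion(D(real_batch), labels_real)
            + criterion(D(fake_batch), labels_fake))
    loss.backward()

    # Additive Gaussian noise on the gradients before the update.
    for p in D.parameters():
        if p.grad is not None:
            p.grad.add_(noise_std * torch.randn_like(p.grad))

    opt_D.step()
    return loss.item()
```

Second, the initialization strategy. One way to realize "parameters that produce a uniform distribution on the range of the training data" is to pretrain the generator, before adversarial training begins, to reproduce the affine map from uniform latent noise onto the bounding box of the data. The sketch below assumes 2-dimensional data with a matching latent dimension, as in the thesis's synthetic experiments; the thesis's exact procedure may differ.

```python
import torch

def init_generator_uniform(G, data, latent_dim=2, steps=2000, lr=1e-3):
    """Pretrain G so that, fed uniform latent noise, its outputs are
    (approximately) uniform over the bounding box of `data`."""
    data_min = data.min(dim=0).values
    data_max = data.max(dim=0).values
    opt = torch.optim.Adam(G.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    for _ in range(steps):
        z = torch.rand(256, latent_dim)                 # z ~ U(0, 1)^d
        target = data_min + z * (data_max - data_min)   # uniform on the range
        opt.zero_grad()
        loss = mse(G(z), target)
        loss.backward()
        opt.step()
    return G
```

After such pretraining, the generator already covers the support of the data, so early adversarial updates refine the distribution rather than having to discover its range from scratch.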
Identifier | oai:union.ndltd.org:wpi.edu/oai:digitalcommons.wpi.edu:etd-theses-2308
Date | 26 April 2018
Creators | Zou, Xiaozhou
Contributors | Randy C. Paffenroth (Advisor), Xiangnan Kong (Reader)
Publisher | Digital WPI
Source Sets | Worcester Polytechnic Institute
Detected Language | English
Type | text
Format | application/pdf
Source | Masters Theses (All Theses, All Years)