
Improve the Convergence Speed and Stability of Generative Adversarial Networks

Zou, Xiaozhou 26 April 2018
In this thesis, we address two major problems in Generative Adversarial Networks (GANs), an important sub-field of deep learning: the instability of the training process that arises in many real-world problems, and the lack of a good metric for evaluating the performance of GAN algorithms. To understand and address the first problem, we develop three approaches: we introduce randomness into the training process; we investigate various normalization methods; and, most importantly, we develop a better parameter initialization strategy to help stabilize training. In the randomness part of the thesis, we develop two techniques, namely the addition of gradient noise and the batch-wise random flipping of the outputs of the GAN's discriminator. In the normalization part, we compare the performance of the z-score transform, min-max normalization, affine transformations, and batch normalization. In the most novel and important part of the thesis, we develop techniques to initialize the GAN generator with parameters that produce a uniform distribution on the range of the training data. As far as we are aware, this seemingly simple idea has not yet appeared in the literature, and the empirical results we obtain on 2-dimensional synthetic data show marked improvement. As to better evaluation metrics, we demonstrate a simple yet effective way to evaluate the generator using a novel "overlap loss".
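The two randomness techniques lend themselves to a short sketch. Below is a minimal PyTorch illustration of gradient-noise injection and random flipping of the discriminator's real/fake targets. The hyperparameter names and values (`noise_std`, `flip_prob`) are illustrative assumptions, not figures from the thesis, and "batch random flipping of the discriminator's results" is read here as flipping a random subset of the 0/1 targets in each batch.

```python
import torch

# Illustrative hyperparameters -- not values from the thesis.
noise_std = 0.01   # std of the Gaussian noise added to each gradient
flip_prob = 0.05   # per-sample probability of flipping a 0/1 target

def add_gradient_noise(model, std=noise_std):
    """Add zero-mean Gaussian noise to every parameter gradient;
    call between loss.backward() and optimizer.step()."""
    for p in model.parameters():
        if p.grad is not None:
            p.grad.add_(torch.randn_like(p.grad) * std)

def random_label_flip(targets, prob=flip_prob):
    """Flip a random subset of the discriminator's real/fake (1/0)
    targets for the current batch."""
    mask = torch.rand_like(targets) < prob
    return torch.where(mask, 1.0 - targets, targets)
```

In a standard GAN training loop, `random_label_flip` would be applied to the target vector before computing the discriminator's binary cross-entropy loss, and `add_gradient_noise` would be called after each `backward()` and before the optimizer step.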
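The initialization idea can likewise be made concrete. The sketch below reconstructs the stated goal, a generator whose initial outputs are uniform on the range of the training data, for the 2-dimensional synthetic setting the thesis evaluates on. The architecture, the sizes, and the zero-then-overwrite initialization are assumptions for illustration, not the thesis's actual method.

```python
import torch
import torch.nn as nn

# Toy 2-D setup mirroring the thesis's 2-dimensional synthetic data.
z_dim, hidden, out_dim = 2, 16, 2
gen = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, out_dim))

def init_uniform_on_data_box(gen, data):
    """Initialize gen so that, for z ~ U(0,1)^z_dim, its initial output
    is z rescaled onto the training data's bounding box -- i.e. uniform
    over that box. Relies on ReLU(z) = z for z >= 0, so the first two
    hidden units pass the latent through the nonlinearity unchanged."""
    lo, hi = data.min(0).values, data.max(0).values
    first, last = gen[0], gen[-1]
    with torch.no_grad():
        first.weight.zero_(); first.bias.zero_()
        last.weight.zero_()
        for i in range(out_dim):
            first.weight[i, i] = 1.0            # route z_i to hidden unit i
            last.weight[i, i] = hi[i] - lo[i]   # scale to the data's range
        last.bias.copy_(lo)                     # shift to the data's minimum

data = torch.rand(1000, 2) * 4.0 - 2.0           # stand-in training set
init_uniform_on_data_box(gen, data)
samples = gen(torch.rand(512, z_dim))            # uniform over the data box
```

Zeroing the unused weights gives an exactly uniform start but leaves those hidden units without gradient signal; replacing the zeros with small random values trades exact uniformity for trainability, which is likely closer in spirit to what a practical version of the technique would do.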
