Instability analysis for generative adversarial networks and its solving techniques
journal contribution
Posted on 2021-01-01. Authored by H Tan, L Zhou, G Wang, Zili Zhang.
Training instability in generative adversarial networks (GANs) remains one of the most challenging problems, for which both the theoretical root cause and an effective solution are needed. In this study, we show theoretically that the mutual contradiction between training the optimal discriminator and minimizing the generator loss leads to training instability in GANs. To address this problem, we propose a targeted gradient penalty technique. Unlike other penalty techniques, ours directly penalizes the Lipschitz constant of the discriminator; controlling this constant is the key to resolving the instability problem. We performed a series of experimental comparisons from three perspectives: the oscillation amplitude of the loss function (convergence), the general variation trend of the gradient, and the overall performance of the network. The results demonstrate that the proposed technique significantly mitigates training instability in GANs.
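The abstract does not spell out the exact form of the proposed penalty, but the general idea of a gradient penalty that controls the discriminator's Lipschitz constant can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the toy discriminator, the central-difference gradient estimate, and the interpolation scheme are all assumptions chosen for self-containedness (a real GAN would use automatic differentiation).

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 1))  # hypothetical weights of a toy discriminator

def discriminator(x):
    """Toy scalar-output discriminator D: R^4 -> R (illustrative only)."""
    return np.tanh(x @ W).squeeze(-1)

def numerical_grad(f, x, h=1e-5):
    """Central-difference estimate of grad_x f at each row of x."""
    g = np.zeros_like(x)
    for j in range(x.shape[1]):
        e = np.zeros(x.shape[1])
        e[j] = h
        g[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def gradient_penalty(x_real, x_fake, target=1.0):
    """Penalize the deviation of ||grad_x D|| from `target` on random
    interpolates between real and fake samples -- one standard way to
    bound the discriminator's Lipschitz constant."""
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake  # random interpolates
    grads = numerical_grad(discriminator, x_hat)
    norms = np.linalg.norm(grads, axis=1)
    return float(np.mean((norms - target) ** 2))

x_real = rng.normal(size=(8, 4))
x_fake = rng.normal(size=(8, 4))
gp = gradient_penalty(x_real, x_fake)  # non-negative scalar penalty term
```

In practice, a term like `gp` (weighted by a penalty coefficient) would be added to the discriminator loss at each training step, so that gradients with norm far from the target are discouraged and the discriminator's Lipschitz constant stays bounded.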