Improving generalization and stability of generative adversarial networks
Conference contribution
posted on 2019-01-01, 00:00, authored by H Thanh-Tung, Svetha Venkatesh, Truyen Tran

© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved.

Generative Adversarial Networks (GANs) are among the most popular tools for learning complex high-dimensional distributions. However, the generalization properties of GANs are not well understood. In this paper, we analyze the generalization of GANs in practical settings. We show that discriminators trained on discrete datasets with the original GAN loss generalize poorly and do not approximate the theoretically optimal discriminator. We propose a zero-centered gradient penalty that improves the generalization of the discriminator by pushing it toward the optimal discriminator. The penalty guarantees the generalization and convergence of GANs. Experiments on synthetic and large-scale datasets verify our theoretical analysis.
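The abstract describes a zero-centered gradient penalty on the discriminator. The following PyTorch snippet is a minimal sketch of that idea: it penalizes the squared gradient norm of the discriminator, pushing it toward zero (in contrast to the one-centered penalty of WGAN-GP). The function name, the interpolation between real and fake samples, and the penalty weight are illustrative assumptions, not the authors' exact implementation.

    import torch

    def zero_centered_gradient_penalty(discriminator, real, fake, weight=10.0):
        # Sample random interpolation points between real and fake batches.
        # (Interpolation scheme and weight are illustrative assumptions.)
        alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
        interp = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
        scores = discriminator(interp)
        # Gradient of the discriminator output w.r.t. the interpolated inputs.
        grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
        # Zero-centered: penalize ||grad||^2 directly rather than (||grad|| - 1)^2.
        return weight * grads.flatten(1).pow(2).sum(dim=1).mean()

In a typical training loop, the penalty would be added to the usual discriminator loss, e.g. d_loss = gan_loss + zero_centered_gradient_penalty(D, x_real, x_fake.detach()), so that the discriminator is driven toward a smooth decision function that generalizes beyond the training samples.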
History
Location: New Orleans, Louisiana
Start date: 2019-05-06
End date: 2019-05-09
Language: eng
Publication classification: E1 Full written paper - refereed
Copyright notice: 2019, 7th International Conference on Learning Representations, ICLR 2019
Title of proceedings: ICLR 2019: Proceedings of the 7th International Conference on Learning Representations
Event: Learning Representations. International Conference (7th : 2019 : New Orleans, Louisiana)
Publisher: ICLR
Place of publication: [New Orleans, Louisiana]
Publication URL: