Deakin University

File(s) under permanent embargo

Improving generalization and stability of generative adversarial networks

conference contribution
posted on 2019-01-01, 00:00 authored by H Thanh-Tung, Svetha Venkatesh, Truyen Tran
© 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. Generative Adversarial Networks (GANs) are one of the most popular tools for learning complex high-dimensional distributions. However, the generalization properties of GANs have not been well understood. In this paper, we analyze the generalization of GANs in practical settings. We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator. The penalty guarantees the generalization and convergence of GANs. Experiments on synthetic and large-scale datasets verify our theoretical analysis.
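The zero-centered gradient penalty described in the abstract penalizes the squared norm of the discriminator's input gradient, pushing it toward zero. A minimal illustrative sketch is below; it assumes a toy logistic discriminator D(x) = sigmoid(w·x + b) so the input gradient has a closed form, and evaluates the penalty on real/fake interpolations. All names and the interpolation choice here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a zero-centered gradient penalty (0-GP).
# Assumes a toy logistic discriminator so the input gradient is analytic;
# a real GAN would compute this gradient via autograd.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def zero_centered_gp(w, b, real, fake, rng):
    """Return E[ ||grad_x D(x_hat)||^2 ] over interpolated samples x_hat.

    'Zero-centered' means the target gradient norm is 0, unlike the
    one-centered penalty used in WGAN-GP.
    """
    alpha = rng.uniform(size=(real.shape[0], 1))
    x_hat = alpha * real + (1.0 - alpha) * fake     # real/fake interpolations
    # For D(x) = sigmoid(w.x + b): grad_x D = sigmoid'(w.x + b) * w
    s = sigmoid(x_hat @ w + b)
    grad = (s * (1.0 - s))[:, None] * w             # per-sample input gradient
    return np.mean(np.sum(grad ** 2, axis=1))       # penalize norm toward zero

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(8, 2))            # toy "real" batch
fake = rng.normal(3.0, 1.0, size=(8, 2))            # toy "generated" batch
w, b = np.array([1.0, -1.0]), 0.0
gp = zero_centered_gp(w, b, real, fake, rng)
# gp is non-negative; in training it is added to the discriminator
# loss with a weight lambda.
```

In practice the penalty term is weighted and added to the discriminator loss each step, so minimizing the loss also flattens the discriminator's gradient field between the real and generated distributions.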

History

Event

Learning Representations. International Conference (7th : 2019 : New Orleans, Louisiana)

Publisher

ICLR

Location

New Orleans, Louisiana

Place of publication

[New Orleans, Louisiana]

Start date

2019-05-06

End date

2019-05-09

Language

eng

Publication classification

E1 Full written paper - refereed

Copyright notice

2019, 7th International Conference on Learning Representations, ICLR 2019

Title of proceedings

ICLR 2019: Proceedings of the 7th International Conference on Learning Representations
