Deakin University
File(s) under permanent embargo

Instability analysis for generative adversarial networks and its solving techniques

journal contribution
posted on 2021-01-01, 00:00 authored by H Tan, L Zhou, G Wang, Zili Zhang
Training instability in generative adversarial networks (GANs) remains one of the most challenging problems, for which both the theoretical root cause and an effective solution are needed. In this study, we show theoretically that the contradiction between training the discriminator to optimality and minimizing the generator loss leads to training instability in GANs. To address this problem, we propose a targeted gradient penalty technique. Unlike other penalty techniques, ours penalizes the Lipschitz constant of the discriminator directly, which is the key to resolving the instability. We performed a series of experimental comparisons from three perspectives: the oscillation amplitude of the loss function (convergence), the overall trend of the gradient, and the holistic performance of the network. The results demonstrate that the proposed technique significantly mitigates training instability in GANs.
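The abstract's central idea, controlling the Lipschitz constant of the discriminator through a gradient penalty, can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's implementation: it uses a hypothetical linear discriminator `D` and a standard WGAN-GP-style penalty, estimating the gradient norm at points interpolated between real and fake samples with finite differences and penalizing its deviation from 1.

```python
import numpy as np

# Hypothetical linear discriminator D(x) = w . x (for illustration only).
# Its gradient is w everywhere, so its Lipschitz constant is ||w|| = 5.
rng = np.random.default_rng(0)
w = np.array([3.0, 4.0])

def D(x):
    return x @ w

def gradient_penalty(real, fake, eps=1e-4):
    # Interpolate between real and fake samples (as in WGAN-GP).
    alpha = rng.uniform(size=(real.shape[0], 1))
    x = alpha * real + (1 - alpha) * fake
    # Central finite-difference estimate of grad_x D at each interpolate.
    grads = np.stack(
        [(D(x + eps * e) - D(x - eps * e)) / (2 * eps) for e in np.eye(x.shape[1])],
        axis=1,
    )
    norms = np.linalg.norm(grads, axis=1)
    # Penalize the local Lipschitz constant's deviation from 1.
    return np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(8, 2))
fake = rng.normal(size=(8, 2))
gp = gradient_penalty(real, fake)
# For this linear D, grad D = w everywhere, so gp is (5 - 1)^2 = 16.
```

In practice the penalty term would be added to the discriminator loss with a weighting coefficient, and the gradient would come from automatic differentiation rather than finite differences; the sketch only shows the quantity being penalized.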

History

Journal

Scientia Sinica Informationis

Volume

51

Issue

4

Pagination

602 - 617

Publisher

Science China Press

Location

Beijing, China

ISSN

1674-7267

eISSN

2095-9486

Language

Mandarin

Publication classification

C1 Refereed article in a scholarly journal
