Abstract
Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE-loss GANs and f-GANs, which minimize f-divergences. We also show that all symmetric f-divergences are equivalent in convergence. In the finite-sample and finite-model-capacity setting, we define and obtain bounds on estimation and generalization errors. We specialize these results to α-GANs, defined using α-loss, a tunable CPE loss family parameterized by α ∈ (0, ∞]. We next introduce a class of dual-objective GANs that addresses GAN training instabilities by modeling each player's objective using α-loss, yielding (α_D, α_G)-GANs. We show that the resulting non-zero-sum game simplifies to minimizing an f-divergence under appropriate conditions on (α_D, α_G). Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error. Finally, we highlight the value of tuning (α_D, α_G) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring as well as the large, publicly available Celeb-A and LSUN Classroom image datasets.
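For readers unfamiliar with α-loss, the following LaTeX sketch gives one standard form of the loss and the value function it induces. The notation (P_r, G_θ, D_ω) and the exact expressions are assumptions drawn from the broader α-loss literature, not text quoted from this article.

```latex
% Sketch of the tunable alpha-loss and the induced GAN value function.
% Notation (P_r, G_theta, D_omega) is an assumption, not quoted from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For $\alpha \in (0,\infty]$, the $\alpha$-loss of a soft prediction
$\hat{y} \in [0,1]$ assigned to the true label is
\[
  \ell_\alpha(\hat{y}) \;=\; \frac{\alpha}{\alpha-1}
  \left(1 - \hat{y}^{\frac{\alpha-1}{\alpha}}\right),
\]
recovering log-loss $-\log \hat{y}$ as $\alpha \to 1$ and the soft
$0$--$1$ loss $1-\hat{y}$ at $\alpha = \infty$. Writing $P_r$ for the
data distribution, $G_\theta$ for the generator, and $D_\omega$ for the
discriminator, the corresponding value function is
\[
  V_\alpha(\theta,\omega) \;=\;
  \mathbb{E}_{X \sim P_r}\!\left[-\ell_\alpha\big(D_\omega(X)\big)\right]
  + \mathbb{E}_{X \sim P_{G_\theta}}\!\left[-\ell_\alpha\big(1 - D_\omega(X)\big)\right].
\]
In the dual-objective $(\alpha_D,\alpha_G)$-GAN, the discriminator
maximizes $V_{\alpha_D}$ while the generator minimizes $V_{\alpha_G}$;
$\alpha_D = \alpha_G = 1$ recovers the vanilla GAN objective.
\end{document}
```

Intuitively, α tunes how strongly the loss penalizes confident mistakes, which is the lever the abstract credits with alleviating training instabilities when (α_D, α_G) is chosen appropriately.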
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 534-553 |
| Number of pages | 20 |
| Journal | IEEE Journal on Selected Areas in Information Theory |
| Volume | 5 |
| DOIs | |
| State | Published - 2024 |
Keywords
- CPE loss formulation
- Generative adversarial networks
- dual objectives
- estimation error
- training instabilities
ASJC Scopus subject areas
- Computer Networks and Communications
- Media Technology
- Artificial Intelligence
- Applied Mathematics