
Unanswered GANs: Key Questions in Generative Adversarial Networks

By some metrics, research on Generative Adversarial Networks (GANs) has progressed substantially in the past two years. Practical improvements to image synthesis models are being made almost too quickly to keep up with (Odena et al., 2016; Miyato et al., 2017; Zhang et al., 2018; Brock et al., 2018).

However, by other metrics, less has happened. For instance, there is still widespread disagreement about how GANs should be evaluated.

What are the Trade-Offs Between GANs and other Generative Models?

In addition to GANs, two other types of generative model are currently popular: Flow Models and Autoregressive Models. Roughly speaking, Flow Models apply a stack of invertible transformations to a sample from a prior so that exact log-likelihoods of observations can be computed. On the other hand, Autoregressive Models factorize the distribution over observations into conditional distributions and process one component of the observation at a time.
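
To make the contrast concrete, here is a minimal sketch (a hypothetical one-layer affine flow, not any model from the literature) of how a flow model yields exact log-likelihoods via the change-of-variables formula:

```python
import numpy as np

# Hypothetical 1-layer affine flow: x = scale * z + shift, with z ~ N(0, 1).
# Because the transform is invertible, the exact log-likelihood of an
# observation x follows from the change-of-variables formula:
#   log p(x) = log p_z(f_inv(x)) + log |d f_inv / dx|
scale, shift = 2.0, 1.0

def flow_log_likelihood(x):
    z = (x - shift) / scale                     # invert the transform
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))  # standard normal log-density
    log_det = -np.log(abs(scale))               # log |dz/dx|
    return log_pz + log_det

# An autoregressive model would instead factorize log p(x) into a sum of
# conditionals, log p(x_1) + log p(x_2 | x_1) + ..., evaluated one
# component of the observation at a time.
print(flow_log_likelihood(1.0))  # exact log-density at x = 1.0
```

A GAN generator, by contrast, gives fast parallel sampling but no such exact density.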

We think that accurately characterizing these trade-offs and deciding whether they are intrinsic to the model families is an interesting open question.

Model                   Parallel   Efficient   Reversible
GANs                    Yes        Yes         No
Flow Models             Yes        No          Yes
Autoregressive Models   No         Yes         Yes

Problem 1

What are the fundamental trade-offs between GANs and other generative models?

In particular, can we make some sort of CAP Theorem type statement about reversibility, parallelism, and parameter/time efficiency?

How does GAN Training Scale with Batch Size?

Large minibatches have helped to scale up image classification; can they also help us scale up GANs?

At first glance, it seems like the answer should be yes; after all, the discriminator in most GANs is just an image classifier. Larger batches can accelerate training if it is bottlenecked on gradient noise. However, GANs have a separate bottleneck that classifiers don’t: the training procedure can diverge.
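
As a toy illustration of the gradient-noise bottleneck (a simple quadratic loss on synthetic data, not a GAN), one can check that the variance of a minibatch gradient estimate falls roughly as 1/batch_size, which is why large batches help when gradient noise dominates:

```python
import numpy as np

# Toy setup: loss (theta - x)^2 / 2 per example, so the per-example
# gradient is theta - x. The minibatch gradient is a sample mean, and
# its variance shrinks like 1/batch_size.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=100_000)
theta = 0.0

def grad_variance(batch_size, trials=2000):
    grads = [np.mean(theta - rng.choice(data, size=batch_size))
             for _ in range(trials)]
    return np.var(grads)

v_small = grad_variance(8)
v_large = grad_variance(512)
print(v_small / v_large)  # roughly 512 / 8 = 64, up to sampling noise
```

For a GAN, even if larger batches remove this noise, the divergence bottleneck mentioned above may still limit how far batch size helps.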

Problem 6

How does GAN training scale with batch size?

How big a role does gradient noise play in GAN training?

Can GAN training be modified so that it scales better with batch size?

What is the Relationship Between GANs and Adversarial Examples?

It’s well known that image classifiers suffer from adversarial examples: human-imperceptible perturbations that, when added to images, cause classifiers to give the wrong output.

Since the GAN discriminator is an image classifier, one might worry about it suffering from adversarial examples. Despite the large bodies of literature on GANs and adversarial examples, there doesn’t seem to be much work on how they relate.
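
As a sketch of the concern (a toy logistic model standing in for the discriminator, attacked with an FGSM-style perturbation; none of this is drawn from a specific paper), a small norm-bounded perturbation can flip the model's verdict:

```python
import numpy as np

# Hypothetical toy "discriminator": D(x) = sigmoid(w . x), scoring
# whether an input looks real. An FGSM-style attack moves x by eps in
# the direction that most decreases D's score under an L-inf budget.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
x = w / np.linalg.norm(w)  # a point the discriminator rates as "real"

def D(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

# The gradient of D's logit w.r.t. x is just w; FGSM uses its sign.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(D(x))      # high score: rated "real"
print(D(x_adv))  # much lower score after a small L-inf perturbation
```

If an actual GAN discriminator behaves similarly, the generator could in principle exploit such perturbations rather than improve its samples, which is one reason the interaction deserves study.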

Problem 7

How does the adversarial robustness of the discriminator affect GAN training?

Conclusion

GANs have made substantial progress in the past two years, but there is still widespread disagreement about how they should be evaluated. We have identified several open problems: the trade-offs between GANs and other generative models, how GAN training scales with batch size, and the relationship between GANs and adversarial examples.

Frequently Asked Questions

Q: What is the fundamental trade-off between GANs and other generative models?

A: The precise trade-offs are an open question, but they appear to involve reversibility, parallel sampling, and parameter/time efficiency, with each model family giving up one of these properties.

Q: How does GAN training scale with batch size?

A: Larger batches can accelerate GAN training when it is bottlenecked on gradient noise, but GANs have a bottleneck that classifiers don’t: the training procedure can diverge. How far GAN training scales with batch size remains an open question.

Q: What is the relationship between GANs and adversarial examples?

A: Since the GAN discriminator is an image classifier, it may suffer from adversarial examples, but there is little work on how the two topics relate.
