
Boundary Seeking GAN

Boundary Seeking GAN (BGAN) is a recently introduced modification of GAN training. In this post, we will look at the intuition behind BGAN, and also at its implementation, which differs from vanilla GAN by just one line of code.

Intuition of Boundary Seeking GAN

Recall that in GAN training the following objective is optimized:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Following the objective above, as shown in the original GAN paper [2], the optimal discriminator is given by:

$$D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}$$

Hence, if we know $D^*(x)$, the optimal discriminator with respect to our generator, we are good to go, as rearranging the above equation gives us the following quantity:

$$p_{\text{data}}(x) = p_g(x) \, \frac{D^*(x)}{1 - D^*(x)}$$

What this tells us is that even if we have a non-optimal generator $G$, we can still recover the true data distribution by weighting $p_g(x)$, the generator's distribution, with the ratio $\frac{D^*(x)}{1 - D^*(x)}$ given by the optimal discriminator for that generator.
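As a quick sanity check of this identity (a toy example of my own, not from the paper), take two arbitrary distributions over two points, form the optimal discriminator, and reweight:

# Numerical check of p_data(x) = p_g(x) * D*(x) / (1 - D*(x)).
# The two toy distributions below are arbitrary illustrative choices.
p_data = {0: 0.8, 1: 0.2}  # "true" data distribution over two points
p_g = {0: 0.3, 1: 0.7}     # some non-optimal generator distribution

for x in p_data:
    d_star = p_data[x] / (p_data[x] + p_g[x])    # optimal discriminator D*(x)
    recovered = p_g[x] * d_star / (1 - d_star)   # reweighted generator density
    print(x, round(recovered, 6), p_data[x])     # recovered matches p_data exactly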

Unfortunately, the perfect discriminator $D^*(x)$ is hard to get. But we can work with its approximation $D(x)$ instead. The assumption is that the more we train $D$, the closer it becomes to $D^*(x)$, and the better our GAN training becomes.

If we think further about the above equation, we see that we get $p_g(x) = p_{\text{data}}(x)$, i.e. our generator is optimal, if the ratio $\frac{D^*(x)}{1 - D^*(x)}$ is equal to one. If that ratio is equal to one, then consequently $D^*(x)$ must be equal to $\frac{1}{2}$. Therefore, the optimal generator is the one that can make the discriminator output $\frac{1}{2}$ everywhere. Notice that $D(x) = \frac{1}{2}$ is the decision boundary. Hence, we want to generate $x$ such that $D(x)$ is near the decision boundary, which is why the authors of the paper named this method Boundary Seeking GAN (BGAN).

That statement has a very intuitive explanation. If we consider the generator to be perfect, $D$ can't distinguish real from fake data. In other words, real and fake data are equally likely as far as $D$ is concerned. As $D$ has two outcomes (real or fake), each of those outcomes then has probability $\frac{1}{2}$.

Now, we can modify the generator's objective in order to make the discriminator output $\frac{1}{2}$ for every data point we generate. One way to do it is to minimize the distance between $D(x)$ and $1 - D(x)$ for all generated $x$. If we do so, since $D(x)$ is a probability measure, we get the minimum at $D(x) = \frac{1}{2}$, which is what we want.

Therefore, the new objective for the generator is:

$$\min_G \, \frac{1}{2} \, \mathbb{E}_{z \sim p_z(z)}\!\left[ \big( \log D(G(z)) - \log(1 - D(G(z))) \big)^2 \right]$$

which is just an $L_2$ loss. We added the $\log$ because $D(x)$ is a probability measure, and we want to undo that, as we are talking about a distance, not a divergence.
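To verify that this objective really does seek the boundary, note that the squared term is non-negative and vanishes exactly when the two logarithms agree:

$$\log D(x) = \log(1 - D(x)) \iff D(x) = 1 - D(x) \iff D(x) = \frac{1}{2}$$

so the generator's loss attains its global minimum of zero precisely when the discriminator sits at the decision boundary.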

Implementation

This should be the shortest ever implementation note in my blog.

We just need to change the original GAN’s objective from:

G_loss = -torch.mean(torch.log(D_fake))

to:

G_loss = 0.5 * torch.mean((torch.log(D_fake) - torch.log(1 - D_fake))**2)

And we’re done. For the full code, check out https://github.com/wiseodd/generative-models.
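To make the context of that line concrete, below is a minimal, self-contained training-loop sketch. The toy Gaussian data, network sizes, and optimizer settings are my own placeholder choices, not anything prescribed by the paper:

import torch
import torch.nn as nn

# Placeholder sizes and hyperparameters; not from the paper.
z_dim, x_dim, h_dim, batch_size = 16, 2, 64, 128
eps = 1e-8  # for numerical stability inside the logs

G = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))
D = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 1), nn.Sigmoid())

G_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
D_opt = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(1000):
    X = torch.randn(batch_size, x_dim) + 3.0  # toy "real" data: shifted Gaussian
    z = torch.randn(batch_size, z_dim)

    # Discriminator step: the usual GAN discriminator loss.
    D_real = D(X)
    D_fake = D(G(z).detach())
    D_loss = -torch.mean(torch.log(D_real + eps) + torch.log(1 - D_fake + eps))
    D_opt.zero_grad()
    D_loss.backward()
    D_opt.step()

    # Generator step: the BGAN objective -- the only line that differs
    # from vanilla GAN.
    D_fake = D(G(z))
    G_loss = 0.5 * torch.mean((torch.log(D_fake + eps) - torch.log(1 - D_fake + eps))**2)
    G_opt.zero_grad()
    G_loss.backward()
    G_opt.step()

In practice, working with logits and a log-sigmoid is more numerically stable than taking the log of a sigmoid output, but the version above matches the loss exactly as written in this post.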

Conclusion

In this post we looked at a new GAN variant called Boundary Seeking GAN (BGAN), covered the intuition behind it, and saw why it is called “boundary seeking”.

We also implemented BGAN in PyTorch with just a one-line code change.

References

  1. Hjelm, R. Devon, et al. “Boundary-Seeking Generative Adversarial Networks.” arXiv preprint arXiv:1702.08431 (2017). arxiv
  2. Goodfellow, Ian, et al. “Generative adversarial nets.” Advances in Neural Information Processing Systems. 2014. arxiv