Generative Adversarial Networks (GANs) are among the most exciting generative models of recent years. The idea behind them is to learn the generative distribution of data through a two-player minimax game, i.e. the objective is to find a Nash equilibrium. For more about the intuition and implementation of GAN, please see my previous posts about GAN and CGAN.
One natural extension of GAN is to learn a conditional generative distribution. The conditional could be anything, e.g. a class label or even another image.
However, we need to provide those conditionals manually, somewhat similar to supervised learning. InfoGAN therefore attempts to learn this conditional automatically, instead of telling GAN what it is.
InfoGAN intuition
Recall that in CGAN, the generator net has an additional input: $c$, i.e. $G(z, c)$, where $c$ is a conditional variable. During training, $G(z, c)$ learns the conditional distribution of data $P(X \vert c)$. Although in principle CGAN and InfoGAN learn the same distribution, $P(X \vert c)$, they differ in how they treat $c$.
In CGAN, $c$ is assumed to be semantically known, e.g. labels, so during training we have to supply it. In InfoGAN, we assume $c$ to be unknown, so what we do instead is put a prior on $c$ and infer it based on the data, i.e. we want to find the posterior $P(c \vert X)$.
As $c$ is inferred automatically in InfoGAN, it could end up encoding anything related to the distribution of the data, depending on the choice of prior. For example, although we cannot specify what $c$ should encode, we could hope that InfoGAN captures label information in it by assigning it a categorical prior. As another example, if we assign a Gaussian prior to $c$, InfoGAN might assign a continuous property to $c$, e.g. rotation angle.
So how does InfoGAN do that? This is where information theory comes into play.
In information theory, the knowledge we gain about one quantity by observing another is measured by mutual information. So, by maximizing mutual information, we find the quantity that contributes the most to our knowledge of the other. In our case, we want to maximize the knowledge about our conditional variable $c$ given the generated data $G(z, c)$.
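Concretely, the quantity we want to maximize is the mutual information between $c$ and the generated sample:

$$I(c; G(z, c)) = H(c) - H(c \vert G(z, c))$$

i.e. the reduction in uncertainty about $c$ once we observe $G(z, c)$. Maximizing it forces the generator to actually make use of $c$.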
The InfoGAN mutual information loss is formulated as follows:

$$L_I(G, Q) = \mathbb{E}_{c \sim P(c),\, x \sim G(z, c)} \left[ \log Q(c \vert x) \right] + H(c)$$

where $H(c)$ is the entropy of the prior $P(c)$, $G(z, c)$ is the generator net, and $Q(c \vert x)$ is a neural net that takes an image as input and produces the conditional $c$. $Q(c \vert x)$ is a variational distribution modeling the posterior $P(c \vert X)$, which we do not know and which, as in any Bayesian inference, is often hard to compute.
This mutual information term fits into the overall GAN loss as a regularizer:

$$\min_{G, Q} \max_{D} \, V_{InfoGAN}(D, G, Q) = V_{GAN}(D, G) - \lambda L_I(G, Q)$$

where $V_{GAN}(D, G)$ is the original GAN loss and $\lambda$ is the regularization weight.
InfoGAN training
During training, we provide a prior $P(c)$, which could be any distribution. In fact, we could add as many priors as we want, and InfoGAN might assign different properties to them. The authors of InfoGAN call this a "disentangled representation", as it breaks down the properties of the data into several conditional parameters.
The training process for the discriminator net $D(X)$ and the generator net $G(z, c)$ is quite similar to CGAN's, which could be read further here. The differences, however, are:
instead of $D(X, c)$, we use the discriminator as in vanilla GAN: $D(X)$, i.e. an unconditional discriminator,
instead of feeding observed data for $c$, e.g. labels, into $G(z, c)$, we sample $c$ from the prior $P(c)$.
In addition to $D(X)$ and $G(z, c)$, we also train $Q(c \vert x)$ so that we can compute the mutual information. What we do is sample $c \sim P(c)$, use it to sample $x \sim G(z, c)$, and finally pass $x$ to $Q(c \vert x)$. The result, along with the prior $P(c)$, is used to compute the mutual information, which is then backpropagated to both $G(z, c)$ and $Q(c \vert x)$ so that both networks are updated to maximize it.
InfoGAN implementation in TensorFlow
The implementations of vanilla and conditional GAN could be found here: GAN, CGAN. We will focus on the additional code for InfoGAN in this section.
We will implement InfoGAN for MNIST data, with $c$ categorically distributed, i.e. a one-hot vector with ten elements.
As seen in the loss function of InfoGAN, we need one additional network, $Q(c \vert x)$:
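A minimal sketch of such a network, assuming the usual `import tensorflow as tf`, the `xavier_init` helper, the input dimension `X_dim`, and the hidden size `h_dim` from the GAN/CGAN code:

```python
# Q(c|x): two-layer net with a softmax output over the 10 categories
Q_W1 = tf.Variable(xavier_init([X_dim, h_dim]))
Q_b1 = tf.Variable(tf.zeros(shape=[h_dim]))
Q_W2 = tf.Variable(xavier_init([h_dim, 10]))
Q_b2 = tf.Variable(tf.zeros(shape=[10]))
theta_Q = [Q_W1, Q_b1, Q_W2, Q_b2]

def Q(x):
    Q_h1 = tf.nn.relu(tf.matmul(x, Q_W1) + Q_b1)
    Q_prob = tf.nn.softmax(tf.matmul(Q_h1, Q_W2) + Q_b2)
    return Q_prob
```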
that is, we model $Q(c \vert x)$ as a two-layer net with a softmax on top. The choice of softmax is because $c$ is categorically distributed, and the softmax output can serve as its parameters. If we chose $c$ to be Gaussian, we could instead design the network so that its outputs are the mean and the variance.
Next, we specify our prior:
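One way to express it is as a sampling function, assuming `import numpy as np` and a minibatch size `m`:

```python
def sample_c(m):
    # Sample m one-hot vectors from Cat(K=10) with probability 0.1 per category
    return np.random.multinomial(1, 10 * [0.1], size=m)
```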
which is a categorical distribution, with equal probability for each of the ten elements.
As training $D(X)$ and $G(z, c)$ is no different than in vanilla GAN and CGAN, we will omit it from this section. To train $Q(c \vert x)$, as seen in the regularization term above, we first sample $c$ from $P(c)$ and use it to sample $x$ from $G(z, c)$:
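A sketch of that wiring, assuming the `generator(z, c)` function and the `Z` placeholder from the GAN/CGAN code:

```python
# Placeholder for c; filled with samples from the prior P(c) at runtime
c = tf.placeholder(tf.float32, shape=[None, 10])

# x ~ G(z, c), then pass the generated samples through Q to get Q(c|x)
G_sample = generator(Z, c)
Q_c_given_x = Q(G_sample)
```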
During runtime, we will populate $c$ with values from sample_c().
Having all the ingredients in hand, we can compute the mutual information term, which is the negative cross-entropy between the sampled $c$ and our variational distribution $Q(c \vert x)$, plus the entropy of the prior. Observe, however, that our prior is a fixed distribution, so its entropy is constant and can be left out.
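One way to write this, assuming the tensors defined above:

```python
# E_{c ~ P(c), x ~ G(z,c)}[ -log Q(c|x) ]; the constant H(c) is omitted
cross_ent = tf.reduce_mean(
    -tf.reduce_sum(tf.log(Q_c_given_x + 1e-8) * c, axis=1)
)
Q_loss = cross_ent
```

Minimizing this cross-entropy is equivalent to maximizing the variational lower bound $L_I(G, Q)$ above.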
Then, we optimize both $G(z, c)$ and $Q(c \vert x)$ based on that loss:
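For example, by minimizing `Q_loss` with respect to the parameters of both nets, assuming `theta_G` is the list of generator variables:

```python
# Updating both G and Q pushes the mutual information lower bound up
Q_solver = tf.train.AdamOptimizer().minimize(Q_loss, var_list=theta_G + theta_Q)
```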
We initialized the training as follows:
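A sketch of the initialization and one possible training loop, assuming the placeholders, losses, solvers, and helpers (`X`, `Z`, `D_loss`, `G_loss`, `D_solver`, `G_solver`, `sample_Z`, `Z_dim`, `mb_size`, `mnist`) from the GAN and CGAN code:

```python
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for it in range(1000000):
    X_mb, _ = mnist.train.next_batch(mb_size)
    Z_noise = sample_Z(mb_size, Z_dim)
    c_noise = sample_c(mb_size)

    # Discriminator and generator steps, as in vanilla GAN / CGAN
    _, D_loss_curr = sess.run([D_solver, D_loss],
                              feed_dict={X: X_mb, Z: Z_noise, c: c_noise})
    _, G_loss_curr = sess.run([G_solver, G_loss],
                              feed_dict={Z: Z_noise, c: c_noise})

    # Mutual information step: updates both G and Q
    _, Q_loss_curr = sess.run([Q_solver, Q_loss],
                              feed_dict={Z: Z_noise, c: c_noise})
```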
After training, we can check which property our prior encodes. In this experiment, $c$ nicely encodes the label information, i.e. if we pass c = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0], the generated samples all show the same digit.
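For instance, a hypothetical snippet for generating samples with that fixed code, reusing the names above:

```python
# Fix c to the one-hot vector [0, 0, 1, 0, ..., 0] for every sample in the batch
c_fixed = np.zeros([16, 10])
c_fixed[:, 2] = 1.

samples = sess.run(G_sample,
                   feed_dict={Z: sample_Z(16, Z_dim), c: c_fixed})
```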
Note that, naturally, there is no guarantee on the ordering of $c$.
We could try different values of $c$ and compare the generated samples.
We can see that our implementation of InfoGAN captures the conditional variable, which in this case is the label, in an unsupervised manner.
Conclusion
In this post we learned the intuition behind InfoGAN: a conditional GAN trained in an unsupervised manner.
We saw that InfoGAN learns to map the prior $P(c)$, together with the noise prior $P(z)$, into the data distribution by adding the maximization of the mutual information between $c$ and $G(z, c)$ to the GAN training. The rationale is that at maximum mutual information, those two can explain each other well, e.g. $c$ could explain why the generated samples are images of the same digit.
We also implemented InfoGAN in TensorFlow which, as we saw, is a simple modification of the original GAN and CGAN.
References
Chen, Xi, et al. "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets." Advances in Neural Information Processing Systems. 2016.