Tags: #machine learning

Natural Gradient Descent
Intuition and derivation of natural gradient descent.

Fisher Information Matrix
An introduction to, and intuition for, the Fisher Information Matrix.

Introduction to Annealed Importance Sampling
An introduction to, and implementation of, Annealed Importance Sampling (AIS).

Gibbs Sampler for LDA
Implementation of a Gibbs sampler for inference in Latent Dirichlet Allocation (LDA).

Boundary Seeking GAN
Training a GAN by moving the generated samples toward the decision boundary.

Least Squares GAN
2017 is the year the GAN lost its logarithm. First it was Wasserstein GAN, and now it's LSGAN's turn.

CoGAN: Learning joint distribution with GAN
The original GAN and the Conditional GAN learn the marginal and conditional distributions of the data, respectively. But how can we extend ...

Wasserstein GAN implementation in TensorFlow and Pytorch
Wasserstein GAN comes with the promise of stabilizing GAN training and eliminating the mode collapse problem.

InfoGAN: unsupervised conditional GAN in TensorFlow and Pytorch
Adding a mutual information regularizer to a GAN turns out to have a very nice effect: learning the data representation and its ...

Maximizing likelihood is equivalent to minimizing KL divergence
We will show that doing MLE is equivalent to minimizing the KL divergence between the estimator and the true distribution.