
Gibbs Sampler for LDA

Latent Dirichlet Allocation (LDA) [1] is a mixed membership model for topic modeling. Given a set of documents in bag-of-words representation, we want to infer the underlying topics those documents represent. To get a better intuition, we shall look at LDA's generative story. Note: the full code is available at https://github.com/wiseodd/mixture-models.

Given $i$ the document index, $v$ the word index, and $k$ the topic index, LDA assumes:

$$
\begin{aligned}
\pi_i &\sim \mathrm{Dir}(\alpha) \\
z_{iv} \mid \pi_i &\sim \mathrm{Mult}(\pi_i) \\
b_k &\sim \mathrm{Dir}(\gamma) \\
x_{iv} \mid z_{iv}, B &\sim \mathrm{Mult}(b_{z_{iv}})
\end{aligned}
$$

where $\alpha$ and $\gamma$ are the parameters of the Dirichlet priors. They tell us how narrow or spread out the document-topic and topic-word distributions are.

In words, the above generative process is:

  1. Assume each document is generated by first selecting its topics. Thus, we sample $\pi_i \sim \mathrm{Dir}(\alpha)$, the topic distribution for the $i$-th document.
  2. Assume each word in the $i$-th document comes from one of those topics. Therefore, we sample $z_{iv} \sim \mathrm{Mult}(\pi_i)$, the topic for the $v$-th word in document $i$.
  3. Assume each topic is composed of words, e.g. the topic "computer" consists of the words "cpu", "gpu", etc. Therefore, we sample $b_k \sim \mathrm{Dir}(\gamma)$, the distribution over words for a particular topic $k$.
  4. Finally, to actually generate a word, given that we already know it comes from topic $z_{iv}$, we sample it from the $z_{iv}$-th topic-word distribution: $x_{iv} \sim \mathrm{Mult}(b_{z_{iv}})$. A small code sketch of this process follows below.
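
To make the generative story concrete, here is a minimal standalone sketch of sampling a toy corpus from this process. The sizes (N_K, N_W, N_D, doc_len) are arbitrary illustrative choices, not the values used later in this post.

import numpy as np

# Illustrative sizes for a toy corpus (not the values used in the example below)
N_K = 2      # number of topics
N_W = 5      # vocabulary size
N_D = 3      # number of documents
doc_len = 4  # words per document

alpha, gamma = 1.0, 1.0

# b_k ~ Dir(gamma): one word distribution per topic
B = np.random.dirichlet(gamma * np.ones(N_W), size=N_K)

docs = []
for i in range(N_D):
    # pi_i ~ Dir(alpha): topic distribution of document i
    pi_i = np.random.dirichlet(alpha * np.ones(N_K))

    words = []
    for v in range(doc_len):
        z_iv = np.random.choice(N_K, p=pi_i)     # z_iv ~ Mult(pi_i)
        x_iv = np.random.choice(N_W, p=B[z_iv])  # x_iv ~ Mult(b_{z_iv})
        words.append(int(x_iv))

    docs.append(words)

print(docs)  # each document is a list of word indices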

Inference

The goal of inference in LDA is that given a corpus, we infer the underlying topics that explain those documents, according to the generative process above. Essentially, given the observed words $X$, we are inverting the above process to find $Z$, $\pi$, and $B$.

We will infer those variables using the Gibbs sampling algorithm. In short, it works by repeatedly sampling each variable from its distribution given all the other variables (its full conditional distribution). Because of the Dirichlet-Multinomial conjugacy, the full conditionals are as follows:

$$
\begin{aligned}
p(z_{iv} = k \mid \pi_i, B, x_{iv}) &\propto \pi_{ik} \, b_{k, x_{iv}} \\
\pi_i \mid Z &\sim \mathrm{Dir}(\alpha + m_i), \quad m_{ik} = \sum_{v} \mathbb{1}[z_{iv} = k] \\
b_k \mid Z, X &\sim \mathrm{Dir}(\gamma + n_k), \quad n_{kv} = \sum_{i} \sum_{l} \mathbb{1}[x_{il} = v] \, \mathbb{1}[z_{il} = k]
\end{aligned}
$$

Essentially, all we are doing is counting the assignments of words and documents to particular topics. Those counts $m_{ik}$ and $n_{kv}$ are the sufficient statistics for the full conditionals.

Given those full conditionals, the rest is as easy as plugging those into the Gibbs Sampling framework, as we shall discuss in the next section.
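
As a quick sanity check of the conjugacy argument (a standard Dirichlet-Multinomial calculation, not anything specific to LDA), the full conditional of $\pi_i$ falls out as:

$$
p(\pi_i \mid Z, \alpha)
\;\propto\; p(\pi_i \mid \alpha) \prod_{v} p(z_{iv} \mid \pi_i)
\;\propto\; \prod_{k} \pi_{ik}^{\alpha - 1} \prod_{k} \pi_{ik}^{m_{ik}}
\;=\; \prod_{k} \pi_{ik}^{\alpha + m_{ik} - 1}
\quad\Longrightarrow\quad
\pi_i \mid Z \sim \mathrm{Dir}(\alpha + m_i).
$$

The same argument, with the word counts $n_{kv}$ in place of the topic counts, gives the full conditional of $b_k$; the conditional of $z_{iv}$ is just Bayes' rule applied to a single categorical variable.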

Implementation

We begin by randomly initializing the topic assignment matrix $Z$. We also sample the initial values of $\pi$ and $B$ from their Dirichlet priors.

import numpy as np

# Dirichlet priors
alpha = 1
gamma = 1

# Z := word topic assignment
Z = np.zeros(shape=[N_D, N_W])

for i in range(N_D):
    for l in range(N_W):
        Z[i, l] = np.random.randint(N_K)  # randomly assign word's topic

# Pi := document topic distribution
Pi = np.zeros([N_D, N_K])

for i in range(N_D):
    Pi[i] = np.random.dirichlet(alpha*np.ones(N_K))

# B := word topic distribution
B = np.zeros([N_K, N_W])

for k in range(N_K):
    B[k] = np.random.dirichlet(gamma*np.ones(N_W))

Then we iterate: at each step, we sample new values for each of those variables from the full conditionals of the previous section:

for it in range(1000):
    # Sample from full conditional of Z
    # ---------------------------------
    for i in range(N_D):
        for v in range(N_W):  # in this example every document has N_W word positions
            # Calculate params for Z
            p_iv = np.exp(np.log(Pi[i]) + np.log(B[:, X[i, v]]))
            p_iv /= np.sum(p_iv)

            # Resample word topic assignment Z
            Z[i, v] = np.random.multinomial(1, p_iv).argmax()

    # Sample from full conditional of Pi
    # ----------------------------------
    for i in range(N_D):
        m = np.zeros(N_K)

        # Gather sufficient statistics
        for k in range(N_K):
            m[k] = np.sum(Z[i] == k)

        # Resample doc topic dist.
        Pi[i, :] = np.random.dirichlet(alpha + m)

    # Sample from full conditional of B
    # ---------------------------------
    for k in range(N_K):
        n = np.zeros(N_W)

        # Gather sufficient statistics
        for v in range(N_W):
            for i in range(N_D):
                for l in range(N_W):
                    n[v] += (X[i, l] == v) and (Z[i, l] == k)

        # Resample word topic dist.
        B[k, :] = np.random.dirichlet(gamma + n)

And with that we are essentially done. We can inspect the result by looking at those variables after the sampler has run for some iterations.
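
For example, here is a small sketch of how one might inspect the inferred quantities once the loop above has finished (it only uses the variables defined above; listing the most probable words per topic via np.argsort is just one convenient way to summarize B):

# Inspect the inferred quantities after the Gibbs iterations
print('Document topic distribution:')
print(Pi)

print('Topic word distribution:')
print(B)

# Most probable words per topic, in decreasing order of probability
for k in range(N_K):
    print('Topic {}: {}'.format(k, np.argsort(B[k])[::-1]))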

Example

Let’s say we have these data:

# Words
W = np.array([0, 1, 2, 3, 4])

# X := document words
X = np.array([
    [0, 0, 1, 2, 2],
    [0, 0, 1, 1, 1],
    [0, 1, 2, 2, 2],
    [4, 4, 4, 4, 4],
    [3, 3, 4, 4, 4],
    [3, 4, 4, 4, 4]
])

N_D = X.shape[0]  # num of docs
N_W = W.shape[0]  # num of words
N_K = 2  # num of topics

Those data are already in bag of words representation, so they look a little abstract at a glance. However, if we look closely, we can see two big clusters of documents based on their words: the first three documents mostly use words {0, 1, 2}, while the last three mostly use words {3, 4}. Therefore, we expect that after our sampler converges to the posterior, the topic distributions of those documents will follow this intuition.

Here is the result:

Document topic distribution:
----------------------------
[[ 0.81960751 0.18039249]
[ 0.8458758 0.1541242 ]
[ 0.78974177 0.21025823]
[ 0.20697807 0.79302193]
[ 0.05665149 0.94334851]
[ 0.15477016 0.84522984]]

As we can see, documents 1, 2, and 3 do indeed tend to share the same topic, and the same can be said of documents 4, 5, and 6.
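
As a small follow-up check (using the Pi from the run above; note that the topic labels may be swapped between runs), reading off the most probable topic of each document recovers the two clusters we expected:

print(Pi.argmax(axis=1))  # for the run above: [0 0 0 1 1 1]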

References

  1. Blei, David M., Andrew Y. Ng, and Michael I. Jordan. "Latent Dirichlet Allocation." Journal of Machine Learning Research 3 (2003): 993-1022.
  2. Murphy, Kevin P. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.