Tags: #probability
-
Maximizing likelihood is equivalent to minimizing KL divergence
Back
We will show that maximum likelihood estimation (MLE) is equivalent, in the large-sample limit, to minimizing the KL divergence between the true data distribution and the model distribution.
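
A sketch of the standard argument, writing $p^*$ for the true distribution and $p_\theta$ for the model (this notation is assumed here, not given on the card). Given i.i.d. samples $x_1, \dots, x_n \sim p^*$, the MLE objective is

$$
\hat{\theta}_{\text{MLE}} = \arg\max_\theta \frac{1}{n} \sum_{i=1}^n \log p_\theta(x_i).
$$

By the law of large numbers, as $n \to \infty$ the average log-likelihood converges to its expectation:

$$
\frac{1}{n} \sum_{i=1}^n \log p_\theta(x_i) \;\to\; \mathbb{E}_{x \sim p^*}\!\left[\log p_\theta(x)\right].
$$

Expanding the KL divergence,

$$
D_{\mathrm{KL}}(p^* \,\|\, p_\theta) = \mathbb{E}_{x \sim p^*}\!\left[\log p^*(x)\right] - \mathbb{E}_{x \sim p^*}\!\left[\log p_\theta(x)\right],
$$

where the first term (the negative entropy of $p^*$) does not depend on $\theta$. Hence

$$
\arg\max_\theta \, \mathbb{E}_{x \sim p^*}\!\left[\log p_\theta(x)\right] = \arg\min_\theta \, D_{\mathrm{KL}}(p^* \,\|\, p_\theta),
$$

so maximizing the likelihood and minimizing $D_{\mathrm{KL}}(p^* \,\|\, p_\theta)$ pick out the same $\theta$.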