
Fisher Information Matrix

Suppose we have a model parameterized by a parameter vector $\theta$ that models a distribution $p(x \vert \theta)$. In frequentist statistics, the way we learn $\theta$ is to maximize the likelihood $p(x \vert \theta)$ wrt. $\theta$. To assess the goodness of our estimate of $\theta$ we define a score function:

$$ s(\theta) = \nabla_\theta \log p(x \vert \theta) \, , $$

that is, the score function is the gradient of the log likelihood function. The result about the score function below is an important building block for our discussion.
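To make this concrete, here is a minimal sketch (not from the original derivation) of the score function computed with JAX's autodiff, assuming a toy univariate Gaussian model $p(x \vert \theta) = \mathcal{N}(x \vert \mu, \sigma^2)$ with $\theta = (\mu, \log\sigma)$; the names `log_likelihood` and `score` are our own.

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

# Assumed toy model: p(x | theta) = N(x; mu, sigma^2), with theta = (mu, log_sigma).
def log_likelihood(theta, x):
    mu, log_sigma = theta
    return norm.logpdf(x, loc=mu, scale=jnp.exp(log_sigma))

# Score function: gradient of the log likelihood wrt. theta.
score = jax.grad(log_likelihood, argnums=0)

theta = jnp.array([0.0, 0.0])   # mu = 0, log_sigma = 0 (i.e. sigma = 1)
print(score(theta, 1.5))        # [1.5, 1.25]: d/d mu and d/d log_sigma
```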

Claim: The expected value of the score wrt. our model is zero.

Proof.    Below, the gradient is wrt. $\theta$.

$$ \begin{align}
\mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ s(\theta) \right] &= \mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \nabla \log p(x \vert \theta) \right] \\
    &= \int \nabla \log p(x \vert \theta) \, p(x \vert \theta) \, \mathrm{d}x \\
    &= \int \frac{\nabla p(x \vert \theta)}{p(x \vert \theta)} \, p(x \vert \theta) \, \mathrm{d}x \\
    &= \int \nabla p(x \vert \theta) \, \mathrm{d}x \\
    &= \nabla \int p(x \vert \theta) \, \mathrm{d}x \\
    &= \nabla 1 \\
    &= 0 \qquad \blacksquare
\end{align} $$

But how certain are we of our estimate? We can define an uncertainty measure around the expected estimate. That is, we look at the covariance of the score of our model. Taking the result from above:

$$ \mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \left( s(\theta) - 0 \right) \left( s(\theta) - 0 \right)^{\mathrm{T}} \right] = \mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \nabla \log p(x \vert \theta) \, \nabla \log p(x \vert \theta)^{\mathrm{T}} \right] \, . $$

We can then see this covariance as a measure of information. The covariance of the score function above is the definition of Fisher Information. As we assume $\theta$ is a vector, the Fisher Information is in matrix form, called the Fisher Information Matrix:

$$ \mathrm{F} = \mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \nabla \log p(x \vert \theta) \, \nabla \log p(x \vert \theta)^{\mathrm{T}} \right] \, . $$

However, usually our likelihood function is complicated and computing the expectation is intractable. We can approximate the expectation in $\mathrm{F}$ using the empirical distribution $\hat{q}(x)$, which is given by our training data $X = \{ x_1, x_2, \dots, x_N \}$. In this form, $\mathrm{F}$ is called the Empirical Fisher:

$$ \mathrm{F} = \frac{1}{N} \sum_{i=1}^{N} \nabla \log p(x_i \vert \theta) \, \nabla \log p(x_i \vert \theta)^{\mathrm{T}} \, . $$
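As an illustration (our own sketch, under the same toy Gaussian assumptions as above), the Empirical Fisher can be estimated as the average outer product of per-example scores; `empirical_fisher` is a hypothetical helper, not a library function.

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

# Toy model from the previous sketch: p(x | theta) = N(x; mu, sigma^2).
def log_likelihood(theta, x):
    mu, log_sigma = theta
    return norm.logpdf(x, loc=mu, scale=jnp.exp(log_sigma))

score = jax.grad(log_likelihood, argnums=0)

def empirical_fisher(theta, xs):
    """Empirical Fisher: average outer product of per-example scores."""
    scores = jax.vmap(lambda x: score(theta, x))(xs)        # shape (N, dim(theta))
    return scores.T @ scores / xs.shape[0]

theta = jnp.array([0.0, 0.0])                               # mu = 0, sigma = 1
xs = jax.random.normal(jax.random.PRNGKey(0), (1_000,))     # data drawn from N(0, 1)
F = empirical_fisher(theta, xs)                             # roughly diag([1., 2.]) here
```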

Fisher and Hessian

One property of $\mathrm{F}$ that is not obvious is that it has the interpretation of being the negative expected Hessian of our model’s log likelihood.

Claim: The negative expected Hessian of the log likelihood is equal to the Fisher Information Matrix $\mathrm{F}$.

Proof.    The Hessian of the log likelihood is given by the Jacobian of its gradient:

$$ \begin{align}
\mathrm{H}_{\log p(x \vert \theta)} &= \mathrm{J} \left( \frac{\nabla p(x \vert \theta)}{p(x \vert \theta)} \right) \\
    &= \frac{\mathrm{H}_{p(x \vert \theta)} \, p(x \vert \theta) - \nabla p(x \vert \theta) \, \nabla p(x \vert \theta)^{\mathrm{T}}}{p(x \vert \theta) \, p(x \vert \theta)} \\
    &= \frac{\mathrm{H}_{p(x \vert \theta)}}{p(x \vert \theta)} - \left( \frac{\nabla p(x \vert \theta)}{p(x \vert \theta)} \right) \left( \frac{\nabla p(x \vert \theta)}{p(x \vert \theta)} \right)^{\mathrm{T}} \, ,
\end{align} $$

where the second line is a result of applying the quotient rule of derivatives. Taking expectation wrt. our model, we have:

$$ \begin{align}
\mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \mathrm{H}_{\log p(x \vert \theta)} \right] &= \mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \frac{\mathrm{H}_{p(x \vert \theta)}}{p(x \vert \theta)} \right] - \mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \nabla \log p(x \vert \theta) \, \nabla \log p(x \vert \theta)^{\mathrm{T}} \right] \\
    &= \int \frac{\mathrm{H}_{p(x \vert \theta)}}{p(x \vert \theta)} \, p(x \vert \theta) \, \mathrm{d}x \; - \; \mathrm{F} \\
    &= \int \mathrm{H}_{p(x \vert \theta)} \, \mathrm{d}x \; - \; \mathrm{F} \\
    &= \mathrm{H}_{\int p(x \vert \theta) \, \mathrm{d}x} - \mathrm{F} \\
    &= \mathrm{H}_{1} - \mathrm{F} \\
    &= -\mathrm{F} \, .
\end{align} $$

Thus we have $\mathrm{F} = -\mathop{\mathbb{E}}_{p(x \vert \theta)} \left[ \mathrm{H}_{\log p(x \vert \theta)} \right]$. $\qquad \blacksquare$

Indeed, knowing this result, we can see the role of $\mathrm{F}$ as a measure of curvature of the log likelihood function.
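A quick numerical sanity check of this identity (continuing the toy Gaussian sketch above, and reusing its `log_likelihood`, `theta`, `xs`, and `F`): the negative Hessian of the log likelihood, averaged over data drawn from the model at $\theta$, should roughly match the Empirical Fisher.

```python
import jax
import jax.numpy as jnp

# Per-example Hessian of the log likelihood wrt. theta, averaged over the data.
hess = jax.hessian(log_likelihood, argnums=0)
avg_hessian = jnp.mean(jax.vmap(lambda x: hess(theta, x))(xs), axis=0)

print(F)             # Empirical Fisher from the previous sketch
print(-avg_hessian)  # should be close to F (they agree in expectation)
```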

Conclusion

The Fisher Information Matrix is defined as the covariance of the score function. It is a curvature matrix and has an interpretation as the negative expected Hessian of the log likelihood function. Thus, the immediate application of $\mathrm{F}$ is as a drop-in replacement for the Hessian $\mathrm{H}$ in second order optimization methods.
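For illustration only, a minimal sketch (our own, with a hypothetical `fisher_update` helper) of such a second-order-style step, where $\mathrm{F}$ preconditions the gradient in place of the Hessian; a small damping term is added for numerical stability.

```python
import jax.numpy as jnp

def fisher_update(theta, grad_loss, F, lr=0.1, damping=1e-3):
    """One parameter update preconditioned by the Fisher instead of the Hessian."""
    step = jnp.linalg.solve(F + damping * jnp.eye(F.shape[0]), grad_loss)
    return theta - lr * step
```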

One of the most exciting results about $\mathrm{F}$ is that it has a connection to the KL-divergence. This gives rise to the natural gradient method, which we shall discuss further in the next article.

References

  1. Martens, James. “New insights and perspectives on the natural gradient method.” arXiv preprint arXiv:1412.1193 (2014).
  2. Ly, Alexander, et al. “A tutorial on Fisher information.” Journal of Mathematical Psychology 80 (2017): 40-55.