
Linear Regression: A Bayesian Point of View

We all know the first model we learn when studying Machine Learning: Linear Regression. It is simple, intuitive, and stimulates us to go deeper down the Machine Learning rabbit hole.

Linear Regression can be interpreted intuitively from several points of view, e.g. geometry and statistics (the frequentist one!). Where there is a frequentist point of view, there is usually a Bayesian counterpart. Hence, in this post, we will address the Bayesian point of view of Linear Regression.

Linear Regression: A Refresher

Recall that in Linear Regression, we want to map our inputs into real numbers, i.e. $f: \mathbb{R}^d \to \mathbb{R}$. For example, given some features of a student, e.g. the number of hours studied, the number of subjects taken, and the IQ, we want to predict his or her GPA.

There are several types of Linear Regression, depending on the cost function and the regularizer. In this post, we will focus on Linear Regression with an $L_2$ cost and $L_2$ regularization. In statistics, this kind of regression is called Ridge Regression.

Formally, the objective is as follows:

$$ L(w) = \frac{1}{2} \sum_{i=1}^N \left( y_i - \hat{y}_i \right)^2 + \frac{\lambda}{2} \lVert w \rVert^2 $$

where $y_i$ is the ground truth value and $\hat{y}_i$ is given by:

$$ \hat{y}_i = w^T x_i $$

which is a linear combination of the feature vector $x_i$ and the weight vector $w$. The additional factor of $\frac{1}{2}$ in both terms is just for mathematical convenience when taking the derivative.

The idea is then to minimize this objective function with respect to $w$. That is, we want to find the weight vector $w$ that minimizes the regularized squared error.
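
To make this concrete, here is a minimal NumPy sketch (my own illustration, not code from this post; the toy data and the value of $\lambda$ are made up) that uses the well-known closed-form minimizer of the Ridge objective, $w = (X^T X + \lambda I)^{-1} X^T y$:

```python
import numpy as np

# Toy data, purely illustrative: N examples with d features.
rng = np.random.default_rng(0)
N, d = 100, 3
X = rng.normal(size=(N, d))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=N)

lam = 1.0  # regularization strength lambda

# Closed-form Ridge solution: w = (X^T X + lambda * I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print("Ridge weights:", w_ridge)
```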

Of course, we could ignore the regularization term. What we end up with, then, is vanilla Linear Regression:

$$ L(w) = \frac{1}{2} \sum_{i=1}^N \left( y_i - w^T x_i \right)^2 $$

Minimizing this objective is the definition of the Linear Least Squares problem.
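
For comparison, a small sketch (again my own, on the same kind of toy data) that solves the unregularized problem with an off-the-shelf least squares solver:

```python
import numpy as np

# Toy data as before (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Vanilla Linear Least Squares: minimize the squared error, no regularizer.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Least squares weights:", w_ls)
```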

Frequentist view of Linear Regression

We could write the regression target of the above model as the predicted value plus some error:

$$ y_i = w^T x_i + \epsilon_i $$

or, equivalently, we could say that the error is:

$$ \epsilon_i = y_i - w^T x_i $$
Now, let’s say we model each regression target as a Gaussian random variable, i.e. $y_i \sim \mathcal{N}(\mu_i, \sigma^2)$, with mean $\mu_i = \hat{y}_i = w^T x_i$, the prediction of our model. Formally:

$$ p(y_i \mid x_i, w) = \mathcal{N}(y_i \mid w^T x_i, \sigma^2) $$
Then, to find the optimum $w$, we could use Maximum Likelihood Estimation (MLE). As the above model is a likelihood, i.e. it describes our data under the parameter $w$, we will do MLE on it:

$$ w_{\mathrm{MLE}} = \underset{w}{\operatorname{argmax}} \; \prod_{i=1}^N p(y_i \mid x_i, w) $$
The PDF of the Gaussian is given by:

$$ \mathcal{N}(y \mid \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\!\left( -\frac{(y - \mu)^2}{2 \sigma^2} \right) $$
As we are doing maximization, we could ignore the normalizing constant of the likelihood. Hence:

$$ \prod_{i=1}^N p(y_i \mid x_i, w) \propto \prod_{i=1}^N \exp\!\left( -\frac{(y_i - w^T x_i)^2}{2 \sigma^2} \right) $$
As always, it is easier to optimize the log likelihood:

$$ \sum_{i=1}^N \log p(y_i \mid x_i, w) = -\frac{1}{2 \sigma^2} \sum_{i=1}^N (y_i - w^T x_i)^2 + \mathrm{const} $$
For simplicity, let’s say $\sigma^2 = 1$; then:

$$ \sum_{i=1}^N \log p(y_i \mid x_i, w) = -\frac{1}{2} \sum_{i=1}^N (y_i - w^T x_i)^2 + \mathrm{const} $$
So we see that maximizing this log likelihood is the same as minimizing the squared error: doing MLE with a Gaussian likelihood is equal to Linear Regression!
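
As a quick sanity check (a sketch with made-up data; the function name `neg_log_likelihood` is mine), we can minimize the negative Gaussian log likelihood numerically and verify that it lands on the same weights as plain least squares:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def neg_log_likelihood(w, sigma2=1.0):
    # Negative Gaussian log likelihood (up to an additive constant):
    # sum_i (y_i - w^T x_i)^2 / (2 * sigma^2)
    return np.sum((y - X @ w) ** 2) / (2.0 * sigma2)

w_mle = minimize(neg_log_likelihood, x0=np.zeros(3)).x
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(w_mle, w_ls, atol=1e-4))  # True: MLE matches least squares
```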

Bayesian view of Linear Regression

But what if we want to go Bayesian, i.e. introduce a prior and work with the posterior instead? Well, then we are doing MAP estimation! The posterior is the likelihood times the prior (up to a normalizing constant):

$$ p(w \mid X, y) \propto p(y \mid X, w) \, p(w) $$

where $X$ and $y$ collect all of our data points, so that $p(y \mid X, w) = \prod_{i=1}^N p(y_i \mid x_i, w)$.
Since we already know the likelihood, we now ask: what should the prior be? If we set it to be uniformly distributed, then we will be back at the MLE estimate. So, for a non-trivial example, let’s use a Gaussian prior for the weight $w$:

$$ p(w) = \mathcal{N}(w \mid 0, \tau^2 I) $$
Expanding the PDF, again ignoring the normalizing constant, and keeping in mind that the prior mean is $0$, we have:

$$ p(w) \propto \exp\!\left( -\frac{w^T w}{2 \tau^2} \right) = \exp\!\left( -\frac{\lVert w \rVert^2}{2 \tau^2} \right) $$
Let’s derive the posterior:

$$ p(w \mid X, y) \propto \left[ \prod_{i=1}^N \exp\!\left( -\frac{(y_i - w^T x_i)^2}{2 \sigma^2} \right) \right] \exp\!\left( -\frac{\lVert w \rVert^2}{2 \tau^2} \right) $$
And the log posterior is then:

$$ \log p(w \mid X, y) = -\frac{1}{2 \sigma^2} \sum_{i=1}^N (y_i - w^T x_i)^2 - \frac{1}{2 \tau^2} \lVert w \rVert^2 + \mathrm{const} $$
Seems familiar, right? Now if we assume that $\sigma^2 = 1$ and set $\lambda = \frac{1}{\tau^2}$, then our log posterior becomes:

$$ \log p(w \mid X, y) = -\frac{1}{2} \sum_{i=1}^N (y_i - w^T x_i)^2 - \frac{\lambda}{2} \lVert w \rVert^2 + \mathrm{const} $$
That is, up to a sign flip and an additive constant, the log posterior of a Gaussian likelihood with a Gaussian prior is the same as the objective function of Ridge Regression! Hence, a Gaussian prior on the weights is equal to $L_2$ regularization!
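
Continuing the same toy setup (a sketch, not the post’s code; `neg_log_posterior` is my own name), maximizing this log posterior numerically gives the same weights as the closed-form Ridge solution:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
lam = 1.0  # lambda = 1 / tau^2, with sigma^2 = 1

def neg_log_posterior(w):
    # Negative log posterior (up to a constant):
    # 1/2 * sum_i (y_i - w^T x_i)^2 + lambda/2 * ||w||^2
    return 0.5 * np.sum((y - X @ w) ** 2) + 0.5 * lam * np.dot(w, w)

w_map = minimize(neg_log_posterior, x0=np.zeros(3)).x
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(np.allclose(w_map, w_ridge, atol=1e-4))  # True: MAP matches Ridge
```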

Full Bayesian Approach

Of course, the above is not fully Bayesian, as we are doing a point estimate in the form of MAP. This is just a “shortcut”, as we do not need to compute the full posterior distribution. In the full Bayesian approach, we report the full posterior distribution, and at test time we use the posterior to weight the predictions for the new data, i.e. we marginalize to get the posterior predictive distribution:

$$ p(y' \mid x', X, y) = \int p(y' \mid x', w) \, p(w \mid X, y) \, dw $$

That is, given a new data point $x'$, we compute its likelihood $p(y' \mid x', w)$ and weigh it by the posterior $p(w \mid X, y)$.

Intuitively, given all possible values of $w$ under the posterior, we try those values one by one to predict the new data point. The results are then averaged, weighted proportionally to the posterior probability of each value; hence we are taking an expectation.
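
To make that intuition concrete, here is a hypothetical sketch of the expectation as a Monte Carlo average, assuming we somehow already have samples of $w$ from the posterior (e.g. from an MCMC run); the function name and setup are mine, not the post’s:

```python
import numpy as np

def posterior_predictive_mean(x_new, w_samples):
    """Approximate E[y' | x'] by averaging predictions over posterior samples.

    w_samples: array of shape (S, d), assumed to be draws from the
    posterior p(w | X, y), e.g. produced by an MCMC sampler.
    """
    # Each posterior sample w^(s) predicts w^(s)^T x'; averaging these
    # predictions approximates the expectation under the posterior.
    return np.mean(w_samples @ x_new)

# Illustration with fake "posterior samples":
fake_samples = np.random.default_rng(0).normal(size=(1000, 3))
print(posterior_predictive_mean(np.array([1.0, 0.0, -1.0]), fake_samples))
```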

And of course, that is the reason why we use a shortcut in the form of MAP. For illustration, if each component of $w$ were binary, i.e. had only two possible values, and there were $d$ components in $w$, we would be talking about $2^d$ possible assignments for $w$, which is exponential in $d$; with just $d = 30$ weights that is already about a billion assignments. In the real world, each component of $w$ is a real number, which makes enumerating all possible values of $w$ intractable!

Of course, we could use approximate methods like Variational Bayes or MCMC, but they are still more costly than MAP. As MAP, like MLE, is guaranteed to find one of the modes (local maxima) of the posterior, it is good enough.

Conclusion

In this post we looked at Linear Regression from several different points of view.

First, we looked at the definition of Linear Regression from the plain Machine Learning point of view, then from frequentist statistics, and finally from Bayesian statistics.

We noted that the Bayesian version of Linear Regression using MAP estimation is not a fully Bayesian approach, since MAP is just a shortcut.

We then noted why the full Bayesian approach is difficult and often intractable, even for this simple regression model.
