machine-learning, probability, bayesian, mle, beta-distribution

Why do we choose the Beta distribution as a prior on the hypothesis?


I watched the machine learning lecture videos for course 10-701 (2011) taught by Tom Mitchell at CMU. He was teaching Maximum Likelihood Estimation when he used a Beta distribution as the prior on theta. Why did he choose that distribution in particular?

This is a screenshot from the lecture:


Solution

  • In this lecture, Prof. Mitchell gives an example of flipping a coin and estimating its fairness, i.e. the probability of heads, theta. He reasonably chose a binomial distribution to model this experiment.

    The reason to choose a beta distribution for the prior is to simplify the math when computing the posterior. This works well because the beta is a conjugate prior for the binomial; the professor mentions this at the very end of the same lecture. That doesn't mean one can't use some other prior, e.g. a truncated normal, but non-conjugate priors lead to complicated posterior distributions that are hard to optimize, integrate over, and so on (see the derivation and code sketch after this answer).

    This is a general principle: prefer a conjugate prior over a more complex distribution, even if it doesn't capture your prior knowledge exactly, because the math stays simple.
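
    To make the conjugacy concrete, here is the standard Beta-binomial update for the coin example (a sketch; the symbols $h$, $t$, $\alpha$, $\beta$ for the head count, tail count, and prior parameters are my notation, not the lecture's):

    $$P(\theta \mid D) \propto P(D \mid \theta)\,P(\theta) \propto \theta^{h}(1-\theta)^{t}\cdot\theta^{\alpha-1}(1-\theta)^{\beta-1} = \theta^{\alpha+h-1}(1-\theta)^{\beta+t-1},$$

    so the posterior is again a Beta distribution, $\mathrm{Beta}(\alpha+h,\ \beta+t)$, and the MAP estimate has the closed form $\hat{\theta}_{\text{MAP}} = \frac{\alpha+h-1}{\alpha+\beta+h+t-2}$.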
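
    The same update in code, as a minimal Python sketch (SciPy assumed; the flip counts and prior parameters below are illustrative, not from the lecture):

    ```python
    import scipy.stats as stats

    # Hypothetical data: h heads and t tails out of n flips.
    h, n = 7, 10
    t = n - h

    # Beta(alpha, beta) prior on theta; alpha = beta = 2 encodes a mild
    # belief that the coin is roughly fair (illustrative choice).
    alpha, beta = 2.0, 2.0

    # Conjugacy: the posterior is Beta(alpha + h, beta + t) -- no integration needed.
    post_a, post_b = alpha + h, beta + t
    posterior = stats.beta(post_a, post_b)

    theta_mle = h / n                                 # maximum likelihood estimate
    theta_map = (post_a - 1) / (post_a + post_b - 2)  # posterior mode (MAP)

    print(f"MLE: {theta_mle:.3f}  MAP: {theta_map:.3f}  posterior mean: {posterior.mean():.3f}")
    print("95% credible interval:", posterior.interval(0.95))
    ```

    Every quantity here is closed-form; with a non-conjugate prior you would need numerical optimization or sampling to get the same answers.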