Index: The Book of Statistical Proofs ▷ General Theorems ▷ Frequentist statistics ▷ Likelihood theory ▷ MLE can be biased

Theorem: Maximum likelihood estimation can result in biased estimates of model parameters, i.e. estimates whose long-run expected value is unequal to the quantity they estimate:

\[\label{eq:mle-bias} \mathrm{E}\left[ \hat{\theta}_\mathrm{MLE} \right] = \mathrm{E}\left[ \operatorname*{arg\,max}_\theta \mathrm{LL}_m(\theta) \right] \neq \theta \; .\]
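Here, $\mathrm{LL}_m(\theta)$ denotes the log-likelihood function of a generative model $m$ and $\hat{\theta}_\mathrm{MLE}$ the resulting maximum likelihood estimate; with the standard definitions, these can be written as

\[\mathrm{LL}_m(\theta) = \log p(x \mid \theta, m) \quad \text{and} \quad \hat{\theta}_\mathrm{MLE} = \operatorname*{arg\,max}_\theta \mathrm{LL}_m(\theta) \; .\]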

Proof: Consider a set of independent and identically distributed normal observations $x = \left\lbrace x_1, \ldots, x_n \right\rbrace$ with unknown mean $\mu$ and variance $\sigma^2$:

\[\label{eq:ug} x_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(\mu, \sigma^2), \quad i = 1,\ldots,n \; .\]

Then, we know that the maximum likelihood estimator for the variance $\sigma^2$ underestimates the true variance of the data distribution:

\[\label{eq:resvar-bias} \mathrm{E}\left[ \hat{\sigma}^2_\mathrm{MLE} \right] = \frac{n-1}{n} \sigma^2 \neq \sigma^2 \; .\]
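This bias follows from the standard form of the ML variance estimator; a brief derivation sketch, assuming the usual definitions of the sample mean $\bar{x}$ and the ML estimator $\hat{\sigma}^2_\mathrm{MLE}$, is:

\[\hat{\sigma}^2_\mathrm{MLE} = \frac{1}{n} \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 \quad \text{with} \quad \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \; ,\]

\[\mathrm{E}\left[ \hat{\sigma}^2_\mathrm{MLE} \right] = \frac{1}{n} \sum_{i=1}^{n} \mathrm{E}\left[ \left( x_i - \bar{x} \right)^2 \right] = \frac{1}{n} \sum_{i=1}^{n} \left( \sigma^2 - \frac{2 \sigma^2}{n} + \frac{\sigma^2}{n} \right) = \frac{n-1}{n} \, \sigma^2 \; ,\]

where the second step uses $\mathrm{E}\left[ \left( x_i - \bar{x} \right)^2 \right] = \mathrm{Var}(x_i) - 2 \, \mathrm{Cov}(x_i, \bar{x}) + \mathrm{Var}(\bar{x})$ with $\mathrm{Cov}(x_i, \bar{x}) = \sigma^2/n$ and $\mathrm{Var}(\bar{x}) = \sigma^2/n$.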

This proves the existence of cases such as the one stated by the theorem.
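While not required for the proof, the bias can also be checked numerically; a minimal simulation sketch in Python (assuming NumPy is available; the choices $n = 10$, $\mu = 0$ and $\sigma^2 = 4$ are arbitrary) could look as follows:

import numpy as np

# draw many samples of size n from N(mu, sigma^2) and compute the ML variance estimate for each
rng = np.random.default_rng(0)
mu, sigma2, n, reps = 0.0, 4.0, 10, 100000
x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
sigma2_mle = np.mean((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1)
print(sigma2_mle.mean())         # close to (n-1)/n * sigma^2 = 3.6, not sigma^2 = 4
print((n - 1) / n * sigma2)      # theoretical expectation of the ML estimator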

Sources:

Metadata: ID: P317 | shortcut: mle-bias | author: JoramSoch | date: 2022-03-18, 17:26.