Proof: Corrected Akaike information criterion in terms of maximum log-likelihood
Index: The Book of Statistical Proofs ▷ Model Selection ▷ Classical information criteria ▷ Akaike information criterion ▷ Corrected AIC and maximum log-likelihood
Metadata: ID: P315 | shortcut: aicc-mll | author: JoramSoch | date: 2022-03-11, 16:53.
Theorem: The corrected Akaike information criterion of a generative model with likelihood function $p(y \vert \theta, m)$ is equal to
\[\label{eq:aicc-mll} \mathrm{AIC}_\mathrm{c}(m) = -2 \log p(y | \hat{\theta}, m) + \frac{2nk}{n-k-1}\]where $\log p(y \vert \hat{\theta}, m)$ is the maximum log-likelihood, $k$ is the number of free parameters, and $n$ is the number of observations.
Proof: The Akaike information criterion (AIC) is defined as
\[\label{eq:aic} \mathrm{AIC}(m) = -2 \log p(y | \hat{\theta}, m) + 2 \, k\]and the corrected Akaike information criterion is defined as
\[\label{eq:aicc} \mathrm{AIC}_\mathrm{c}(m) = \mathrm{AIC}(m) + \frac{2k^2 + 2k}{n-k-1} \; .\]Plugging \eqref{eq:aic} into \eqref{eq:aicc}, we obtain:
\[\label{eq:aicc-mll-qed} \begin{split} \mathrm{AIC}_\mathrm{c}(m) &= -2 \log p(y | \hat{\theta}, m) + 2 \, k + \frac{2k^2 + 2k}{n-k-1} \\ &= -2 \log p(y | \hat{\theta}, m) + \frac{2k(n-k-1)}{n-k-1} + \frac{2k^2 + 2k}{n-k-1} \\ &= -2 \log p(y | \hat{\theta}, m) + \frac{2nk - 2k^2 - 2k}{n-k-1} + \frac{2k^2 + 2k}{n-k-1} \\ &= -2 \log p(y | \hat{\theta}, m) + \frac{2nk}{n-k-1} \; . \end{split}\]∎
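The algebraic identity in the proof can be checked numerically. The sketch below (with hypothetical values for the maximum log-likelihood, $k$, and $n$) computes $\mathrm{AIC}_\mathrm{c}$ both via the definitional form $\mathrm{AIC}(m) + (2k^2 + 2k)/(n-k-1)$ and via the simplified form $-2 \log p(y|\hat{\theta},m) + 2nk/(n-k-1)$, and confirms that they agree:

```python
def aic(max_log_lik, k):
    # AIC(m) = -2 log p(y | theta_hat, m) + 2k
    return -2 * max_log_lik + 2 * k

def aicc(max_log_lik, k, n):
    # Simplified form: AICc(m) = -2 log p(y | theta_hat, m) + 2nk / (n - k - 1)
    return -2 * max_log_lik + (2 * n * k) / (n - k - 1)

# Hypothetical example values (not from the source)
mll, k, n = -123.4, 5, 50

# Definitional form: AICc(m) = AIC(m) + (2k^2 + 2k) / (n - k - 1)
definitional = aic(mll, k) + (2 * k**2 + 2 * k) / (n - k - 1)
simplified = aicc(mll, k, n)

print(abs(definitional - simplified) < 1e-9)  # → True
```

Note that the correction term requires $n > k + 1$; for $n = k + 1$ the denominator vanishes.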
Sources: - Wikipedia (2022): "Akaike information criterion"; in: Wikipedia, the free encyclopedia, retrieved on 2022-03-11; URL: https://en.wikipedia.org/wiki/Akaike_information_criterion#Modification_for_small_sample_size.