Proof: Log model evidence in terms of prior and posterior distribution
Metadata: ID: P314 | shortcut: lme-pnp | author: JoramSoch | date: 2022-03-11, 16:25.
Theorem: Let $p(y \vert \theta,m)$ be a likelihood function of a generative model $m$ for making inferences on model parameters $\theta$ given measured data $y$. Moreover, let $p(\theta \vert m)$ be a prior distribution on the model parameters $\theta$. Then, the log model evidence (LME), also called the marginal log-likelihood,
\[\label{eq:LME-term} \mathrm{LME}(m) = \log p(y|m) \; ,\]

can be expressed in terms of prior and posterior as

\[\label{eq:LME-bayes} \mathrm{LME}(m) = \log p(y|\theta,m) + \log p(\theta|m) - \log p(\theta|y,m) \; .\]

Proof: For a full probability model, Bayes’ theorem makes a statement about the posterior distribution:
\[\label{eq:BT} p(\theta|y,m) = \frac{p(y|\theta,m) \, p(\theta|m)}{p(y|m)} \; .\]

Rearranging this equation for $p(y \vert m)$ and taking the logarithm, we have:
\[\label{eq:LME-bayes-qed} \begin{split} \mathrm{LME}(m) = \log p(y|m) & = \log \frac{p(y|\theta,m) \, p(\theta|m)}{p(\theta|y,m)} \\ &= \log p(y|\theta,m) + \log p(\theta|m) - \log p(\theta|y,m) \; . \end{split}\]

Since the left-hand side does not depend on $\theta$, this identity holds at any value of the model parameters for which the right-hand side is defined. ∎
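As an illustration (a hypothetical example, not part of the source), consider a single Bernoulli observation $y = 1$ under a uniform prior $\theta \sim \mathrm{Beta}(1,1)$. Then $p(y|\theta,m) = \theta$, $p(\theta|m) = 1$ and the posterior is $\mathrm{Beta}(2,1)$ with density $p(\theta|y,m) = 2\theta$. Evaluating the identity at any $\theta \in (0,1)$ gives

\[\mathrm{LME}(m) = \log \theta + \log 1 - \log 2\theta = -\log 2 \; ,\]

which agrees with the direct computation $p(y|m) = \int_0^1 \theta \, \mathrm{d}\theta = 1/2$.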