Index: The Book of Statistical Proofs ▷ Model Selection ▷ Bayesian model selection ▷ Posterior model probability ▷ Derivation

Theorem: Let there be a set of generative models $m_1, \ldots, m_M$ with model evidences $p(y|m_1), \ldots, p(y|m_M)$ and prior probabilities $p(m_1), \ldots, p(m_M)$. Then, the posterior probability of model $m_i$ is given by

\[\label{eq:PMP} p(m_i|y) = \frac{p(y|m_i) \, p(m_i)}{\sum_{j=1}^{M} p(y|m_j) \, p(m_j)}, \; i = 1, \ldots, M \; .\]
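As a quick sanity check of this formula (with made-up numbers, not taken from any source), consider $M = 2$ models with evidences $p(y|m_1) = 0.04$ and $p(y|m_2) = 0.01$ and uniform priors $p(m_1) = p(m_2) = 1/2$. Then

\[ p(m_1|y) = \frac{0.04 \cdot 0.5}{0.04 \cdot 0.5 + 0.01 \cdot 0.5} = \frac{0.02}{0.025} = 0.8 \]

and consequently $p(m_2|y) = 0.2$, so the posterior model probabilities sum to one, as they must.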

Proof: From Bayes’ theorem, the posterior model probability of $m_i$ can be derived as

\[\label{eq:PMP-s1} p(m_i|y) = \frac{p(y|m_i) \, p(m_i)}{p(y)} \; .\]

Using the law of marginal probability, the denominator $p(y)$ can be rewritten as a sum over the joint probabilities of data and models, such that

\[\label{eq:PMP-s2} p(m_i|y) = \frac{p(y|m_i) \, p(m_i)}{\sum_{j=1}^{M} p(y,m_j)} \; .\]

Finally, applying the law of conditional probability to each term in the sum, we have

\[\label{eq:PMP-s3} p(m_i|y) = \frac{p(y|m_i) \, p(m_i)}{\sum_{j=1}^{M} p(y|m_j) \, p(m_j)} \; .\]
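Note that under a uniform model prior, $p(m_i) = 1/M$ for all $i$, the prior terms cancel and the posterior model probabilities reduce to the normalized model evidences:

\[ p(m_i|y) = \frac{p(y|m_i)}{\sum_{j=1}^{M} p(y|m_j)} \; . \]

In practice, model evidences are often only available as log model evidences and can be extremely small, so the final formula is best evaluated in the log domain. A minimal sketch of such a computation (function name and example values are illustrative, not from the source):

```python
import numpy as np

def posterior_model_probabilities(log_evidences, log_priors=None):
    """Compute p(m_i|y) from log model evidences log p(y|m_i).

    Uses the log-sum-exp trick for numerical stability; with
    log_priors=None, a uniform prior p(m_i) = 1/M is assumed.
    """
    log_evidences = np.asarray(log_evidences, dtype=float)
    if log_priors is None:
        log_priors = -np.log(len(log_evidences)) * np.ones_like(log_evidences)
    log_joint = log_evidences + log_priors  # log [p(y|m_j) p(m_j)]
    log_joint -= log_joint.max()            # shift to avoid underflow in exp
    joint = np.exp(log_joint)
    return joint / joint.sum()              # normalize, as in the theorem

# Example: three models with very small evidences
pmp = posterior_model_probabilities([-1200.0, -1195.0, -1210.0])
print(pmp)  # the second model dominates
```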

Metadata: ID: P139 | shortcut: pmp-der | author: JoramSoch | date: 2020-07-28, 03:58.