# Proof: Derivation of Bayesian model averaging

**Index:** The Book of Statistical Proofs ▷ Model Selection ▷ Bayesian model selection ▷ Bayesian model averaging ▷ Derivation

**Theorem:** Let $m_1, \ldots, m_M$ be $M$ statistical models with posterior model probabilities $p(m_1 \vert y), \ldots, p(m_M \vert y)$ and posterior distributions $p(\theta \vert y, m_1), \ldots, p(\theta \vert y, m_M)$. Then, the marginal posterior density over the shared parameters $\theta$, conditional on the measured data $y$ but unconditional on the modelling approach $m$, is given by:

\[\label{eq:BMA} p(\theta|y) = \sum_{i=1}^{M} p(\theta|y,m_i) \cdot p(m_i|y) \; .\]

**Proof:** Using the law of marginal probability, the probability distribution of the shared parameters $\theta$, conditional on the measured data $y$, can be obtained by marginalizing over the discrete random variable $m$ indexing the models:

\[\label{eq:BMA-s1} p(\theta|y) = \sum_{i=1}^{M} p(\theta,m_i|y) \; .\]

Using the law of conditional probability, the joint density inside the sum can be factorized to give

\[\label{eq:BMA-s2} p(\theta|y) = \sum_{i=1}^{M} p(\theta|y,m_i) \cdot p(m_i|y)\]where $p(\theta \vert y,m_i)$ is the posterior distribution of the $i$-th model and $p(m_i \vert y)$ is the posterior probability of the $i$-th model.

**∎**
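As a numerical illustration of the averaging formula, the following sketch mixes two hypothetical model posteriors (Gaussian densities with made-up means, variances, and model probabilities, chosen only for this example) and checks that the resulting averaged posterior is still a proper density:

```python
import numpy as np

# Grid over the shared parameter theta
theta = np.linspace(-10.0, 10.0, 2001)

def gauss(x, mu, sigma):
    """Normal density, used here as a stand-in posterior p(theta | y, m_i)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical posterior distributions p(theta | y, m_1), p(theta | y, m_2)
post_m1 = gauss(theta, mu=1.0, sigma=1.0)
post_m2 = gauss(theta, mu=3.0, sigma=2.0)

# Hypothetical posterior model probabilities p(m_1 | y), p(m_2 | y); sum to 1
pm = np.array([0.7, 0.3])

# Bayesian model averaging: p(theta | y) = sum_i p(theta | y, m_i) * p(m_i | y)
post_bma = pm[0] * post_m1 + pm[1] * post_m2

# The average of proper densities with weights summing to 1 is again a
# proper density: its integral over theta is (numerically close to) 1
mass = np.trapz(post_bma, theta)
print(mass)
```

Because each $p(\theta \vert y, m_i)$ integrates to one and the weights $p(m_i \vert y)$ sum to one, the averaged density integrates to one as well, which the printed integral confirms numerically.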

**Sources:**

**Metadata:** ID: P143 | shortcut: bma-der | author: JoramSoch | date: 2020-08-03, 22:05.