
Theorem: Let $m_1, \ldots, m_M$ be $M$ statistical models describing the same measured data $y$ with log model evidences $\mathrm{LME}(m_1), \ldots, \mathrm{LME}(m_M)$ and shared model parameters $\theta$. Then, Bayesian model averaging determines the following posterior distribution over $\theta$:

\[\label{eq:BMA-LME} p(\theta|y) = \sum_{i=1}^{M} p(\theta|m_i,y) \cdot \frac{\exp[\mathrm{LME}(m_i)] \, p(m_i)}{\sum_{j=1}^{M} \exp[\mathrm{LME}(m_j)] \, p(m_j)} \; ,\]

where $p(\theta \vert m_i,y)$ is the posterior distribution over $\theta$ obtained when assuming model $m_i$.
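For illustration (with values chosen purely for this example), consider $M = 2$ models with uniform prior probabilities $p(m_1) = p(m_2) = \tfrac{1}{2}$ and log model evidences $\mathrm{LME}(m_1) = -10$ and $\mathrm{LME}(m_2) = -12$. Then

\[ p(m_1|y) = \frac{e^{-10}}{e^{-10} + e^{-12}} = \frac{1}{1 + e^{-2}} \approx 0.88 \; , \]

so that the model-averaged posterior is approximately $p(\theta|y) \approx 0.88 \, p(\theta|m_1,y) + 0.12 \, p(\theta|m_2,y)$.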

Proof: According to the law of marginal probability, the probability of the shared parameters $\theta$, conditional on the measured data $y$, can be obtained by marginalizing over the discrete model variable $m$:

\[\label{eq:BMA-PMP} p(\theta|y) = \sum_{i=1}^{M} p(\theta|m_i,y) \cdot p(m_i|y) \; ,\]

where $p(m_i \vert y)$ is the posterior probability of the $i$-th model. Since the log model evidence is the logarithm of the marginal likelihood, $\mathrm{LME}(m_i) = \log p(y|m_i)$, applying Bayes' theorem at the level of models allows one to express the posterior model probabilities in terms of log model evidences as

\[\label{eq:PMP-LME} p(m_i|y) = \frac{\exp[\mathrm{LME}(m_i)] \, p(m_i)}{\sum_{j=1}^{M} \exp[\mathrm{LME}(m_j)] \, p(m_j)}\]

and by plugging \eqref{eq:PMP-LME} into \eqref{eq:BMA-PMP}, one arrives at \eqref{eq:BMA-LME}.
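As a practical aside (not part of the proof), equation \eqref{eq:PMP-LME} is usually evaluated after subtracting the maximum log model evidence before exponentiating; this shift cancels between numerator and denominator, but prevents numerical under- or overflow when the LMEs are large in magnitude. Below is a minimal sketch in Python, assuming NumPy; the function name and the example values are hypothetical.

```python
import numpy as np

def posterior_model_probabilities(lme, prior=None):
    """Compute posterior model probabilities p(m_i|y) from
    log model evidences LME(m_i), cf. eq. (PMP-LME).

    lme   : array-like of shape (M,), log model evidences
    prior : array-like of shape (M,), prior model probabilities
            (defaults to a uniform prior over models)
    """
    lme = np.asarray(lme, dtype=float)
    prior = np.full(lme.shape, 1.0 / lme.size) if prior is None else np.asarray(prior, dtype=float)
    # subtract max(LME) before exponentiating; the shift cancels
    # in the ratio but avoids under-/overflow of exp(LME)
    weighted = np.exp(lme - lme.max()) * prior
    return weighted / weighted.sum()

# example: three models with a uniform model prior
lme = [-1200.5, -1203.1, -1210.0]
pmp = posterior_model_probabilities(lme)
print(pmp, pmp.sum())  # mixture weights of eq. (BMA-LME); they sum to 1
```

The returned weights are exactly the coefficients multiplying $p(\theta|m_i,y)$ in \eqref{eq:BMA-LME}, so the model-averaged posterior is the corresponding mixture of the per-model posteriors.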

Sources:

Metadata: ID: P67 | shortcut: bma-lme | author: JoramSoch | date: 2020-02-27, 21:58.