Index: The Book of Statistical Proofs ▷ Statistical Models ▷ Univariate normal data ▷ Univariate Gaussian with known variance ▷ Expectation of log Bayes factor

Theorem: Let

\[\label{eq:ugkv} y = \left\lbrace y_1, \ldots, y_n \right\rbrace, \quad y_i \sim \mathcal{N}(\mu, \sigma^2), \quad i = 1, \ldots, n\]

be a univariate Gaussian data set with unknown mean $\mu$ and known variance $\sigma^2$. Moreover, assume two statistical models: one assuming that $\mu$ is zero (null model), the other imposing a normal distribution as the prior distribution on the model parameter $\mu$ (alternative model):

\[\label{eq:UGkv-m01} \begin{split} m_0&: \; y_i \sim \mathcal{N}(\mu, \sigma^2), \; \mu = 0 \\ m_1&: \; y_i \sim \mathcal{N}(\mu, \sigma^2), \; \mu \sim \mathcal{N}(\mu_0, \lambda_0^{-1}) \; . \end{split}\]

Then, under the null hypothesis that $m_0$ generated the data, the expectation of the log Bayes factor in favor of $m_1$ (with prior mean $\mu_0 = 0$) against $m_0$ is

\[\label{eq:UGkv-LBF} \left\langle \mathrm{LBF}_{10} \right\rangle = \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) + \frac{1}{2} \left( \frac{\lambda_n - \lambda_0}{\lambda_n} \right)\]

where $\lambda_n$ is the posterior precision for the univariate Gaussian with known variance.

Proof: The log Bayes factor for the univariate Gaussian with known variance is

\[\label{eq:UGkv-LBF-m10-s1} \mathrm{LBF}_{10} = \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) - \frac{1}{2} \left( \lambda_0 \mu_0^2 - \lambda_n \mu_n^2 \right)\]

where the posterior hyperparameters are given by

\[\label{eq:UGkv-post-par} \begin{split} \mu_n &= \frac{\lambda_0 \mu_0 + \tau n \bar{y}}{\lambda_0 + \tau n} \\ \lambda_n &= \lambda_0 + \tau n \end{split}\]
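These update equations are easy to compute numerically; the following is a minimal Python sketch, where the function name and argument order are illustrative assumptions, not from the source:

```python
def posterior_hyperparameters(y, mu0, lam0, tau):
    """Posterior mean and precision for the univariate Gaussian
    with known variance (precision tau = 1/sigma^2)."""
    n = len(y)
    y_bar = sum(y) / n                               # sample mean
    lam_n = lam0 + tau * n                           # posterior precision
    mu_n = (lam0 * mu0 + tau * n * y_bar) / lam_n    # posterior mean
    return mu_n, lam_n
```

For example, with $\mu_0 = 0$, $\lambda_0 = \tau = 1$ and four observations with mean $\bar{y} = 2$, this yields $\lambda_n = 5$ and $\mu_n = 8/5$.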

with the sample mean $\bar{y}$ and the inverse variance or precision $\tau = 1/\sigma^2$. Plugging $\mu_n$ from \eqref{eq:UGkv-post-par} into \eqref{eq:UGkv-LBF-m10-s1}, we obtain:

\[\label{eq:UGkv-LBF-m10-s2} \begin{split} \mathrm{LBF}_{10} &= \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) - \frac{1}{2} \left( \lambda_0 \mu_0^2 - \lambda_n \, \frac{(\lambda_0 \mu_0 + \tau n \bar{y})^2}{\lambda_n^2} \right) \\ &= \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) - \frac{1}{2} \left( \lambda_0 \mu_0^2 - \frac{1}{\lambda_n} (\lambda_0^2 \mu_0^2 + 2 \tau n \lambda_0 \mu_0 \bar{y} + \tau^2 (n \bar{y})^2) \right) \end{split}\]

Because $m_1$ uses a prior distribution with mean $\mu_0 = 0$ by construction, the log Bayes factor simplifies to:

\[\label{eq:UGkv-LBF-m10-s3} \mathrm{LBF}_{10} = \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) + \frac{1}{2} \left( \frac{\tau^2 (n \bar{y})^2}{\lambda_n} \right) \; .\]
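This simplified log Bayes factor can be evaluated directly from the sample mean; a minimal Python sketch, with an illustrative (assumed) function name:

```python
import math

def lbf_10(y_bar, n, tau, lam0):
    """Log Bayes factor in favor of m1 (prior mean mu_0 = 0) against m0,
    given sample mean y_bar, sample size n, data precision tau,
    and prior precision lam0."""
    lam_n = lam0 + tau * n                           # posterior precision
    return 0.5 * math.log(lam0 / lam_n) + 0.5 * (tau * n * y_bar) ** 2 / lam_n
```

Note that for $\bar{y} = 0$, only the (negative) log precision ratio remains, so the evidence then favors the null model.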

From \eqref{eq:ugkv}, we know that the data are independently distributed as $y_i \sim \mathcal{N}(\mu, \sigma^2)$, so that $\left\langle y_i^2 \right\rangle = \mu^2 + \sigma^2$ and $\left\langle y_i y_j \right\rangle = \mu^2$ for $i \neq j$. This allows us to derive the expectation of $(n \bar{y})^2$ as follows:

\[\label{eq:UGkv-E(ny2)} \begin{split} \left\langle (n \bar{y})^2 \right\rangle = \left\langle \sum_{i=1}^n \sum_{j=1}^n y_i y_j \right\rangle &= \left\langle n y_i^2 + (n^2-n) [y_i y_j]_{i \neq j} \right\rangle \\ &= n (\mu^2 + \sigma^2) + (n^2 - n) \mu^2 \\ &= n^2 \mu^2 + n \sigma^2 \; . \end{split}\]
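The identity $\left\langle (n \bar{y})^2 \right\rangle = n^2 \mu^2 + n \sigma^2$ can be checked by simulation; below is a minimal Monte Carlo sketch (function name, parameter values, and simulation count are illustrative assumptions):

```python
import random

def mc_expected_ny_squared(mu, sigma, n, sims=200_000, seed=0):
    """Monte Carlo estimate of <(n * y_bar)^2> for i.i.d. N(mu, sigma^2) data."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(sims):
        s = sum(rng.gauss(mu, sigma) for _ in range(n))  # s = n * y_bar
        total += s * s
    return total / sims
```

For instance, with $\mu = 0.5$, $\sigma = 1$ and $n = 5$, the estimate should lie close to $n^2 \mu^2 + n \sigma^2 = 11.25$.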

Applying this expected value to \eqref{eq:UGkv-LBF-m10-s3}, the expected LBF emerges as:

\[\label{eq:UGkv-LBF-m10-s4} \begin{split} \left\langle \mathrm{LBF}_{10} \right\rangle &= \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) + \frac{1}{2} \left( \frac{\tau^2 (n^2 \mu^2 + n \sigma^2)}{\lambda_n} \right) \\ &= \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) + \frac{1}{2} \left( \frac{(\tau n \mu)^2 + \tau n}{\lambda_n} \right) \end{split}\]

Under the null hypothesis that $m_0$ generated the data, the unknown mean is $\mu = 0$, such that the expected log Bayes factor further simplifies to:

\[\label{eq:UGkv-LBF-m10-s5} \left\langle \mathrm{LBF}_{10} \right\rangle = \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) + \frac{1}{2} \left( \frac{\tau n}{\lambda_n} \right) \; .\]

Finally, observing from \eqref{eq:UGkv-post-par} that $\tau n = \lambda_n - \lambda_0$ and plugging this into \eqref{eq:UGkv-LBF-m10-s5}, we obtain:

\[\label{eq:UGkv-LBF-m10-s6} \left\langle \mathrm{LBF}_{10} \right\rangle = \frac{1}{2} \log\left( \frac{\lambda_0}{\lambda_n} \right) + \frac{1}{2} \left( \frac{\lambda_n - \lambda_0}{\lambda_n} \right) \; .\]
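The whole theorem can also be verified numerically: simulate many data sets under $m_0$, compute the log Bayes factor for each, and compare the average to the closed-form expectation. A minimal Python sketch under assumed parameter values ($\lambda_0 = \tau = 1$, $n = 10$):

```python
import math
import random

def mc_expected_lbf(lam0, tau, n, sims=100_000, seed=1):
    """Average LBF_10 over data sets simulated under m0 (mu = 0)."""
    rng = random.Random(seed)
    sigma = 1.0 / math.sqrt(tau)      # known standard deviation
    lam_n = lam0 + tau * n            # posterior precision
    total = 0.0
    for _ in range(sims):
        ny_bar = sum(rng.gauss(0.0, sigma) for _ in range(n))  # n * y_bar
        total += 0.5 * math.log(lam0 / lam_n) + 0.5 * (tau * ny_bar) ** 2 / lam_n
    return total / sims

lam0, tau, n = 1.0, 1.0, 10
theory = 0.5 * math.log(lam0 / (lam0 + tau * n)) + 0.5 * (tau * n) / (lam0 + tau * n)
```

The simulated average should agree with `theory` up to Monte Carlo error.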

Metadata: ID: P216 | shortcut: ugkv-lbfmean | author: JoramSoch | date: 2021-03-24, 10:03.