Index: The Book of Statistical Proofs ▷ Statistical Models ▷ Univariate normal data ▷ Univariate Gaussian with known variance ▷ One-sample z-test

Theorem: Let

\[\label{eq:ugkv} y_i \sim \mathcal{N}(\mu, \sigma^2), \quad i = 1, \ldots, n\]

be a univariate Gaussian data set with unknown mean $\mu$ and known variance $\sigma^2$. Then, the test statistic

\[\label{eq:z} z = \sqrt{n} \, \frac{\bar{y}-\mu_0}{\sigma}\]

with sample mean $\bar{y}$ follows a standard normal distribution

\[\label{eq:z-dist} z \sim \mathcal{N}(0, 1)\]

under the null hypothesis

\[\label{eq:ztest1-h0} H_0: \; \mu = \mu_0 \; .\]
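As an illustrative sketch (not part of the theorem), the test statistic in \eqref{eq:z} can be computed as follows; NumPy, the seed, the sample size and the parameter values are assumptions for demonstration only.

```python
# Minimal sketch of the one-sample z-statistic, assuming NumPy is available.
import numpy as np

def z_statistic(y, mu_0, sigma):
    """z = sqrt(n) * (ybar - mu_0) / sigma, for known standard deviation sigma."""
    y = np.asarray(y, dtype=float)
    n = y.size
    return np.sqrt(n) * (y.mean() - mu_0) / sigma

# Hypothetical data generated under H0 (mu = mu_0 = 0, sigma = 1)
rng = np.random.default_rng(0)
y = rng.normal(loc=0.0, scale=1.0, size=50)
print(z_statistic(y, mu_0=0.0, sigma=1.0))
```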

Proof: The sample mean is given by

\[\label{eq:mean-samp} \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i \; .\]

Using the linear combination formula for normal random variables, the sample mean follows a normal distribution with the following parameters:

\[\label{eq:mean-samp-dist} \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i \sim \mathcal{N}\left( \frac{1}{n} n \mu, \left(\frac{1}{n}\right)^2 n \sigma^2 \right) = \mathcal{N}\left( \mu, \sigma^2/n \right) \; .\]
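A quick simulation sketch can make \eqref{eq:mean-samp-dist} concrete: the empirical mean and variance of $\bar{y}$ across many repetitions should approach $\mu$ and $\sigma^2/n$. The parameter values and seed below are assumptions chosen only for illustration.

```python
# Simulation check of ybar ~ N(mu, sigma^2/n), assuming NumPy is available.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 2.0, 3.0, 25, 100_000
ybar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print(ybar.mean(), mu)                # empirical mean of ybar, should be close to 2.0
print(ybar.var(ddof=1), sigma**2 / n) # empirical variance, should be close to 0.36
```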

Again employing the linear combination theorem, the distribution of $z = \sqrt{n/\sigma^2} (\bar{y}-\mu_0)$ becomes

\[\label{eq:z-dist-s1} z = \sqrt{\frac{n}{\sigma^2}} (\bar{y} - \mu_0) \sim \mathcal{N}\left( \sqrt{\frac{n}{\sigma^2}} (\mu - \mu_0), \left(\sqrt{\frac{n}{\sigma^2}}\right)^2 \frac{\sigma^2}{n} \right) = \mathcal{N}\left( \sqrt{n} \, \frac{\mu-\mu_0}{\sigma}, 1 \right) \; ,\]

such that, under the null hypothesis in \eqref{eq:ztest1-h0}, we have:

\[\label{eq:z-dist-s2} z \sim \mathcal{N}(0, 1), \quad \text{if } \mu = \mu_0 \; .\]
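The result in \eqref{eq:z-dist-s2} can likewise be checked by simulation: when data are generated with $\mu = \mu_0$, the resulting $z$-values should be approximately standard normal. This is a sketch under assumed parameter values, not part of the proof; NumPy and SciPy are assumed to be available.

```python
# Simulation check that z ~ N(0,1) under H0, assuming NumPy and SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu_0, sigma, n, reps = 2.0, 3.0, 25, 100_000
y = rng.normal(mu_0, sigma, size=(reps, n))      # data generated with mu = mu_0
z = np.sqrt(n) * (y.mean(axis=1) - mu_0) / sigma
print(z.mean(), z.std(ddof=1))                   # should be close to 0 and 1
print(stats.kstest(z, 'norm'))                   # KS test against N(0,1)
```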

This means that the null hypothesis can be rejected when $z$ is as extreme as or more extreme than the critical value obtained from the standard normal distribution at significance level $\alpha$.
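As a sketch of this rejection rule for a two-sided test, the critical value is the $(1-\alpha/2)$-quantile of the standard normal distribution; the values of $\alpha$ and the observed $z$ below are assumptions for illustration, and SciPy is assumed to be available.

```python
# Rejection decision and p-value for a two-sided one-sample z-test, assuming SciPy.
from scipy import stats

alpha = 0.05
z_obs = 2.31                                    # hypothetical observed test statistic
z_crit = stats.norm.ppf(1 - alpha / 2)          # two-sided critical value, approx. 1.96
p_value = 2 * (1 - stats.norm.cdf(abs(z_obs)))  # two-sided p-value
print(z_crit, p_value, abs(z_obs) >= z_crit)    # True means H0 is rejected at level alpha
```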

Sources:

Metadata: ID: P208 | shortcut: ugkv-ztest1 | author: JoramSoch | date: 2021-03-24, 04:23.