Index: The Book of Statistical Proofs ▷ Statistical Models ▷ Univariate normal data ▷ Multiple linear regression ▷ t-contrast

Definition: Consider a linear regression model with $n \times p$ design matrix $X$ and $p \times 1$ regression coefficients $\beta$:

\[\label{eq:mlr} y = X\beta + \varepsilon, \; \varepsilon \sim \mathcal{N}(0, \sigma^2 V) \; .\]

Then, a t-contrast is specified by a $p \times 1$ contrast vector $c$, and it entails the null hypothesis that the linear combination of the regression coefficients weighted by this vector is zero:

\[\label{eq:mlr-t-h0} H_0: \; c^\mathrm{T} \beta = 0 \; .\]

Consequently, the alternative hypothesis of a two-sided t-test is

\[\label{eq:mlr-t-h1} H_1: \; c^\mathrm{T} \beta \neq 0\]

and the alternative hypothesis of a one-sided t-test is

\[\label{eq:mlr-t-h1lr} H_1: \; c^\mathrm{T} \beta < 0 \quad \text{or} \quad H_1: \; c^\mathrm{T} \beta > 0 \; .\]

Here, $c$ is called the “contrast vector” and $c^\mathrm{T} \beta$ is called the “contrast value”. With estimated regression coefficients $\hat{\beta}$, $c^\mathrm{T} \hat{\beta}$ is called the “estimated contrast value”.
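For concreteness, the following is a minimal numerical sketch, not taken from the source, of computing an estimated contrast value $c^\mathrm{T} \hat{\beta}$ with NumPy; all variable names and the simulated data are hypothetical. With $p = 3$, the contrast vector $c = (0, 1, -1)^\mathrm{T}$ encodes the null hypothesis $H_0\colon \beta_2 = \beta_3$, and $\hat{\beta}$ is obtained via generalized least squares under the error covariance structure $\sigma^2 V$ from the model above.

```python
# Minimal sketch (hypothetical example, not from the source):
# estimated contrast value c^T beta_hat in y = X beta + e, e ~ N(0, s^2 V).
import numpy as np

rng = np.random.default_rng(1)

# simulated example data: n observations, p regressors (intercept + 2 covariates)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # n x p design matrix
beta_true = np.array([2.0, 0.5, 0.5])                           # true coefficients
V = np.eye(n)                                                   # covariance structure (here: i.i.d. errors)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), 1.0 * V)

# contrast vector c, encoding H0: c^T beta = 0, i.e. beta_2 = beta_3
c = np.array([0.0, 1.0, -1.0])

# generalized least squares estimate: beta_hat = (X^T V^-1 X)^-1 X^T V^-1 y
P = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)

# estimated contrast value c^T beta_hat
con_est = c @ beta_hat
print(f"estimated contrast value: {con_est:.4f}")
```

Under this simulation the estimated contrast value should be close to zero, since the null hypothesis $\beta_2 = \beta_3$ holds for the chosen true coefficients; testing it formally would additionally require the sampling distribution of $c^\mathrm{T} \hat{\beta}$, which is treated in the corresponding proofs of the book.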

 
Sources:

Metadata: ID: D185 | shortcut: tcon | author: JoramSoch | date: 2022-12-16, 12:35.