Index: The Book of Statistical Proofs ▷ Statistical Models ▷ Multivariate normal data ▷ Inverse general linear model ▷ Derivation of the distribution

Theorem: Let there be a general linear model of $Y \in \mathbb{R}^{n \times v}$, with design matrix $X \in \mathbb{R}^{n \times p}$, regression coefficients $B \in \mathbb{R}^{p \times v}$, covariance across rows $V \in \mathbb{R}^{n \times n}$ and covariance across columns $\Sigma \in \mathbb{R}^{v \times v}$:

\[\label{eq:glm} Y = X B + E, \; E \sim \mathcal{MN}(0, V, \Sigma) \; .\]

Then, the inverse general linear model of $X \in \mathbb{R}^{n \times p}$ is given by

\[\label{eq:iglm} X = Y W + N, \; N \sim \mathcal{MN}(0, V, \Sigma_x)\]

where $W \in \mathbb{R}^{v \times p}$ is a matrix such that $B \, W = I_p$, and where the covariance across columns is $\Sigma_x = W^\mathrm{T} \Sigma W$.
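
Note that $W$ is a right inverse of $B$ and, for $p < v$, not unique. One explicit choice, assuming $\mathrm{rk}(B) = p$ as required below (this particular construction is a standard fact, not given in the source), is the Moore-Penrose right inverse

\[ W = B^\mathrm{T} \left( B B^\mathrm{T} \right)^{-1}, \quad \text{so that} \quad B \, W = B B^\mathrm{T} \left( B B^\mathrm{T} \right)^{-1} = I_p \; , \]

where $B B^\mathrm{T} \in \mathbb{R}^{p \times p}$ is invertible precisely because $\mathrm{rk}(B) = p$.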

Proof: The linear transformation theorem for the matrix-normal distribution states (here, $X$, $Y$, $A$, $B$, $C$, $M$, $U$ and $V$ are generic symbols, not the quantities of the model above):

\[\label{eq:matn-ltt} X \sim \mathcal{MN}(M, U, V) \quad \Rightarrow \quad Y = AXB + C \sim \mathcal{MN}(AMB+C, AUA^\mathrm{T}, B^\mathrm{T}VB) \; .\]

The matrix $W$ exists if the rows of $B \in \mathbb{R}^{p \times v}$ are linearly independent, i.e. if $\mathrm{rk}(B) = p$, which requires $p \leq v$. Right-multiplying the model \eqref{eq:glm} by $W$ and applying \eqref{eq:matn-ltt} then yields

\[\label{eq:iglm-s1} Y W = X B W + E W, \; E W \sim \mathcal{MN}(0, V, W^\mathrm{T} \Sigma W) \; .\]
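
In detail, the distributional statement for $E W$ follows from applying \eqref{eq:matn-ltt} to $E \sim \mathcal{MN}(0, V, \Sigma)$ with $A = I_n$, post-multiplying matrix $W$ and $C = 0$:

\[ E W = I_n \, E \, W + 0 \sim \mathcal{MN}\left( I_n \, 0 \, W, \; I_n V I_n^\mathrm{T}, \; W^\mathrm{T} \Sigma W \right) = \mathcal{MN}(0, V, W^\mathrm{T} \Sigma W) \; . \]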

Employing $B \, W = I_p$ and rearranging, we have

\[\label{eq:iglm-s2} X = Y W - E W, \; E W \sim \mathcal{MN}(0, V, W^\mathrm{T} \Sigma W) \; .\]

Substituting $N = - E W$ and observing that $-E W = (-I_n)(E W)$ follows the same distribution as $E W$ by \eqref{eq:matn-ltt}, since $(-I_n) \, 0 = 0$ and $(-I_n) V (-I_n)^\mathrm{T} = V$, we get

\[\label{eq:iglm-s3} X = Y W + N, \; N \sim \mathcal{MN}(0, V, W^\mathrm{T} \Sigma W)\]

which is equivalent to \eqref{eq:iglm}.
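
As a sanity check, the result lends itself to simulation. The following sketch (not part of the source; it assumes NumPy, sets $V = I_n$ so that the rows of $N$ are i.i.d., and uses illustrative dimensions and variable names) draws $E \sim \mathcal{MN}(0, I_n, \Sigma)$, forms $Y = X B + E$, computes $N = X - Y W$ and compares the empirical column covariance of $N$ against $\Sigma_x = W^\mathrm{T} \Sigma W$:

import numpy as np

rng = np.random.default_rng(0)
n, p, v = 10000, 2, 5                      # rows, predictors, outcome variables

B = rng.standard_normal((p, v))            # coefficient matrix, rk(B) = p a.s.
W = B.T @ np.linalg.inv(B @ B.T)           # Moore-Penrose right inverse: B W = I_p
Sigma = np.diag(rng.uniform(0.5, 2.0, v))  # covariance across columns (v x v)
L = np.linalg.cholesky(Sigma)              # lower-triangular L with L L' = Sigma

X = rng.standard_normal((n, p))            # design matrix
E = rng.standard_normal((n, v)) @ L.T      # E ~ MN(0, I_n, Sigma)
Y = X @ B + E                              # general linear model

N = X - Y @ W                              # noise of the inverse model
print(np.allclose(N, -E @ W))              # True: N = -E W, up to float error

Sigma_x = W.T @ Sigma @ W                  # theoretical covariance across columns
Sigma_x_hat = N.T @ N / n                  # empirical covariance (valid for V = I_n)
print(np.round(Sigma_x, 3))
print(np.round(Sigma_x_hat, 3))            # should be close to Sigma_x

As $n$ grows, the empirical estimate $\hat{\Sigma}_x$ converges to $\Sigma_x$ by the law of large numbers.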


Metadata: ID: P267 | shortcut: iglm-dist | author: JoramSoch | date: 2021-10-21, 16:03.