Proof: Redundancy of parameters describing the matrix-normal distribution
Theorem: The covariance parameters of the matrix-normal distribution are redundant up to a scalar factor, i.e. the two probability distributions
\[\label{eq:matn-red} \begin{split} X &\sim \mathcal{MN}(M, U, V) \\ X &\sim \mathcal{MN}\left( M, a \cdot U, \frac{1}{a} \cdot V \right) \end{split}\]are equivalent for any $a \in \mathbb{R}$ with $a > 0$ where $X \in \mathbb{R}^{n \times p}$ is a random matrix and $U \in \mathbb{R}^{n \times n}$ and $V \in \mathbb{R}^{p \times p}$ are positive-definite matrices.
Proof: Since $U$ and $V$ must be positive-definite in the matrix-normal distribution, the scalar $a$ must be greater than zero. A random matrix follows a matrix-normal distribution if and only if its vectorization is multivariate normally distributed
\[\label{eq:matn-mvn} X \sim \mathcal{MN}(M, U, V) \quad \Leftrightarrow \quad \mathrm{vec}(X) \sim \mathcal{N}(\mathrm{vec}(M), V \otimes U)\]where $\mathrm{vec}(\cdot)$ denotes column-wise vectorization and $\otimes$ denotes the Kronecker product. Thus, the second distribution in \eqref{eq:matn-red} is equivalent to
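As a sanity check (not part of the proof), the vectorization identity $\mathrm{Cov}(\mathrm{vec}(X)) = V \otimes U$ can be verified by Monte Carlo simulation. The scale matrices, seed, and sample size below are illustrative assumptions; samples are drawn via $X = M + A Z B^{\mathrm{T}}$ with $U = A A^{\mathrm{T}}$, $V = B B^{\mathrm{T}}$ and $Z$ having i.i.d. standard normal entries:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2, 3
M = np.zeros((n, p))
# hypothetical positive-definite scale matrices (illustrative choices)
U = np.array([[2.0, 0.5], [0.5, 1.0]])
V = np.array([[1.0, 0.3, 0.0], [0.3, 2.0, 0.4], [0.0, 0.4, 1.5]])
A = np.linalg.cholesky(U)   # U = A A^T
B = np.linalg.cholesky(V)   # V = B B^T

# draw matrix-normal samples X = M + A Z B^T, Z ~ i.i.d. N(0,1)
N = 100_000
Z = rng.standard_normal((N, n, p))
X = M + A @ Z @ B.T

# column-wise vectorization: rows of X^T concatenated equal vec(X)
vecX = X.transpose(0, 2, 1).reshape(N, n * p)
emp_cov = np.cov(vecX, rowvar=False)

# empirical covariance of vec(X) should approximate V kron U
print(np.abs(emp_cov - np.kron(V, U)).max())
```

The printed maximum deviation shrinks as the sample size grows, consistent with $\mathrm{vec}(X) \sim \mathcal{N}(\mathrm{vec}(M), V \otimes U)$.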
\[\label{eq:matn-red-qed} \begin{split} X \sim \mathcal{MN}\left( M, a \cdot U, \frac{1}{a} \cdot V \right) \quad \Leftrightarrow \quad \mathrm{vec}(X) &\sim \mathcal{N}\left( \mathrm{vec}(M), \frac{1}{a} V \otimes a U \right) \\ &\sim \mathcal{N}\left( \mathrm{vec}(M), \frac{1}{a} \left( V \otimes a U \right) \right) \\ &\sim \mathcal{N}\left( \mathrm{vec}(M), \frac{a}{a} \left( V \otimes U \right) \right) \\ &\sim \mathcal{N}\left( \mathrm{vec}(M), V \otimes U \right) \end{split}\]where the scalar factors are pulled out of the Kronecker product using $(a A) \otimes (b B) = a b \left( A \otimes B \right)$, which proves the equivalence of the two distributions in \eqref{eq:matn-red}.
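The key step, $\frac{1}{a} V \otimes a U = V \otimes U$, can also be confirmed numerically. The matrices and the value of $a$ below are arbitrary illustrative choices:

```python
import numpy as np

a = 3.7  # arbitrary positive scalar
U = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive-definite
V = np.array([[1.0, 0.3], [0.3, 2.0]])   # positive-definite

# (1/a) V kron (a U) equals V kron U, since scalars factor out of kron
lhs = np.kron(V / a, a * U)
rhs = np.kron(V, U)
print(np.allclose(lhs, rhs))  # True
```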
- Glanz, Hunter; Carvalho, Luis (2013): "An Expectation-Maximization Algorithm for the Matrix Normal Distribution"; in: arXiv stat.ME, sect. 2.1, p. 3; URL: https://arxiv.org/abs/1309.6609; DOI: 10.48550/arXiv.1309.6609.
Metadata: ID: P505 | shortcut: matn-red | author: JoramSoch | date: 2025-06-24, 11:55.