Gamma distribution ▷ Differential entropy

Theorem: Let $X$ be a random variable following a gamma distribution:

\[\label{eq:gam} X \sim \mathrm{Gam}(a, b)\]

Then, the differential entropy of $X$ in nats is

\[\label{eq:gam-dent} \mathrm{h}(X) = a + \ln \Gamma(a) + (1-a) \cdot \psi(a) - \ln b \; .\]
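As a quick numerical sanity check (not part of the proof itself), this closed form can be compared against SciPy's built-in entropy for the gamma distribution. A minimal sketch with arbitrarily chosen parameters; note that `scipy.stats.gamma` is parametrized by shape and scale, so the rate $b$ enters as scale $= 1/b$:

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln, digamma

a, b = 2.5, 1.7  # arbitrarily chosen shape (a) and rate (b)

# closed-form differential entropy in nats, as stated in the theorem
h_closed = a + gammaln(a) + (1 - a) * digamma(a) - np.log(b)

# SciPy uses a shape/scale parametrization, so scale = 1/b
h_scipy = stats.gamma(a, scale=1/b).entropy()

assert np.isclose(h_closed, h_scipy)
```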

Proof: The differential entropy of a random variable is defined as

\[\label{eq:dent} \mathrm{h}(X) = - \int_{\mathcal{X}} p(x) \, \log_c p(x) \, \mathrm{d}x \; ,\]

where $c$ denotes the base of the logarithm (written as $c$ here, to avoid confusion with the rate parameter $b$). To measure $\mathrm{h}(X)$ in nats, we set $c = e$, such that

\[\label{eq:dent-nats} \mathrm{h}(X) = - \mathrm{E}\left[ \ln p(x) \right] \; .\]
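This defining expectation can also be approximated by Monte Carlo: draw samples from the distribution and average the negative log-density. Below is a minimal sketch, assuming the same illustrative parameters as above (NumPy's gamma sampler likewise takes shape and scale $= 1/b$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b = 2.5, 1.7  # illustrative shape and rate

# h(X) = -E[ln p(X)], estimated by a sample average of the negative log-density
x = rng.gamma(shape=a, scale=1/b, size=1_000_000)
h_mc = -np.mean(stats.gamma.logpdf(x, a, scale=1/b))
print(h_mc)  # approx. 1.1993 nats for these parameters
```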

With the probability density function of the gamma distribution, the differential entropy of $X$ is:

\[\label{eq:gam-dent-s1} \begin{split} \mathrm{h}(X) &= - \mathrm{E}\left[ \ln \left( \frac{b^a}{\Gamma(a)} x^{a-1} \exp[-b x] \right) \right] \\ &= - \mathrm{E}\left[ a \cdot \ln b - \ln \Gamma(a) + (a-1) \ln x - b x \right] \\ &= - a \cdot \ln b + \ln \Gamma(a) - (a-1) \cdot \mathrm{E}(\ln x) + b \cdot \mathrm{E}(x) \; . \end{split}\]
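Each term of this decomposition can be checked by replacing the expectations with sample means; a sketch under the same illustrative parameters:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)
a, b = 2.5, 1.7
x = rng.gamma(shape=a, scale=1/b, size=1_000_000)

# -a*ln(b) + ln Gamma(a) - (a-1)*E(ln x) + b*E(x), expectations as sample means
h_terms = -a * np.log(b) + gammaln(a) - (a - 1) * np.mean(np.log(x)) + b * np.mean(x)
print(h_terms)  # agrees with the Monte Carlo estimate above
```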

The mean and logarithmic expectation of the gamma distribution are given by

\[\label{eq:gam-mean-logmean} X \sim \mathrm{Gam}(a, b) \quad \Rightarrow \quad \mathrm{E}(X) = \frac{a}{b} \quad \text{and} \quad \mathrm{E}(\ln X) = \psi(a) - \ln b \; .\]
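Both moments are easy to confirm numerically, e.g. with scipy.special.digamma for $\psi(a)$; a sketch:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
a, b = 2.5, 1.7
x = rng.gamma(shape=a, scale=1/b, size=1_000_000)

print(np.mean(x), a / b)                           # E(X)    = a/b
print(np.mean(np.log(x)), digamma(a) - np.log(b))  # E(ln X) = psi(a) - ln(b)
```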

Using these moments, the differential entropy of $X$ becomes:

\[\label{eq:gam-dent-s2} \begin{split} \mathrm{h}(X) &= - a \cdot \ln b + \ln \Gamma(a) - (a-1) \cdot (\psi(a) - \ln b) + b \cdot \frac{a}{b} \\ &= - a \cdot \ln b + \ln \Gamma(a) + (1-a) \cdot \psi(a) + a \cdot \ln b - \ln b + a \\ &= a + \ln \Gamma(a) + (1-a) \cdot \psi(a) - \ln b \; . \end{split}\]
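The algebraic simplification in the last two steps can be verified symbolically, for instance with SymPy, where loggamma and polygamma(0, a) stand in for $\ln \Gamma(a)$ and $\psi(a)$; a sketch:

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
psi = sp.polygamma(0, a)  # digamma function psi(a)

# right-hand side after substituting E(x) and E(ln x) (first line above)
before = -a*sp.log(b) + sp.loggamma(a) - (a - 1)*(psi - sp.log(b)) + b*(a/b)
# final simplified form (last line above)
after = a + sp.loggamma(a) + (1 - a)*psi - sp.log(b)

print(sp.simplify(before - after))  # prints 0
```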

Metadata: ID: P239 | shortcut: gam-dent | author: JoramSoch | date: 2021-07-14, 07:37.