Index: The Book of Statistical Proofs ▷ Probability Distributions ▷ Univariate discrete distributions ▷ Bernoulli distribution ▷ Shannon entropy

Theorem: Let $X$ be a random variable following a Bernoulli distribution:

\[\label{eq:bern} X \sim \mathrm{Bern}(p) \; .\]

Then, the (Shannon) entropy of $X$ in bits is

\[\label{eq:bern-ent} \mathrm{H}(X) = -p \log_2 p - (1-p) \log_2 (1-p) \; .\]
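
For instance (a worked numerical case, not part of the original theorem statement), a fair coin with $p = \tfrac{1}{2}$ attains the maximum of one bit:

\[\mathrm{H}(X) = -\tfrac{1}{2} \log_2 \tfrac{1}{2} - \tfrac{1}{2} \log_2 \tfrac{1}{2} = \tfrac{1}{2} + \tfrac{1}{2} = 1 \; \text{bit} \; .\]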

Proof: The entropy is defined as the negative probability-weighted average of the logarithms of the probabilities over all possible values:

\[\label{eq:ent} \mathrm{H}(X) = - \sum_{x \in \mathcal{X}} p(x) \cdot \log_b p(x) \; .\]

Setting $b = 2$ gives the entropy in bits. A Bernoulli random variable has only two possible outcomes, with $\mathrm{Pr}(X = 1) = p$ and $\mathrm{Pr}(X = 0) = 1-p$, so we have:

\[\label{eq:bern-ent-qed} \begin{split} \mathrm{H}(X) &= - \mathrm{Pr}(X = 1) \cdot \log_2 \mathrm{Pr}(X = 1) - \mathrm{Pr}(X = 0) \cdot \log_2 \mathrm{Pr}(X = 0) \\ &= -p \log_2 p - (1-p) \log_2 (1-p) \; . \\ \end{split}\]
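
As a quick numerical check (not part of the original proof), the following Python sketch compares the closed-form expression from the theorem with the general definition of entropy applied to the Bernoulli probability mass function; the function names are illustrative.

```python
import numpy as np

def bernoulli_entropy_bits(p):
    """Shannon entropy of Bern(p) in bits, via the closed-form expression."""
    if p in (0.0, 1.0):  # by convention, 0 * log2(0) = 0
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def entropy_from_pmf_bits(pmf):
    """General definition: H(X) = -sum_x p(x) * log2 p(x), skipping zero-probability values."""
    pmf = np.asarray(pmf, dtype=float)
    nz = pmf > 0
    return float(-np.sum(pmf[nz] * np.log2(pmf[nz])))

for p in (0.1, 0.25, 0.5, 0.9):
    closed_form = bernoulli_entropy_bits(p)
    definition = entropy_from_pmf_bits([1 - p, p])  # Pr(X = 0), Pr(X = 1)
    assert np.isclose(closed_form, definition)
    print(f"p = {p:4.2f}  H(X) = {closed_form:.4f} bits")
```

The loop confirms that the closed-form expression and the definition-based sum agree, with the maximum of 1 bit reached at $p = 0.5$.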
Sources:

Metadata: ID: P334 | shortcut: bern-ent | author: JoramSoch | date: 2022-09-02, 12:21.