A note on Fano's inequality
Fano's inequality is a sharp upper bound on conditional entropy in terms of the probability of error. It plays a fundamental role in the proofs of the converse parts of coding theorems in information theory. The standard textbook proof of Fano's inequality relies on properties of Shannon's information measures, together with the trick of introducing an auxiliary random variable. In this brief, it is observed that the generic ℒ1 bound on entropy directly yields an upper bound on conditional entropy in terms of the probability of error. This generic ℒ1 bound on conditional entropy remains as effective as the Fano bound for applications in converse proofs. Compared with the generic ℒ1 bound, Fano's inequality can be regarded as a specific bound on conditional entropy that exploits the structural property of the two joint probability distributions involved. This viewpoint motivates the search for an identity connecting the conditional entropy and the probability of error. As a corollary, a necessary and sufficient condition for the tightness of the Fano bound is also obtained. © 2011 IEEE.
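Fano's inequality states that for an estimate X̂ of X formed from Y, with error probability Pe = Pr(X̂ ≠ X), the conditional entropy satisfies H(X|Y) ≤ h(Pe) + Pe·log(|𝒳| − 1), where h(·) is the binary entropy function. The following sketch checks this numerically; the joint distribution and the estimator X̂ = Y are illustrative assumptions, not taken from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution p(x, y) over X = Y = {0, 1, 2},
# chosen only to illustrate the bound.
joint = {
    (0, 0): 0.30, (0, 1): 0.05, (0, 2): 0.05,
    (1, 0): 0.05, (1, 1): 0.25, (1, 2): 0.05,
    (2, 0): 0.05, (2, 1): 0.05, (2, 2): 0.15,
}
xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

# Marginal p(y) and conditional entropy H(X|Y) = sum_y p(y) H(X | Y=y).
p_y = {y: sum(joint[(x, y)] for x in xs) for y in ys}
h_x_given_y = sum(
    p_y[y] * entropy([joint[(x, y)] / p_y[y] for x in xs]) for y in ys
)

# Probability of error for the (assumed) estimator Xhat = Y.
p_error = 1.0 - sum(joint[(x, x)] for x in xs)

# Fano bound: H(X|Y) <= h(Pe) + Pe * log2(|X| - 1).
fano_bound = entropy([p_error, 1.0 - p_error]) \
    + p_error * math.log2(len(xs) - 1)

print(f"H(X|Y) = {h_x_given_y:.4f} bits, Fano bound = {fano_bound:.4f} bits")
assert h_x_given_y <= fano_bound
```

Fano's bound holds for any estimator X̂(Y), so the simple choice X̂ = Y above already witnesses the inequality; tighter agreement between the two sides is what the paper's tightness condition characterizes.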
Publication Source (Journal or Book title)
2011 45th Annual Conference on Information Sciences and Systems, CISS 2011
Liang, X. (2011). A note on Fano's inequality. 2011 45th Annual Conference on Information Sciences and Systems, CISS 2011. https://doi.org/10.1109/CISS.2011.5766186