
Four families of measures of entropy. (English) Zbl 0589.62007

The following four new parametric measures of entropy for a probability distribution \(P=(p_1,p_2,\ldots,p_n)\) are proposed: \[ H_a(P) = -\sum^{n}_{i=1}p_i \ln p_i + a^{-1}\sum^{n}_{i=1}[(1+ap_i)\ln(1+ap_i) - ap_i], \]
\[ H_b(P) = -\sum^{n}_{i=1}p_i \ln p_i + b^{-1}\sum^{n}_{i=1}(1+bp_i)\ln(1+bp_i) - b^{-1}(1+b)\ln(1+b), \]
\[ H_c(P) = -\sum^{n}_{i=1}p_i \ln p_i + c^{-2}\sum^{n}_{i=1}[(1+cp_i)\ln(1+cp_i) - cp_i], \]
\[ H_d(P) = -\sum^{n}_{i=1}p_i \ln p_i + d^{-2}\sum^{n}_{i=1}(1+dp_i)\ln(1+dp_i) - d^{-2}(1+d)\ln(1+d), \] where \(a\), \(b\), \(c\), \(d\) are real numbers greater than \(-1\). Each of them is a continuous concave function of \(p_1,p_2,\ldots,p_n\) which attains its maximum when all the probabilities are equal and its minimum when one of the probabilities is unity and all the others are zero.
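As a quick numerical illustration (not part of the review; the function names, the sample parameter value \(0.5\), and the test distributions are chosen here for the sketch), the definitions above translate directly into code, and the final check reflects the maximality of the uniform distribution:

```python
import numpy as np

def shannon(p):
    """Shannon entropy -sum p_i ln p_i (entries of p must be strictly positive)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

def H_a(p, a):
    """H_a(P) as defined above; assumes a > -1 and a != 0 (a -> 0 recovers Shannon)."""
    p = np.asarray(p, dtype=float)
    return shannon(p) + np.sum((1 + a * p) * np.log(1 + a * p) - a * p) / a

def H_b(p, b):
    p = np.asarray(p, dtype=float)
    return (shannon(p) + np.sum((1 + b * p) * np.log(1 + b * p)) / b
            - (1 + b) * np.log(1 + b) / b)

def H_c(p, c):
    p = np.asarray(p, dtype=float)
    return shannon(p) + np.sum((1 + c * p) * np.log(1 + c * p) - c * p) / c**2

def H_d(p, d):
    p = np.asarray(p, dtype=float)
    return (shannon(p) + np.sum((1 + d * p) * np.log(1 + d * p)) / d**2
            - (1 + d) * np.log(1 + d) / d**2)

# For fixed n, the uniform distribution should give the largest value.
uniform = np.full(4, 0.25)
skewed = np.array([0.7, 0.1, 0.1, 0.1])
for H in (H_a, H_b, H_c, H_d):
    assert H(uniform, 0.5) > H(skewed, 0.5)
```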
Each includes Shannon’s measure as a limiting case. Unlike Shannon’s measure, however, they are non-additive; in return, the presence of the parameters gives them greater flexibility in applications.
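A short expansion (standard, not spelled out in the review) makes the limit explicit for \(H_a\): since \((1+ap_i)\ln(1+ap_i) - ap_i = \tfrac{1}{2}a^2p_i^2 + O(a^3)\) for small \(a\), \[ a^{-1}\sum^{n}_{i=1}[(1+ap_i)\ln(1+ap_i) - ap_i] = \frac{a}{2}\sum^{n}_{i=1}p_i^2 + O(a^2) \longrightarrow 0 \quad (a\to 0), \] so \(H_a(P)\to -\sum^{n}_{i=1}p_i\ln p_i\). The same expansion gives the Shannon limit of \(H_b\) as \(b\to 0\), while for \(H_c\) and \(H_d\) the correction terms vanish as the parameter grows large, so their Shannon limit is approached as \(c,d\to\infty\).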
The variations of these entropies with the parameters are also studied. Their main advantage is that, when they are maximized subject to linear constraints by Lagrange’s method in accordance with the maximum entropy principle, they automatically yield non-negative probabilities.
In particular, when \(H_a(P)\) is maximized subject to an average energy constraint, it gives rise to the Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein distributions, as well as to a new category of intermediate statistical distributions, as sketched below.
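To indicate how these arise (a sketch; the energy levels \(E_i\), mean energy \(\bar E\), and multipliers \(\lambda\), \(\mu\) are notation introduced here, not taken from the review): maximizing \(H_a\) subject to \(\sum_{i}p_i = 1\) and \(\sum_{i}p_iE_i = \bar E\), the stationarity condition \(\partial H_a/\partial p_i = \lambda + \mu E_i\) becomes \[ \ln\frac{1+ap_i}{p_i} - 1 = \lambda + \mu E_i, \qquad\text{i.e.}\qquad p_i = \frac{1}{e^{1+\lambda+\mu E_i} - a}. \] This is the Maxwell-Boltzmann form in the limit \(a\to 0\), the Bose-Einstein form \(1/(e^x-1)\) at \(a=1\), and the Fermi-Dirac form \(1/(e^x+1)\) in the limit \(a\to -1\); values of \(a\) strictly between these extremes give the intermediate statistics, and the functional form is consistent with the automatic non-negativity noted above.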

MSC:

62B10 Statistical aspects of information-theoretic topics
94A17 Measures of information, entropy