
Bayesian Estimations of Shannon Entropy and Rényi Entropy of Inverse Weibull Distribution


Submitted: 12 May 2023. Posted: 15 May 2023.


Abstract
In this paper, under the symmetric entropy and the scale squared error loss functions, we consider the maximum likelihood (ML) estimation and the Bayesian estimation of the Shannon entropy and Rényi entropy of the two-parameter inverse Weibull distribution. In the ML estimation, the bisection method is used to solve the likelihood equation, and approximate confidence intervals are obtained by the Delta method. Because the Bayesian estimators take complicated forms, the Lindley approximation is used to compute them numerically. Finally, Monte Carlo simulations and a real data set are used to illustrate the derived results. Comparing the mean squared errors between the estimates and the true values shows that the ML estimation of the Shannon entropy performs better than the Bayesian estimation, while there is no significant difference between the performance of the ML and Bayesian estimations of the Rényi entropy.

1. Introduction

Information is an abstract concept. Faced with a large amount of data, it is easy to know how much data there is, but not how much information the data contain. Entropy is one of the important terms in physics. Shannon [1] introduced the concept of entropy into statistics, where it represents the uncertainty of events; this entropy is generally called "Shannon entropy". Generally speaking, when we hear a message that is within expectation, we think it contains little information, whereas when we hear an unexpected message, we think the amount of information it conveys is huge. In statistics, probability is usually used to describe the uncertainty of an event, and Shannon therefore argued that probability can be used to describe the amount of information contained in an event. After that, Alfréd Rényi [2] generalized Shannon entropy and put forward the concept of Rényi entropy. Since then, the study of entropy has attracted a lot of attention [3,4]. For example, Chacko and Asha [5] considered the maximum likelihood (ML) estimation and Bayesian estimation of the Shannon entropy of the generalized exponential distribution by the importance sampling method based on record values. Liu and Gui [6] considered the ML estimation and Bayesian estimation of the Shannon entropy of the two-parameter Lomax distribution by the Lindley method and the Tierney-Kadane method under a generalized progressively hybrid censoring test. Shrahili et al. [7] considered the estimation of the entropy of the Log-Logistic distribution; the estimates of different entropy functions are obtained by the ML method, and approximate confidence intervals are obtained using various censoring schemes and sample sizes. Mahmoud et al. [8] considered the estimation of the entropy and residual entropy of the two-parameter Lomax distribution based on the generalized type-II hybrid censoring scheme; the ML and Bayesian estimators of the entropy and residual entropy are obtained, and a simulation study of the estimation performance under different sample sizes is described. Hassan and Mazen [9] estimated three entropy measures, namely the Shannon entropy, Rényi entropy, and q-entropy, for the inverse Weibull distribution using progressively type-II censored data; the methods of maximum likelihood and maximum product of spacing are used to estimate them. Mavis et al. [10] proposed and studied the gamma-inverse Weibull distribution, and some mathematical properties were given, including moments, mean deviations, Bonferroni and Lorenz curves, and entropies. Basheer [11] introduced a new generalized alpha power inverse Weibull distribution, and its Shannon entropy and Rényi entropy were obtained. Valeriia and Broderick [12] proposed the weighted inverse Weibull class of distributions and derived expressions for its Shannon entropy and Rényi entropy.
In 1982, Keller and Kamath [13] introduced the inverse Weibull distribution (IWD) to model the degradation of mechanical components of diesel engines. It is a useful lifetime probability distribution that can represent various failure characteristics: depending on the value of the shape parameter of the IWD, the risk function can change flexibly, so using the IWD for data fitting is more appropriate in many cases. For example, Abhijit and Anindya [14] found that the IWD was superior to previous normal models when measuring concrete structures using ultrasonic pulse velocities. Chiodo et al. [15] proposed a new model generated from an appropriate mixture of IWDs for modeling extreme wind speed scenarios. Langlands et al. [16] observed that breast cancer mortality data could be analyzed using the IWD. That is why the two-parameter IWD has attracted more and more attention and discussion from researchers in recent years [17,18]. For example, Asuman and Mahmut [19] considered the classical and Bayesian estimation of the parameters and the reliability function of the IWD; in the classical estimation they derived the ML estimators and modified ML estimators, and in the Bayesian estimation they utilized the Lindley method to calculate the Bayesian estimators of the parameters under symmetric and asymmetric loss functions. Sultan et al. [20] discussed the estimation of the parameters of the IWD based on progressively type-II censored samples; they put forward an approximate maximum likelihood method to obtain the ML estimators and used the Lindley approximation to obtain the Bayesian estimators. Amirzadi et al. [21] considered the Bayesian estimation of the scale parameter and reliability of the inverse generalized Weibull distribution; in addition to the general entropy, squared log error, and weighted squared error loss functions, they introduced a new loss function to carry out the Bayesian estimation. Peng and Yan [22] studied the Bayesian estimation and prediction of the shape and scale parameters of the IWD under a general progressive censoring test. Sindhu et al. [23] assumed different priors and loss functions and discussed the Bayesian estimation of inverse Weibull mixture distributions based on doubly censored data. Mohammad and Sana [24] obtained the Bayes estimators and ML estimators of the unknown parameters of the IWD under lower record values. Faud [25] developed a linear exponential loss function and estimated the parameter and reliability of the IWD based on lower record values under this loss function. Li and Hao [26] considered the estimation of a stress-strength model when stress and strength are two independent IWDs with different parameters. Ismail and Tamimi [27] proposed a constant-stress partially accelerated life test model and analyzed it using type-I censored data from the IWD. Kang and Han [28] derived the approximate maximum likelihood estimators of the parameters of the IWD under multiply type-II censoring and also proposed a simple graphical method for a goodness-of-fit test. Saboori et al. [29] introduced the generalized modified inverse Weibull distribution and derived some of its statistical and probabilistic properties.
This paper considers the Bayesian estimation of the Shannon entropy and Rényi entropy of the two-parameter IWD based on complete samples. In Section 2, some related knowledge is introduced first, and then the specific expressions of the Shannon entropy and Rényi entropy of the two-parameter IWD are derived. In Section 3, the maximum likelihood estimators of the scale and shape parameters of the IWD are obtained by the bisection method, and then the ML estimators of the Shannon entropy and Rényi entropy are obtained. In Section 4, a gamma distribution is adopted as the prior distribution (PD) of the scale parameter and a non-informative PD is adopted for the shape parameter; the Bayesian estimators of the Shannon entropy and Rényi entropy are then obtained under the symmetric entropy loss function and the scale squared error loss function, and the Lindley approximation is used for their numerical calculation on account of the complexity of these Bayesian estimators. In Section 5, Monte Carlo simulations are utilized to simulate and compare the estimators mentioned above. In Section 6, a real data set is analyzed for illustrative purposes. Finally, the conclusions of the article are given in Section 7.

2. Preliminary Knowledge

The probability density function (pdf) of the two-parameter IWD is defined as Eq. (1)
$$f(t;\omega,\upsilon)=\omega\upsilon t^{-(\upsilon+1)}\exp\left(-\omega t^{-\upsilon}\right),\quad \omega>0,\ \upsilon>0,\ t>0, \tag{1}$$
and the cumulative distribution function (cdf) of the two-parameter IWD is defined as Eq. (2)
$$F(t;\omega,\upsilon)=\exp\left(-\omega t^{-\upsilon}\right),\quad \omega>0,\ \upsilon>0,\ t>0, \tag{2}$$
where $\omega$ is the scale parameter and $\upsilon$ is the shape parameter.
Figure 1 shows the pdf of IWD under different values of shape and scale parameters, respectively.
The Shannon entropy is defined in Eq. (3) [1]
$$H_s(t)=-\int_{-\infty}^{+\infty} f(t)\ln[f(t)]\,dt, \tag{3}$$
and the Rényi entropy is defined in Eq. (4) [2]
$$H_r(t)=\frac{1}{1-r}\ln\int_{-\infty}^{+\infty} f^{r}(t)\,dt,\quad r>0,\ r\neq 1, \tag{4}$$
where $f(t)$ is the pdf of a continuous random variable $T$.
Theorem 1.
Let $T_1, T_2, \ldots, T_n$ be a random sample that follows the IWD with pdf (1), and let $t_1, t_2, \ldots, t_n$ be the sample observations of $T_1, T_2, \ldots, T_n$.
(i) The Shannon entropy of the IWD is shown in Eq. (5)
$$H_s=\frac{\upsilon+1}{\upsilon}(\ln\omega+\gamma)-\ln(\omega\upsilon)+1. \tag{5}$$
(ii) The Rényi entropy of the IWD is shown in Eq. (6)
$$H_r=\frac{1}{1-r}\left[-\frac{r\upsilon+r}{\upsilon}\ln r+\frac{r}{\upsilon}\ln\omega+r\ln\upsilon+\ln\Gamma\left(1+r+\frac{r}{\upsilon}\right)\right], \tag{6}$$
where $\Gamma(\cdot)$ is the gamma function and $\gamma$ is the Euler constant.
Proof. The log-density of pdf (1) of the IWD is shown in Eq. (7)
$$\ln[f(t)]=\ln(\omega\upsilon)-(\upsilon+1)\ln t-\omega t^{-\upsilon}. \tag{7}$$
According to the log-density function (7) and Eq. (3), the Shannon entropy of the IWD can be derived as follows:
$$H_s=-\int_0^{+\infty} f(t)\left[\ln(\omega\upsilon)-(\upsilon+1)\ln t-\omega t^{-\upsilon}\right]dt=-\ln(\omega\upsilon)\int_0^{+\infty}f(t)\,dt+(\upsilon+1)\int_0^{+\infty}f(t)\ln t\,dt+\omega\int_0^{+\infty}t^{-\upsilon}f(t)\,dt$$
$$=-\ln(\omega\upsilon)+(\upsilon+1)E(\ln T)+\omega E(T^{-\upsilon}).$$
Obviously, for $c<\upsilon$,
$$E(T^c)=\int_0^{+\infty}\omega\upsilon\,t^{c-\upsilon-1}e^{-\omega t^{-\upsilon}}\,dt=\omega^{c/\upsilon}\int_0^{+\infty}\left(\omega t^{-\upsilon}\right)^{-c/\upsilon}e^{-\omega t^{-\upsilon}}\,d\left(\omega t^{-\upsilon}\right)=\omega^{c/\upsilon}\,\Gamma\left(1-\frac{c}{\upsilon}\right).$$
Let $c=-\upsilon$; then
$$E(T^{-\upsilon})=\omega^{-1}\Gamma(2)=\frac{1}{\omega}.$$
Because
$$E(T^c\ln T)=\frac{dE(T^c)}{dc}=\frac{1}{\upsilon}\omega^{c/\upsilon}\Gamma\left(1-\frac{c}{\upsilon}\right)\ln\omega-\frac{1}{\upsilon}\omega^{c/\upsilon}\Gamma'\left(1-\frac{c}{\upsilon}\right)=\frac{1}{\upsilon}\omega^{c/\upsilon}\left[\Gamma\left(1-\frac{c}{\upsilon}\right)\ln\omega-\Gamma'\left(1-\frac{c}{\upsilon}\right)\right],$$
letting $c=0$ gives
$$E(\ln T)=\frac{1}{\upsilon}(\ln\omega+\gamma).$$
Therefore, the Shannon entropy of the two-parameter IWD can be expressed as
$$H_s=-\ln(\omega\upsilon)+(\upsilon+1)E(\ln T)+\omega E(T^{-\upsilon})=\frac{\upsilon+1}{\upsilon}(\ln\omega+\gamma)-\ln(\omega\upsilon)+1.$$
Obviously,
$$\int_{-\infty}^{+\infty}f^{r}(t)\,dt=\int_0^{+\infty}\left(\omega\upsilon t^{-\upsilon-1}e^{-\omega t^{-\upsilon}}\right)^{r}dt=(\omega\upsilon)^{r}\int_0^{+\infty}t^{-r\upsilon-r}e^{-r\omega t^{-\upsilon}}\,dt=r^{-\frac{r\upsilon+r}{\upsilon}}\,\omega^{\frac{r}{\upsilon}}\,\upsilon^{r}\,\Gamma\left(1+r+\frac{r}{\upsilon}\right).$$
Then, according to Eq. (4), the Rényi entropy of the two-parameter IWD can be expressed as
$$H_r=\frac{1}{1-r}\left[-\frac{r\upsilon+r}{\upsilon}\ln r+\frac{r}{\upsilon}\ln\omega+r\ln\upsilon+\ln\Gamma\left(1+r+\frac{r}{\upsilon}\right)\right].$$
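Both closed forms are easy to evaluate numerically. The following minimal Python sketch (our own illustration; the function names are not from the paper) implements Eqs. (5) and (6) and reproduces the true values $H_{s0}=1.1727$ and $H_{r0}=1.5641$ used in the simulations of Section 5 for $\omega=1$, $\upsilon=2$ and $r=0.5$.

```python
import math

EULER_GAMMA = 0.5772156649015329

def shannon_iwd(omega, upsilon):
    # Shannon entropy of the IWD, Eq. (5)
    return ((upsilon + 1) / upsilon * (math.log(omega) + EULER_GAMMA)
            - math.log(omega * upsilon) + 1)

def renyi_iwd(omega, upsilon, r):
    # Renyi entropy of the IWD, Eq. (6); lgamma is the log-gamma function
    return (-(r * upsilon + r) / upsilon * math.log(r)
            + r / upsilon * math.log(omega)
            + r * math.log(upsilon)
            + math.lgamma(1 + r + r / upsilon)) / (1 - r)

print(shannon_iwd(1.0, 2.0))      # 1.1727..., the value H_s0 used in Table 1
print(renyi_iwd(1.0, 2.0, 0.5))   # 1.5641..., the value H_r0 used in Table 2
```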

3. Maximum Likelihood Estimation

Suppose that $T_1, T_2, \ldots, T_n$ is a random sample that follows the IWD with pdf (1), and $t_1, t_2, \ldots, t_n$ are the sample observations of $T_1, T_2, \ldots, T_n$. Thus, the likelihood function (LF) can be derived as Eq. (8)
$$s(t;\omega,\upsilon)=\omega^{n}\upsilon^{n}\left(\prod_{i=1}^{n}t_i^{-\upsilon-1}\right)\exp\left(-\omega\sum_{i=1}^{n}t_i^{-\upsilon}\right). \tag{8}$$
Then, the corresponding log-LF of Eq. (8) is shown in Eq. (9)
$$S(t;\omega,\upsilon)=\ln s(t;\omega,\upsilon)=n\ln(\omega\upsilon)-(\upsilon+1)\sum_{i=1}^{n}\ln t_i-\omega\sum_{i=1}^{n}t_i^{-\upsilon}. \tag{9}$$
For convenience, we denote $S(t;\omega,\upsilon)$ by $S$. Thus, the likelihood equations can be expressed as Eq. (10) and Eq. (11), respectively:
$$\frac{\partial S}{\partial\omega}=\frac{n}{\omega}-\sum_{i=1}^{n}t_i^{-\upsilon}=0, \tag{10}$$
$$\frac{\partial S}{\partial\upsilon}=\frac{n}{\upsilon}-\sum_{i=1}^{n}\ln t_i+\omega\sum_{i=1}^{n}t_i^{-\upsilon}\ln t_i=0. \tag{11}$$
The ML estimators $\hat\omega$ and $\hat\upsilon$ can be obtained by solving Eq. (10) and Eq. (11) with the bisection method, whose calculation steps are listed as follows (a short sketch of the procedure is given after the list):
(i) According to Eq. (10) and Eq. (11), there are
$$\hat\omega=n\left(\sum_{i=1}^{n}t_i^{-\hat\upsilon}\right)^{-1}, \tag{12}$$
$$\frac{n}{\upsilon}-\sum_{i=1}^{n}\ln t_i+n\left(\sum_{i=1}^{n}t_i^{-\upsilon}\right)^{-1}\sum_{i=1}^{n}t_i^{-\upsilon}\ln t_i=0. \tag{13}$$
(ii) Denote the left-hand side of Eq. (13) by $y(\upsilon)$. Given the accuracy $\varepsilon$, determine an interval $[\upsilon_l,\upsilon_u]$ and verify that $y(\upsilon_l)\,y(\upsilon_u)<0$.
(iii) Find the midpoint $\upsilon_m$ of the interval $[\upsilon_l,\upsilon_u]$ and calculate $y(\upsilon_m)$.
(iv) If $y(\upsilon_m)=0$, then $\hat\upsilon=\upsilon_m$.
(v) If $y(\upsilon_l)\,y(\upsilon_m)<0$, set $\upsilon_u=\upsilon_m$; if $y(\upsilon_u)\,y(\upsilon_m)<0$, set $\upsilon_l=\upsilon_m$.
(vi) If $|\upsilon_u-\upsilon_l|<\varepsilon$, take $\hat\upsilon$ equal to $\upsilon_u$ or $\upsilon_l$; otherwise, return to step (iii).
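A minimal Python sketch of steps (i)-(vi) follows; the bracketing interval and tolerance are illustrative choices, and the function name is ours.

```python
import math

def ml_estimates(t, eps=1e-8, v_lo=1e-3, v_hi=50.0):
    """ML estimates of (omega, upsilon) for the IWD, steps (i)-(vi)."""
    n = len(t)
    sum_log = sum(math.log(x) for x in t)

    def y(v):  # left-hand side of Eq. (13)
        s0 = sum(x ** (-v) for x in t)
        s1 = sum(x ** (-v) * math.log(x) for x in t)
        return n / v - sum_log + n * s1 / s0

    # bisection: [v_lo, v_hi] must bracket the root, i.e. y(v_lo)*y(v_hi) < 0
    assert y(v_lo) * y(v_hi) < 0, "enlarge the bracketing interval"
    while v_hi - v_lo > eps:
        v_mid = (v_lo + v_hi) / 2.0
        if y(v_lo) * y(v_mid) < 0:
            v_hi = v_mid
        else:
            v_lo = v_mid
    upsilon = (v_lo + v_hi) / 2.0
    omega = n / sum(x ** (-upsilon) for x in t)   # Eq. (12)
    return omega, upsilon
```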
Due to the invariance of ML estimation, the ML estimators of the Shannon entropy and the Rényi entropy can be obtained by putting $\hat\omega$ and $\hat\upsilon$ into Eq. (5) and Eq. (6); their mathematical expressions are shown in Eq. (14) and Eq. (15):
$$\hat H_{s1}=\frac{\hat\upsilon+1}{\hat\upsilon}(\ln\hat\omega+\gamma)-\ln(\hat\omega\hat\upsilon)+1, \tag{14}$$
$$\hat H_{r1}=\frac{1}{1-r}\left[-\frac{r\hat\upsilon+r}{\hat\upsilon}\ln r+\frac{r}{\hat\upsilon}\ln\hat\omega+r\ln\hat\upsilon+\ln\Gamma\left(1+r+\frac{r}{\hat\upsilon}\right)\right]. \tag{15}$$
Next, the Delta method is used to derive the approximate confidence intervals (briefly, ACIs) of Shannon entropy and Rényi entropy.
Denote the vectors $D_s$ and $D_r$ as
$$D_s=\left(\frac{\partial H_s}{\partial\omega},\frac{\partial H_s}{\partial\upsilon}\right)\bigg|_{\omega=\hat\omega,\,\upsilon=\hat\upsilon}, \tag{16}$$
$$D_r=\left(\frac{\partial H_r}{\partial\omega},\frac{\partial H_r}{\partial\upsilon}\right)\bigg|_{\omega=\hat\omega,\,\upsilon=\hat\upsilon}, \tag{17}$$
in which the entries of $D_s$ and $D_r$ are calculated through Eq. (18) and Eq. (19):
$$\frac{\partial H_s}{\partial\omega}=\frac{1}{\omega\upsilon},\qquad \frac{\partial H_s}{\partial\upsilon}=-\frac{\gamma+\ln\omega}{\upsilon^2}-\frac{1}{\upsilon}, \tag{18}$$
$$\frac{\partial H_r}{\partial\omega}=\frac{r}{(1-r)\omega\upsilon},\qquad \frac{\partial H_r}{\partial\upsilon}=\frac{r}{(1-r)\upsilon}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'\left(1+r+r\upsilon^{-1}\right)}{\upsilon\,\Gamma\left(1+r+r\upsilon^{-1}\right)}\right]. \tag{19}$$
According to the Delta method, the estimated variances of $\hat H_{s1}$ and $\hat H_{r1}$ are calculated as Eq. (20) and Eq. (21), where $I$ is the observed Fisher information matrix of $\omega$ and $\upsilon$, whose elements are the negatives of the second derivatives given in Eq. (22), and $I^{-1}$ is the inverse matrix of $I$:
$$V_s=D_s I^{-1} D_s^{T}\big|_{\omega=\hat\omega,\,\upsilon=\hat\upsilon}, \tag{20}$$
$$V_r=D_r I^{-1} D_r^{T}\big|_{\omega=\hat\omega,\,\upsilon=\hat\upsilon}, \tag{21}$$
$$\frac{\partial^2 S}{\partial\omega^2}=-\frac{n}{\omega^2},\qquad \frac{\partial^2 S}{\partial\omega\partial\upsilon}=\frac{\partial^2 S}{\partial\upsilon\partial\omega}=\sum_{i=1}^{n}t_i^{-\upsilon}\ln t_i,\qquad \frac{\partial^2 S}{\partial\upsilon^2}=-\frac{n}{\upsilon^2}-\omega\sum_{i=1}^{n}t_i^{-\upsilon}(\ln t_i)^2. \tag{22}$$
Then, the $100(1-\alpha)\%$ ACI of the Shannon entropy is Eq. (23), and the $100(1-\alpha)\%$ ACI of the Rényi entropy is Eq. (24), where $z_{\alpha/2}$ is the upper $(\alpha/2)$th quantile of the standard normal distribution:
$$\left(\hat H_{s1}-z_{\alpha/2}\sqrt{V_s}\,,\ \hat H_{s1}+z_{\alpha/2}\sqrt{V_s}\right), \tag{23}$$
$$\left(\hat H_{r1}-z_{\alpha/2}\sqrt{V_r}\,,\ \hat H_{r1}+z_{\alpha/2}\sqrt{V_r}\right). \tag{24}$$
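As an illustration, a minimal Python sketch of Eqs. (20), (22) and (23) for the Shannon entropy follows (our own code, not the authors'); the Rényi interval is obtained the same way with the gradient of Eq. (19). The quantile values in the docstring are standard normal constants; everything else is computed from the formulas above.

```python
import math

EULER_GAMMA = 0.5772156649015329

def shannon_aci(t, omega, upsilon, z=1.959964):
    """ACI of Eq. (23) at the ML estimates; z is the upper alpha/2 normal
    quantile (1.959964 for alpha = 0.05, 1.644854 for alpha = 0.1)."""
    n, w, u = len(t), omega, upsilon
    # observed information I: negatives of the second derivatives in Eq. (22)
    i_ww = n / w ** 2
    i_wv = -sum(x ** (-u) * math.log(x) for x in t)
    i_vv = n / u ** 2 + w * sum(x ** (-u) * math.log(x) ** 2 for x in t)
    det = i_ww * i_vv - i_wv ** 2
    # gradient D_s of Eq. (18) and the quadratic form V_s = D_s I^{-1} D_s^T
    d_w = 1.0 / (w * u)
    d_v = -(EULER_GAMMA + math.log(w)) / u ** 2 - 1.0 / u
    v_s = (d_w ** 2 * i_vv - 2 * d_w * d_v * i_wv + d_v ** 2 * i_ww) / det
    h_s = ((u + 1) / u * (math.log(w) + EULER_GAMMA)
           - math.log(w * u) + 1)                      # Eq. (14)
    half = z * math.sqrt(v_s)
    return h_s - half, h_s + half
```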

4. Bayesian Estimation

Bayesian estimation is a method of introducing prior information to deal with decision problems. Its advantage is that prior information can be included in statistical inference, improving the accuracy of the resulting decision. Since Bayesian estimation was proposed, many researchers have adopted this method for estimating parameters and related functions. For example, Kundu and Howlader [30] considered the Bayesian inference and prediction of the inverse Weibull distribution based on type-II censored data; a Gibbs sampling procedure was used to draw MCMC samples, from which the Bayes estimates were computed. Sultan et al. [31] considered the Bayesian estimation of the inverse Weibull parameters based on progressively type-II censored data; because the Bayes estimators cannot be obtained explicitly, the Lindley approximation was used to calculate them. Mohammad and Mina [32] presented the Bayesian inference of the parameters of the inverse Weibull distribution based on type-I hybrid censored data and computed the Bayes estimates using the Lindley approximation. Algarni et al. [33] considered the Bayes estimation of the parameters of the inverse Weibull distribution employing a progressively type-I censored sample; the Metropolis-Hastings (MH) algorithm was used to compute the Bayesian estimates.
In addition to the areas mentioned above, there are some recent applications of the Bayesian method. Zhou and Luo [34] developed a supplier's recursive multiperiod discounted profit model based on Bayesian information updating. Yulin et al. [35] put forward a Bayesian approach to tackle the misalignments in over-the-air computation. Taborsky et al. [36] presented a novel generic Bayesian probabilistic model to solve the problem of parameter marginalization under the constraint of a forced community structure. Oliver [37] introduced the Bayesian toolkit and showed how geomorphic models might benefit from probabilistic concepts. Ran et al. [38] proposed a Bayesian approach to measuring the loss of privacy in a mechanism. Luo et al. [39] used the Bayesian information criterion for model selection when revisiting the lifetime data of brake pads. Peng et al. [40] extended a general Bayesian framework to deal with the degradation analysis of sparse and evolving degradation observations. František et al. [41] illustrated how parametric survival analysis can benefit from Bayesian estimation. Liu et al. [42] proposed fuzzy Bayesian knowledge tracing models to address continuous score scenarios.
In this paper, the Bayesian estimation of the Shannon entropy and Rényi entropy of the IWD is investigated under the symmetric entropy (SE) and scale squared error (SSE) loss functions, which are widely used in Bayesian statistical inference [43,44,45].
(i) The SE loss function is defined in Eq. (25) [43]
$$L_1(H,\hat H)=\frac{H}{\hat H}+\frac{\hat H}{H}-2, \tag{25}$$
where $\hat H$ is an estimator of $H$.
Lemma 1. Suppose that $T$ is the historical data information about the entropy function $H$. Then, under the SE loss function (25), the Bayesian estimator $\hat H_1$ for any prior distribution is shown in Eq. (26)
$$\hat H_1=\left[\frac{E(H|T)}{E(H^{-1}|T)}\right]^{1/2}, \tag{26}$$
where $E(H|T)$ is the posterior expectation of $H$ and $E(H^{-1}|T)$ is the posterior expectation of $H^{-1}$.
Proof. Under the SE loss function (25), the Bayesian risk of $\hat H$ is
$$R(\hat H)=E_H\left(E\left(L_1(H,\hat H)\,|\,T\right)\right).$$
To minimize $R(\hat H)$, one only needs to minimize $E(L_1(H,\hat H)|T)$. For convenience, let $g(\hat H)=E(L_1(H,\hat H)|T)$.
Because
$$g(\hat H)=\hat H^{-1}E(H|T)+\hat H\,E(H^{-1}|T)-2$$
and the derivative is
$$g'(\hat H)=-\hat H^{-2}E(H|T)+E(H^{-1}|T),$$
the Bayesian estimator $\hat H_1$ can be obtained by solving $g'(\hat H)=0$; since $g''(\hat H)=2\hat H^{-3}E(H|T)>0$ for $\hat H>0$, this stationary point is indeed the minimum.
(ii) The SSE loss function is defined in Eq. (27) [45]
$$L_2(H,\hat H)=\frac{(H-\hat H)^2}{H^{k}}, \tag{27}$$
where $k$ is a nonnegative integer.
Lemma 2. Suppose that $T$ is the historical data information about the entropy function $H$. Then, under the SSE loss function (27), the Bayesian estimator $\hat H_2$ for any prior distribution is
$$\hat H_2=\frac{E(H^{1-k}|T)}{E(H^{-k}|T)}, \tag{28}$$
where $E(H^{1-k}|T)$ is the posterior expectation of $H^{1-k}$ and $E(H^{-k}|T)$ is the posterior expectation of $H^{-k}$.
Proof. Under the SSE loss function (27), the Bayesian risk of $\hat H$ is
$$R(\hat H)=E_H\left(E\left(L_2(H,\hat H)\,|\,T\right)\right).$$
To minimize $R(\hat H)$, one only needs to minimize $E(L_2(H,\hat H)|T)$. Similarly, let $h(\hat H)=E(L_2(H,\hat H)|T)$.
Because
$$h(\hat H)=E\left(\frac{H^2-2H\hat H+\hat H^2}{H^{k}}\,\bigg|\,T\right)=E(H^{2-k}|T)-2\hat H\,E(H^{1-k}|T)+\hat H^2\,E(H^{-k}|T)$$
and the derivative of $h(\hat H)$ is
$$h'(\hat H)=-2E(H^{1-k}|T)+2\hat H\,E(H^{-k}|T),$$
the Bayes estimator $\hat H_2$ can be obtained by solving $h'(\hat H)=0$; since $h''(\hat H)=2E(H^{-k}|T)>0$, the stationary point is the minimum.
Assume that the scale parameter $\omega$ and the shape parameter $\upsilon$ of the two-parameter IWD are independent random variables, where $\omega$ follows the gamma prior $\Gamma(a,b)$ in Eq. (29) and $\upsilon$ follows the non-informative PD in Eq. (30):
$$P_1(\omega)=\frac{a^{b}}{\Gamma(b)}\omega^{b-1}e^{-a\omega},\quad a>0,\ b>0, \tag{29}$$
$$P_2(\upsilon)\propto\frac{1}{\upsilon}. \tag{30}$$
Thus, the joint PD of $\omega$ and $\upsilon$ is
$$P(\omega,\upsilon)\propto\frac{a^{b}}{\upsilon\,\Gamma(b)}\omega^{b-1}e^{-a\omega}. \tag{31}$$
Referring to the Bayesian formulation, the posterior distribution of $\omega$ and $\upsilon$ is
$$P(\omega,\upsilon|T)=\frac{P(\omega,\upsilon)\,s(t;\omega,\upsilon)}{\int_0^{+\infty}\int_0^{+\infty}P(\omega,\upsilon)\,s(t;\omega,\upsilon)\,d\omega\,d\upsilon}. \tag{32}$$
Thus, the Bayesian estimators of the Shannon entropy and the Rényi entropy under the SE loss function can be expressed as
$$\hat H_{s2}=\left[\frac{E(H_s|T)}{E(H_s^{-1}|T)}\right]^{1/2}=\left[\frac{\int_0^{+\infty}\int_0^{+\infty}H_s\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\int_0^{+\infty}H_s^{-1}\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}\right]^{1/2}, \tag{33}$$
$$\hat H_{r2}=\left[\frac{E(H_r|T)}{E(H_r^{-1}|T)}\right]^{1/2}=\left[\frac{\int_0^{+\infty}\int_0^{+\infty}H_r\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\int_0^{+\infty}H_r^{-1}\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}\right]^{1/2}. \tag{34}$$
The Bayesian estimators of the Shannon entropy and the Rényi entropy under the SSE loss function can be expressed as
$$\hat H_{s3}=\frac{E(H_s^{1-k}|T)}{E(H_s^{-k}|T)}=\frac{\int_0^{+\infty}\int_0^{+\infty}H_s^{1-k}\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\int_0^{+\infty}H_s^{-k}\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}, \tag{35}$$
$$\hat H_{r3}=\frac{E(H_r^{1-k}|T)}{E(H_r^{-k}|T)}=\frac{\int_0^{+\infty}\int_0^{+\infty}H_r^{1-k}\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}{\int_0^{+\infty}\int_0^{+\infty}H_r^{-k}\,P(\omega,\upsilon|T)\,d\omega\,d\upsilon}. \tag{36}$$
From Eq. (33) to Eq. (36), it can be seen that the Bayesian estimators of the Shannon and Rényi entropy are ratios of double integrals that admit no closed form and are difficult to calculate. Thus, the Lindley approximation will be employed to obtain approximate values of $\hat H_{s2}$, $\hat H_{r2}$, $\hat H_{s3}$ and $\hat H_{r3}$.

4.1. Bayesian Estimation Using the Lindley Approximation under the SE Loss Function

Referring to the Lindley approximation, $I(t)$ can be defined as
$$I(t)=E\left[U(\omega,\upsilon)\,|\,T\right]=\frac{\int U(\omega,\upsilon)\,e^{S(t;\omega,\upsilon)+G(\omega,\upsilon)}\,d(\omega,\upsilon)}{\int e^{S(t;\omega,\upsilon)+G(\omega,\upsilon)}\,d(\omega,\upsilon)}, \tag{37}$$
where $U(\omega,\upsilon)$ is a function of the variables $\omega$ and $\upsilon$, $S(t;\omega,\upsilon)$ is the log-LF defined in Eq. (9), and $G(\omega,\upsilon)$ is the logarithm of the joint PD defined in Eq. (31).
If the sample size is large, Eq. (37) can be approximated as
$$I(t)\approx U(\hat\omega,\hat\upsilon)+\frac{1}{2}(A+B+C+D), \tag{38}$$
where $\hat\omega$ and $\hat\upsilon$ are the ML estimators of $\omega$ and $\upsilon$, and
$$\begin{aligned}
A&=\left(\hat U_{\omega\omega}+2\hat U_{\omega}\hat G_{\omega}\right)\hat\sigma_{\omega\omega}+\left(\hat U_{\upsilon\omega}+2\hat U_{\upsilon}\hat G_{\omega}\right)\hat\sigma_{\upsilon\omega},\\
B&=\left(\hat U_{\omega\upsilon}+2\hat U_{\omega}\hat G_{\upsilon}\right)\hat\sigma_{\omega\upsilon}+\left(\hat U_{\upsilon\upsilon}+2\hat U_{\upsilon}\hat G_{\upsilon}\right)\hat\sigma_{\upsilon\upsilon},\\
C&=\left(\hat U_{\omega}\hat\sigma_{\omega\omega}+\hat U_{\upsilon}\hat\sigma_{\omega\upsilon}\right)\left(\hat S_{\omega\omega\omega}\hat\sigma_{\omega\omega}+\hat S_{\omega\upsilon\omega}\hat\sigma_{\omega\upsilon}+\hat S_{\upsilon\omega\omega}\hat\sigma_{\upsilon\omega}+\hat S_{\upsilon\upsilon\omega}\hat\sigma_{\upsilon\upsilon}\right),\\
D&=\left(\hat U_{\omega}\hat\sigma_{\upsilon\omega}+\hat U_{\upsilon}\hat\sigma_{\upsilon\upsilon}\right)\left(\hat S_{\omega\omega\upsilon}\hat\sigma_{\omega\omega}+\hat S_{\omega\upsilon\upsilon}\hat\sigma_{\omega\upsilon}+\hat S_{\upsilon\omega\upsilon}\hat\sigma_{\upsilon\omega}+\hat S_{\upsilon\upsilon\upsilon}\hat\sigma_{\upsilon\upsilon}\right). \tag{39}
\end{aligned}$$
Here $\sigma_{ij}$ $(i,j=\omega,\upsilon)$ is the $(i,j)$-th element of the inverse of the matrix $\left[-S_{ij}\right]$, and $\hat U_{\omega\omega}$ denotes the second derivative of $U(\omega,\upsilon)$ with respect to $\omega$ evaluated at $(\hat\omega,\hat\upsilon)$; the other hatted quantities are defined analogously. The remaining derivatives are
$$\begin{aligned}
&S_{\omega\omega\upsilon}=S_{\omega\upsilon\omega}=S_{\upsilon\omega\omega}=0,\qquad S_{\omega\omega\omega}=\frac{2n}{\omega^{3}},\\
&S_{\omega\upsilon\upsilon}=S_{\upsilon\omega\upsilon}=S_{\upsilon\upsilon\omega}=-\sum_{i=1}^{n}t_i^{-\upsilon}(\ln t_i)^{2},\qquad S_{\upsilon\upsilon\upsilon}=\frac{2n}{\upsilon^{3}}+\omega\sum_{i=1}^{n}t_i^{-\upsilon}(\ln t_i)^{3},\\
&G_{\omega}=\frac{b-1}{\omega}-a,\qquad G_{\upsilon}=-\frac{1}{\upsilon}. \tag{40}
\end{aligned}$$
Under the SE loss function, the numerical calculation of the Shannon entropy estimator $\hat H_{s2}$ by the Lindley approximation proceeds as follows.
When $U(\omega,\upsilon)=H_s$,
$$\begin{aligned}
&U_{\omega}=\frac{1}{\omega\upsilon},\qquad U_{\upsilon}=-\frac{\gamma+\ln\omega}{\upsilon^{2}}-\frac{1}{\upsilon},\\
&U_{\omega\omega}=-\frac{1}{\omega^{2}\upsilon},\qquad U_{\upsilon\upsilon}=\frac{2\ln\omega+2\gamma}{\upsilon^{3}}+\frac{1}{\upsilon^{2}},\qquad U_{\omega\upsilon}=U_{\upsilon\omega}=-\frac{1}{\omega\upsilon^{2}}. \tag{41}
\end{aligned}$$
Putting Eq. (40) and Eq. (41) into Eq. (38), $E(H_s|T)$ is obtained.
Similarly, when $U(\omega,\upsilon)=H_s^{-1}$,
$$\begin{aligned}
&U_{\omega}=-H_s^{-2}\frac{1}{\omega\upsilon},\qquad U_{\upsilon}=H_s^{-2}\left(\frac{\gamma+\ln\omega}{\upsilon^{2}}+\frac{1}{\upsilon}\right),\\
&U_{\omega\omega}=2H_s^{-3}\frac{1}{\omega^{2}\upsilon^{2}}+H_s^{-2}\frac{1}{\omega^{2}\upsilon},\\
&U_{\upsilon\upsilon}=2H_s^{-3}\left(\frac{\gamma+\ln\omega}{\upsilon^{2}}+\frac{1}{\upsilon}\right)^{2}-H_s^{-2}\left(\frac{2\ln\omega+2\gamma}{\upsilon^{3}}+\frac{1}{\upsilon^{2}}\right),\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=-2H_s^{-3}\left(\frac{\gamma+\ln\omega}{\omega\upsilon^{3}}+\frac{1}{\omega\upsilon^{2}}\right)+H_s^{-2}\frac{1}{\omega\upsilon^{2}}. \tag{42}
\end{aligned}$$
Then, putting Eq. (40) and Eq. (42) into Eq. (38), $E(H_s^{-1}|T)$ is obtained. Thus, the numerical value of the Shannon entropy estimator $\hat H_{s2}$ is calculated by Eq. (33).
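To make this recipe concrete, the following Python sketch (our own illustration under the stated priors, not the authors' code) assembles Eqs. (38)-(42) for $\hat H_{s2}$. It reuses the bisection estimates of Section 3, and it obtains the derivatives of Eq. (42) from those of Eq. (41) by the chain rule rather than retyping them.

```python
import math

EULER_GAMMA = 0.5772156649015329

def lindley(t, w, u, a, b, U, Uw, Uv, Uww, Uvv, Uwv):
    """Approximate E[U(omega, upsilon) | T] by Eq. (38) at the ML estimates
    (w, u); a and b are the gamma-prior parameters of Eq. (29)."""
    n = len(t)
    p = [x ** (-u) for x in t]
    lt = [math.log(x) for x in t]
    # sigma: inverse of the observed information [-S_ij] from Eq. (22)
    i_ww = n / w ** 2
    i_wv = -sum(pi * li for pi, li in zip(p, lt))
    i_vv = n / u ** 2 + w * sum(pi * li ** 2 for pi, li in zip(p, lt))
    det = i_ww * i_vv - i_wv ** 2
    s_ww, s_wv, s_vv = i_vv / det, -i_wv / det, i_ww / det
    # third derivatives of S and log-prior gradient, Eq. (40)
    S_www = 2 * n / w ** 3
    S_wvv = -sum(pi * li ** 2 for pi, li in zip(p, lt))
    S_vvv = 2 * n / u ** 3 + w * sum(pi * li ** 3 for pi, li in zip(p, lt))
    Gw, Gv = (b - 1) / w - a, -1.0 / u
    # Eq. (39); the terms with S_wwv-type derivatives vanish by Eq. (40)
    A = (Uww + 2 * Uw * Gw) * s_ww + (Uwv + 2 * Uv * Gw) * s_wv
    B = (Uwv + 2 * Uw * Gv) * s_wv + (Uvv + 2 * Uv * Gv) * s_vv
    C = (Uw * s_ww + Uv * s_wv) * (S_www * s_ww + S_wvv * s_vv)
    D = (Uw * s_wv + Uv * s_vv) * (2 * S_wvv * s_wv + S_vvv * s_vv)
    return U + 0.5 * (A + B + C + D)

def hs2(t, w, u, a, b):
    """Bayes estimate of the Shannon entropy under SE loss, Eqs. (26)/(33)."""
    hs = (u + 1) / u * (math.log(w) + EULER_GAMMA) - math.log(w * u) + 1
    dw = 1 / (w * u)                                       # Eq. (41)
    dv = -(EULER_GAMMA + math.log(w)) / u ** 2 - 1 / u
    dww = -1 / (w ** 2 * u)
    dvv = (2 * math.log(w) + 2 * EULER_GAMMA) / u ** 3 + 1 / u ** 2
    dwv = -1 / (w * u ** 2)
    e1 = lindley(t, w, u, a, b, hs, dw, dv, dww, dvv, dwv)  # E(H_s | T)
    e2 = lindley(t, w, u, a, b, 1 / hs,                     # U = H_s^{-1}:
                 -dw / hs ** 2, -dv / hs ** 2,              # Eq. (42) via the
                 2 * dw ** 2 / hs ** 3 - dww / hs ** 2,     # chain rule
                 2 * dv ** 2 / hs ** 3 - dvv / hs ** 2,
                 2 * dw * dv / hs ** 3 - dwv / hs ** 2)     # E(H_s^{-1} | T)
    return math.sqrt(e1 / e2)                               # Eq. (26)
```

For example, `hs2(t, *ml_estimates(t), 5, 1)` corresponds to the setting $a=5$, $b=1$ of Section 5; the Rényi and SSE variants differ only in the derivative inputs, Eqs. (43)-(48).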
Under the SE loss function, the numerical calculation of the Rényi entropy estimator $\hat H_{r2}$ by the Lindley approximation is as follows. For brevity, $\Gamma$, $\Gamma'$ and $\Gamma''$ denote the gamma function and its first two derivatives evaluated at $1+r+r\upsilon^{-1}$.
When $U(\omega,\upsilon)=H_r$,
$$\begin{aligned}
&U_{\omega}=\frac{r}{(1-r)\omega\upsilon},\qquad U_{\omega\omega}=-\frac{r}{(1-r)\omega^{2}\upsilon},\qquad U_{\omega\upsilon}=U_{\upsilon\omega}=-\frac{r}{(1-r)\omega\upsilon^{2}},\\
&U_{\upsilon}=\frac{r}{(1-r)\upsilon}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right],\\
&U_{\upsilon\upsilon}=\frac{2r(\ln\omega-\ln r)}{(1-r)\upsilon^{3}}-\frac{\left[\Gamma'\right]^{2}-\Gamma''\,\Gamma}{\left[\Gamma\right]^{2}}\cdot\frac{r^{2}}{(1-r)\upsilon^{4}}-\frac{r}{(1-r)\upsilon^{2}}. \tag{43}
\end{aligned}$$
Putting Eq. (40) and Eq. (43) into Eq. (38), $E(H_r|T)$ is obtained.
When $U(\omega,\upsilon)=H_r^{-1}$,
$$\begin{aligned}
&U_{\omega}=-\frac{rH_r^{-2}}{(1-r)\omega\upsilon},\qquad U_{\upsilon}=-\frac{rH_r^{-2}}{(1-r)\upsilon}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right],\\
&U_{\omega\omega}=\frac{2r^{2}H_r^{-3}}{(1-r)^{2}\omega^{2}\upsilon^{2}}+\frac{rH_r^{-2}}{(1-r)\omega^{2}\upsilon},\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=\frac{2r^{2}H_r^{-3}}{(1-r)^{2}\omega\upsilon^{2}}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right]+\frac{rH_r^{-2}}{(1-r)\omega\upsilon^{2}},\\
&U_{\upsilon\upsilon}=\frac{\left[\Gamma'\right]^{2}-\Gamma''\,\Gamma}{\left[\Gamma\right]^{2}}\cdot\frac{r^{2}H_r^{-2}}{(1-r)\upsilon^{4}}+\frac{rH_r^{-2}}{(1-r)\upsilon^{2}}-\frac{2r(\ln\omega-\ln r)H_r^{-2}}{(1-r)\upsilon^{3}}+\frac{2r^{2}H_r^{-3}}{(1-r)^{2}\upsilon^{2}}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right]^{2}. \tag{44}
\end{aligned}$$
Putting Eq. (40) and Eq. (44) into Eq. (38), $E(H_r^{-1}|T)$ is obtained. Thus, the numerical value of the Rényi entropy estimator $\hat H_{r2}$ is calculated by Eq. (34).

4.2. Bayesian Estimation Using the Lindley Approximation under the SSE Loss Function

Under the SSE loss function, the numerical calculation of the Shannon entropy estimator $\hat H_{s3}$ by the Lindley approximation proceeds as follows.
When $U(\omega,\upsilon)=H_s^{1-k}$,
$$\begin{aligned}
&U_{\omega}=(1-k)H_s^{-k}\frac{1}{\omega\upsilon},\qquad U_{\upsilon}=-(1-k)H_s^{-k}\left(\frac{\gamma+\ln\omega}{\upsilon^{2}}+\frac{1}{\upsilon}\right),\\
&U_{\omega\omega}=-k(1-k)H_s^{-k-1}\left(\frac{1}{\omega\upsilon}\right)^{2}-(1-k)H_s^{-k}\frac{1}{\omega^{2}\upsilon},\\
&U_{\upsilon\upsilon}=-k(1-k)H_s^{-k-1}\left(\frac{\gamma+\ln\omega}{\upsilon^{2}}+\frac{1}{\upsilon}\right)^{2}+(1-k)H_s^{-k}\left(\frac{2\ln\omega+2\gamma}{\upsilon^{3}}+\frac{1}{\upsilon^{2}}\right),\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=k(1-k)H_s^{-k-1}\frac{1}{\omega\upsilon}\left(\frac{\gamma+\ln\omega}{\upsilon^{2}}+\frac{1}{\upsilon}\right)-(1-k)H_s^{-k}\frac{1}{\omega\upsilon^{2}}. \tag{45}
\end{aligned}$$
Then, putting Eq. (40) and Eq. (45) into Eq. (38), $E(H_s^{1-k}|T)$ is obtained.
When U ( ω , υ ) = H s k ,
U ω = k H s k 1 1 ω υ , U υ = k H s k 1 ( γ + ln ω υ 2 + 1 υ ) U ω ω = k ( k + 1 ) H s k 2 ( 1 ω υ ) 2 + k H s k 1 1 ω 2 υ U υ υ = k ( k + 1 ) H s k 2 ( γ ln ω υ 2 1 υ ) 2 k H s k 1 ( 2 ln ω + 2 γ υ 3 + 1 υ 2 ) U ω υ = U υ ω = k ( k + 1 ) H s k 2 1 ω υ ( γ ln ω υ 2 1 υ ) + k H s k 1 1 ω υ 2
Then, putting Eq. (40) and Eq. (46) into Eq. (38), E ( H s k | T ) is obtained. Thus, the numerical calculation of Shannon entropy H ^ s 3 is calculated by Eq. (35).
Under the SSE loss function, the numerical calculation of the Rényi entropy estimator $\hat H_{r3}$ by the Lindley approximation is as follows, with $\Gamma$, $\Gamma'$ and $\Gamma''$ again evaluated at $1+r+r\upsilon^{-1}$.
When $U(\omega,\upsilon)=H_r^{1-k}$,
$$\begin{aligned}
&U_{\omega}=\frac{(1-k)rH_r^{-k}}{(1-r)\omega\upsilon},\qquad U_{\upsilon}=\frac{(1-k)rH_r^{-k}}{(1-r)\upsilon}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right],\\
&U_{\omega\omega}=(k-1)\left[\frac{kr^{2}H_r^{-k-1}}{(1-r)^{2}\omega^{2}\upsilon^{2}}+\frac{rH_r^{-k}}{(1-r)\omega^{2}\upsilon}\right],\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=\frac{k(k-1)r^{2}H_r^{-k-1}}{(1-r)^{2}\omega\upsilon^{2}}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right]+\frac{(k-1)rH_r^{-k}}{(1-r)\omega\upsilon^{2}},\\
&U_{\upsilon\upsilon}=\frac{k(k-1)r^{2}H_r^{-k-1}}{(1-r)^{2}\upsilon^{2}}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right]^{2}+\frac{2r(\ln\omega-\ln r)(1-k)H_r^{-k}}{(1-r)\upsilon^{3}}-\frac{\left[\Gamma'\right]^{2}-\Gamma''\,\Gamma}{\left[\Gamma\right]^{2}}\cdot\frac{(1-k)r^{2}H_r^{-k}}{(1-r)\upsilon^{4}}-\frac{(1-k)rH_r^{-k}}{(1-r)\upsilon^{2}}. \tag{47}
\end{aligned}$$
Putting Eq. (40) and Eq. (47) into Eq. (38), $E(H_r^{1-k}|T)$ is obtained.
When $U(\omega,\upsilon)=H_r^{-k}$,
$$\begin{aligned}
&U_{\omega}=-\frac{rkH_r^{-k-1}}{(1-r)\omega\upsilon},\qquad U_{\upsilon}=\frac{rkH_r^{-k-1}}{(1-r)\upsilon}\left[\frac{\Gamma'}{\upsilon\Gamma}-1-\frac{\ln r-\ln\omega}{\upsilon}\right],\\
&U_{\omega\omega}=\frac{kr}{(1-r)\omega^{2}\upsilon}\left[\frac{(k+1)rH_r^{-k-2}}{(1-r)\upsilon}+H_r^{-k-1}\right],\\
&U_{\omega\upsilon}=U_{\upsilon\omega}=\frac{r^{2}k(k+1)H_r^{-k-2}}{(1-r)^{2}\omega\upsilon^{2}}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right]+\frac{rkH_r^{-k-1}}{(1-r)\omega\upsilon^{2}},\\
&U_{\upsilon\upsilon}=\frac{r^{2}k(k+1)H_r^{-k-2}}{(1-r)^{2}\upsilon^{2}}\left[1+\frac{\ln r-\ln\omega}{\upsilon}-\frac{\Gamma'}{\upsilon\Gamma}\right]^{2}-\frac{2rk(\ln\omega-\ln r)H_r^{-k-1}}{(1-r)\upsilon^{3}}+\frac{\left[\Gamma'\right]^{2}-\Gamma''\,\Gamma}{\left[\Gamma\right]^{2}}\cdot\frac{kr^{2}H_r^{-k-1}}{(1-r)\upsilon^{4}}+\frac{rkH_r^{-k-1}}{(1-r)\upsilon^{2}}. \tag{48}
\end{aligned}$$
Putting Eq. (40) and Eq. (48) into Eq. (38), $E(H_r^{-k}|T)$ is obtained. Thus, the numerical value of the Rényi entropy estimator $\hat H_{r3}$ is calculated by Eq. (36).

5. Monte Carlo Simulation

In this section, Monte Carlo simulation is used to generate random samples that follow the two-parameter IWD, repeating 1000 experiments for each of the sample sizes $n=10,20,30,40,50,60,70,80,90,100$. The true values of the parameters of the two-parameter IWD are taken as $\omega=1$ and $\upsilon=2$, the parameters of the gamma prior are taken as $a=5$ and $b=1$, the parameter of the SSE loss function is taken as $k=10$, and the parameter of the Rényi entropy is taken as $r=0.5$. Then, the mean squared error (briefly, MSE) is used to compare the performance of each estimator. The results for the Shannon entropy are shown in Table 1, and the results for the Rényi entropy are shown in Table 2. To show the performance of the $100(1-\alpha)\%$ ACIs, the coverage probabilities are calculated and reported in Table 3.
For convenience, $H_{s0}$ and $H_{r0}$ denote the true values of the Shannon and Rényi entropy; $\hat M_{s1}$ and $\hat M_{r1}$ denote the means of the 1000 ML estimates of the entropies; $\hat M_{s2}$ and $\hat M_{r2}$ denote the means of the 1000 Bayesian estimates under the SE loss function; $\hat M_{s3}$ and $\hat M_{r3}$ denote the means of the 1000 Bayesian estimates under the SSE loss function; and $MSE_{s1}$, $MSE_{r1}$, $MSE_{s2}$, $MSE_{r2}$, $MSE_{s3}$, $MSE_{r3}$ denote the corresponding MSEs. The $\hat M_{sj}$ and $MSE_{sj}$ $(j=1,2,3)$ are calculated by Eq. (49) and Eq. (50), where $m=1000$ and $\hat H_{sj,i}$ denotes the $i$-th ML or Bayesian estimate of the Shannon entropy; the $\hat M_{rj}$ and $MSE_{rj}$ $(j=1,2,3)$ are calculated by Eq. (51) and Eq. (52), where $\hat H_{rj,i}$ denotes the $i$-th ML or Bayesian estimate of the Rényi entropy.
$$\hat M_{sj}=\frac{1}{m}\sum_{i=1}^{m}\hat H_{sj,i}, \tag{49}$$
$$MSE_{sj}=\frac{1}{m}\sum_{i=1}^{m}\left(\hat H_{sj,i}-H_{s0}\right)^{2}, \tag{50}$$
$$\hat M_{rj}=\frac{1}{m}\sum_{i=1}^{m}\hat H_{rj,i}, \tag{51}$$
$$MSE_{rj}=\frac{1}{m}\sum_{i=1}^{m}\left(\hat H_{rj,i}-H_{r0}\right)^{2}. \tag{52}$$
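As an illustration of the simulation design, the sketch below (assuming the `shannon_iwd` and `ml_estimates` helpers from the earlier sketches are in scope) generates IWD samples by inverse-transform sampling from Eq. (2) and accumulates Eq. (49) and Eq. (50) for the ML estimator; the Bayesian columns are produced analogously with the Lindley routine of Section 4.

```python
import math
import random

def iwd_sample(n, omega, upsilon, rng=random):
    # inverse-transform sampling: solving F(t) = U in Eq. (2) yields
    # t = (omega / (-ln U))^(1 / upsilon)
    return [(omega / -math.log(rng.random())) ** (1.0 / upsilon)
            for _ in range(n)]

def mc_shannon_ml(n, m=1000, omega=1.0, upsilon=2.0):
    """Mean (Eq. (49)) and MSE (Eq. (50)) of the ML Shannon-entropy estimates."""
    h0 = shannon_iwd(omega, upsilon)            # true value H_s0
    est = []
    for _ in range(m):
        t = iwd_sample(n, omega, upsilon)
        w_hat, u_hat = ml_estimates(t)          # bisection routine of Section 3
        est.append(shannon_iwd(w_hat, u_hat))   # Eq. (14)
    mean = sum(est) / m
    mse = sum((e - h0) ** 2 for e in est) / m
    return mean, mse
```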
Based on the above tables, the following conclusions can be drawn:
(1) For the Shannon entropy, the ML estimation performs better than the Bayesian estimation, while for the Rényi entropy the performance of the ML estimation is similar to that of the Bayesian estimation.
(2) In the Bayesian estimation, it is better to select the SE loss function for estimating the Shannon entropy and, on the contrary, the SSE loss function for estimating the Rényi entropy.
(3) The sample size has a greater influence on the Shannon entropy than on the Rényi entropy. As the sample size increases, the Bayesian estimate of the Shannon entropy under the SE loss function approaches the ML estimate, whereas no comparable effect is observed for the Rényi entropy.
(4) In Table 3, it can be noted that the coverage probabilities of the ACIs are quite close to the confidence levels.

6. Real Data Analysis

There is a real data set given by Bjerkedal [46], which represents the survival times (in days) of guinea pigs after injection with different doses of tubercle bacilli. Kundu and Howlader [47] showed that the IWD fits this data set very well; therefore, the data can be regarded as a sample from an IWD. In Reference [46], the regimen number refers to the common logarithm of the number of bacillary units in 0.5 ml of challenge solution; in other words, regimen 6.6 represents 4.0×10^6 bacillary units per 0.5 ml. Corresponding to regimen 6.6, the 72 observations are listed as follows:
12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 53, 54, 54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62, 63, 65, 65, 67, 68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84, 85, 87, 91, 95, 96, 98, 99, 109, 110, 121, 127, 129, 131, 143, 146, 146, 175, 175, 211, 233, 258, 258, 263, 297, 341, 341, 376.
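For readers who want to reproduce the point estimates, the data can be fed directly to the earlier sketches (again assuming `ml_estimates`, `shannon_iwd` and `renyi_iwd` are in scope); the values reported in Table 4 were produced by the authors' own implementation, so the sketch below is only illustrative.

```python
# Survival times (days) of the 72 guinea pigs under regimen 6.6 [46]
data = [12, 15, 22, 24, 24, 32, 32, 33, 34, 38, 38, 43, 44, 48, 52, 53, 54,
        54, 55, 56, 57, 58, 58, 59, 60, 60, 60, 60, 61, 62, 63, 65, 65, 67,
        68, 70, 70, 72, 73, 75, 76, 76, 81, 83, 84, 85, 87, 91, 95, 96, 98,
        99, 109, 110, 121, 127, 129, 131, 143, 146, 146, 175, 175, 211, 233,
        258, 258, 263, 297, 341, 341, 376]

w_hat, u_hat = ml_estimates(data)           # bisection routine of Section 3
print(shannon_iwd(w_hat, u_hat))            # cf. the ML column of Table 4
print(renyi_iwd(w_hat, u_hat, 0.5))
```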
Using the estimators described in the sections above, the ML estimates and Bayesian estimates of the Shannon entropy and Rényi entropy are displayed in Table 4. It can be seen that the Bayesian estimates under the SSE loss function are smaller than the corresponding ML estimates for both entropies, while the Bayesian estimate under the SE loss function is larger than the ML estimate for the Shannon entropy and smaller for the Rényi entropy.

7. Conclusions

This paper considers the Bayesian estimation of the Shannon entropy and Rényi entropy of the two-parameter IWD. First, the expressions of these entropies for the two-parameter IWD are derived in Theorem 1. For the ML estimation, the ML estimators of the parameters are obtained by the bisection method, and then, due to the invariance of ML estimation, the ML estimators of the entropies follow. Additionally, the approximate confidence intervals are given by the Delta method. For the Bayesian estimation, the symmetric entropy loss function and the scale squared error loss function are chosen. However, the forms of the Bayesian estimators are complex and difficult to calculate, so the Lindley approximation is used to solve this problem. Finally, the mean squared errors of the above estimators are used to compare their performances. For the Shannon entropy, it is better to use the ML estimator, and for the Rényi entropy, the performances of the ML estimator and the Bayesian estimators are analogous.

Author Contributions

Conceptualization, H.P. and X.H.; methodology, H.P.; software, X.H.; validation, H.P. and X.H.; writing—original draft preparation, X.H.; writing—review and editing, H.P.; funding acquisition, H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China, grant number 71661012.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C. E. A mathematical theory of communication. Bell. Labs. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  2. Rényi, A. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1961; Volume 1, pp. 547–561. [Google Scholar]
  3. Bulinski, A.; Dimitrov, D. Statistical estimation of the Shannon entropy. Acta Math. Sin. 2019, 35, 17–46. [Google Scholar] [CrossRef]
  4. Wolf, R. Information and entropies. Lect. Notes Phys. 2021, 988, 53–89. [Google Scholar]
  5. Chacko, M.; Asha, P. S. Estimation of entropy for generalized exponential distribution based on record values. J. Indian Soc. Prob. St. 2018, 19, 79–96. [Google Scholar] [CrossRef]
  6. Liu, S.; Gui, W. Estimating the entropy for Lomax distribution based on generalized progressively hybrid censoring. Symmetry 2019, 11, 1–17. [Google Scholar] [CrossRef]
  7. Shrahili, M.; El-Saeed, A.R.; Hassan, A.S.; Elbatal, I. Estimation of entropy for Log-Logistic distribution under progressive type II censoring. J. Nanomater 2022, 3, 1–10. [Google Scholar] [CrossRef]
  8. Mahmoud, M.R.; Ahmad, M.A.M.; Mohamed, B.S.Kh. Estimating the entropy and residual entropy of a Lomax distribution under generalized type-II hybrid censoring. Math. Stat. 2021, 9, 780–791. [Google Scholar] [CrossRef]
  9. Hassan, O.; Mazen, N. Product of spacing estimation of entropy for inverse Weibull distribution under progressive type-II censored data with applications. J. Taibah Univ. Sci. 2022, 16, 259–269. [Google Scholar]
  10. Mavis, P.; Gayan, W. L.; Broderick, O. O. A New Class of Generalized Inverse Weibull Distribution with Applications. J Appl Math Bioinformatics 2014, 4, 17–35. [Google Scholar]
  11. Basheer, A. M. Alpha power inverse Weibull distribution with reliability application. J Taibah Univ Sci 2019, 13, 423–432. [Google Scholar] [CrossRef]
  12. Valeriia, S.; Broderick, O. O. Weighted Inverse Weibull Distribution: Statistical Properties and Applications. Theor Math Appl 2014, 4, 1–30. [Google Scholar]
  13. Keller, A.Z.; Kamath, A.R.R. Alternative reliability models for mechanical systems. In Proceedings of the 3rd International Conference on Reliability and Maintainability, Toulouse, France, 1982.
  14. Abhijit, C; Anindya, C. Use of the Fréchet distribution for UPV measurements in concrete. NDT E Int. 2012, 52, 122–128. [Google Scholar] [CrossRef]
  15. Chiodo, E; Falco, P D; Noia, L P D; Mottola, F. Inverse loglogistic distribution for Extreme Wind Speed modeling: Genesis, identification and Bayes estimation. AIMS Energy 2018, 6, 926–948. [Google Scholar] [CrossRef]
  16. Langlands, A.O.; Pocock, S.J.; Kerr, G.R.; Gore, S.M. Long-term survival of patients with breast cancer: A study of the curability of the disease. Brit. Med. J. 1979, 2, 1247–1251. [Google Scholar] [CrossRef] [PubMed]
  17. Ellah, A. Bayesian and non-Bayesian estimation of the inverse Weibull model based on generalized order statistics. Intell Inf Manag 2012, 4, 23–31. [Google Scholar]
  18. Singh, S K; Singh, U; Kumar, D. Bayesian estimation of parameters of inverse Weibull distribution. J Appl Stat 2013, 40, 1597–1607. [Google Scholar] [CrossRef]
  19. Asuman, Y.; Mahmut, K. Reliability estimation and parameter estimation for inverse Weibull distribution under different loss functions. Kuwait J. Sci. 2022, 49, 1–24. [Google Scholar]
  20. Sultan, K.S.; Alsadat, N.H.; Kundu, D. Bayesian and maximum likelihood estimations of the inverse Weibull parameters under progressive type-II censoring. J. Stat. Comput. Sim. 2014, 84, 2248–2265. [Google Scholar] [CrossRef]
  21. Amirzadi, A.; Jamkhaneh, E.B.; Deiri, E. A comparison of estimation methods for reliability function of inverse generalized Weibull distribution under new loss function. J. Stat. Comput. Sim. 2021, 91, 2595–2622. [Google Scholar] [CrossRef]
  22. Peng, X.; Yan, Z. Z. Bayesian estimation and prediction for the inverse Weibull distribution under general progressive censoring, Commu. Stat.- Theor. M. 2016, 45, 621–635. [Google Scholar]
  23. Sindhu, T. N; Feroze, N; Aslam, M. Doubly censored data from two-component mixture of inverse Weibull distributions: Theory and Applications. J. Mod. Appl. Stat. Meth. 2016, 15, 322–349. [Google Scholar] [CrossRef]
  24. Mohammad, F.; Sana, S. Bayesian estimation and prediction for the inverse Weibull distribution based on lower record values. J. Stat. Appl. Probab. 2021, 10, 369–376. [Google Scholar]
  25. Al-Duais, F.S. Bayesian analysis of record statistics from the inverse Weibull distribution under balanced loss function. Math. Probl. Eng. 2021, 2021, 1–9. [Google Scholar] [CrossRef]
  26. Li, C.P.; Hao, H.B. Reliability of a stress-strength model with inverse Weibull distribution. Int. J. Appl. Math. 2017, 47, 302–306. [Google Scholar]
  27. Ismail, A.; Al Tamimi, A. Optimum constant-stress partially accelerated life test plans using type-I censored data from the inverse Weibull distribution. Strength Mater 2017, 49, 847–855. [Google Scholar] [CrossRef]
  28. Kang, S.B.; Han, J.T. The graphical method for goodness of fit test in the inverse Weibull distribution based on multiply type-II censored samples. SpringerPlus 2015, 4, 768. [Google Scholar] [CrossRef] [PubMed]
  29. Saboori, H.; Barmalzan, G.; Ayat, S.M. Generalized modified inverse Weibull distribution: Its properties and applications. Sankhya B 2020, 82, 247–269. [Google Scholar] [CrossRef]
  30. Debasis Kundu; Hatem Howlader. Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Comput Stat Data Anal 2010, 54, 1547–1558. [Google Scholar] [CrossRef]
  31. Sultan, K.S.; Alsadat, N.H.; Kundu, D. Bayesian and maximum likelihood estimations of the inverse Weibull parameters under progressive type-II censoring. J. Stat. Comput. Simul. 2014, 84, 2248–2265. [Google Scholar] [CrossRef]
  32. Mohammad K; Mina A. Estimation of the Inverse Weibull Distribution Parameters under Type-I Hybrid Censoring. Austrian J. Stat. 2021, 50, 38–51. [Google Scholar] [CrossRef]
  33. Ali Algarni; Mohammed Elgarhy; Abdullah M Almarashi; Aisha Fayomi; Ahmed R El-Saeed. Classical and Bayesian Estimation of the Inverse Weibull Distribution: Using Progressive Type-I Censoring Scheme. Adv. Civ. Eng. 2021, 2021, 1–15. [Google Scholar] [CrossRef]
  34. Zhou, J.H.; Luo, Y. Bayes information updating and multiperiod supply chain screening. Int J Prod Econ 2023, 256, 108750–108767. [Google Scholar] [CrossRef]
  35. Yulin, S; Deniz, G; Soung, C L. Bayesian Over-the-Air Computation. IEEE J. Sel. Areas Commun. 2023, 41, 589–606. [Google Scholar] [CrossRef]
  36. Taborsky, P; Vermue, L; Korzepa, M; Morup, M. The Bayesian Cut. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 4111–4124. [Google Scholar] [CrossRef] [PubMed]
  37. Oliver Korup. Bayesian geomorphology. Earth Surf Process Landf. 2021, 46, 151–172. [Google Scholar] [CrossRef]
  38. Ran, E; Kfir, E; Mu, X. S. Bayesian privacy. Theor. Econ. 2021, 16, 1557–1603. [Google Scholar] [CrossRef]
  39. Luo, C.L.; Shen, L.J.; Xu, A.C. Modelling and estimation of system reliability under dynamic operating environments and lifetime ordering constraints. Reliab. Eng. Syst. Saf. 2022, 218, 108136–108145. [Google Scholar] [CrossRef]
  40. Peng, W. W.; Li, Y. F.; Yang, Y. J.; Mi, J. H.; Huang, H.Z. Bayesian Degradation Analysis With Inverse Gaussian Process Models Under Time-Varying Degradation Rates. IEEE Trans Reliab 2017, 66, 84–96. [Google Scholar] [CrossRef]
  41. František, B; Frederik, A; Julia, M. H. Informed Bayesian survival analysis. BMC Medical Res. Methodol. 2022, 22, 238–260. [Google Scholar] [CrossRef]
  42. Liu, F.; Hu, X.G.; Bu, C.Y.; Yu, K. Fuzzy Bayesian Knowledge Tracing. IEEE Trans Fuzzy Syst 2022, 30, 2412–2425. [Google Scholar] [CrossRef]
  43. Xu, B.; Wang, D.H.; Wang, R.T. Estimator of scale parameter in a subclass of the exponential family under symmetric entropy loss. Northeast. Math. J. 2008, 24, 447–457. [Google Scholar]
  44. Li, Q.; Wu, D. Bayesian analysis of Rayleigh distribution under progressive type-II censoring. J. Shanghai Polytech. Univ. 2019, 36, 114–117. [Google Scholar]
  45. Song, L.X.; Chen, Y.S.; Xu, J.M. Bayesian estimation of Poisson distribution parameter under scale squared error loss function. J. Lanzhou Univ. Tech. 2008, 34, 152–154. [Google Scholar]
  46. Bjerkedal, T. Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli. Am. J. Hyg. 1960, 72, 130–148. [Google Scholar] [CrossRef]
  47. Debasis Kundu; Hatem Howlader. Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Comput Stat Data An 2010, 54, 1547–1558. [Google Scholar] [CrossRef]
Figure 1. The curves of the pdf of IWD with respect to different values of parameters.
Table 1. Estimates and MSEs of Shannon entropy ($H_{s0}=1.1727$).

| $n$ | $\hat M_{s1}$ | $\hat M_{s2}$ | $\hat M_{s3}$ | $MSE_{s1}$ | $MSE_{s2}$ | $MSE_{s3}$ |
|-----|--------|--------|--------|--------|--------|--------|
| 10  | 1.0604 | 0.8259 | 0.8914 | 0.1903 | 0.3666 | 0.2065 |
| 20  | 1.1183 | 0.9631 | 0.9683 | 0.0863 | 0.1282 | 0.1186 |
| 30  | 1.1388 | 1.0301 | 1.0076 | 0.0558 | 0.0751 | 0.0766 |
| 40  | 1.1355 | 1.0526 | 1.0292 | 0.0445 | 0.0574 | 0.0592 |
| 50  | 1.1461 | 1.0788 | 1.0379 | 0.0323 | 0.0404 | 0.0472 |
| 60  | 1.1503 | 1.0938 | 1.0506 | 0.0287 | 0.0343 | 0.0399 |
| 70  | 1.1579 | 1.1093 | 1.0646 | 0.0244 | 0.0282 | 0.0334 |
| 80  | 1.1623 | 1.1196 | 1.0694 | 0.0197 | 0.0224 | 0.0284 |
| 90  | 1.1653 | 1.1272 | 1.0803 | 0.0171 | 0.0191 | 0.0256 |
| 100 | 1.1628 | 1.1284 | 1.0777 | 0.0161 | 0.0183 | 0.0244 |
Table 2. Estimates and MSEs of Rényi entropy ($H_{r0}=1.5641$).

| $n$ | $\hat M_{r1}$ | $\hat M_{r2}$ | $\hat M_{r3}$ | $MSE_{r1}$ | $MSE_{r2}$ | $MSE_{r3}$ |
|-----|--------|--------|--------|--------|--------|--------|
| 10  | 1.6681 | 1.7793 | 1.7682 | 0.0525 | 0.1075 | 0.0954 |
| 20  | 1.6056 | 1.6512 | 1.6587 | 0.0178 | 0.0218 | 0.0186 |
| 30  | 1.5999 | 1.6278 | 1.6229 | 0.0129 | 0.0136 | 0.0112 |
| 40  | 1.5903 | 1.6113 | 1.6082 | 0.0103 | 0.0103 | 0.0075 |
| 50  | 1.5829 | 1.5992 | 1.5972 | 0.0072 | 0.0071 | 0.0064 |
| 60  | 1.5809 | 1.5954 | 1.5896 | 0.0055 | 0.0057 | 0.0049 |
| 70  | 1.5765 | 1.5885 | 1.5878 | 0.0046 | 0.0046 | 0.0045 |
| 80  | 1.5781 | 1.5886 | 1.5857 | 0.0044 | 0.0041 | 0.0034 |
| 90  | 1.5752 | 1.5845 | 1.5779 | 0.0038 | 0.0038 | 0.0032 |
| 100 | 1.5731 | 1.5814 | 1.5775 | 0.0032 | 0.0032 | 0.0031 |
Table 3. The coverage probability of $100(1-\alpha)\%$ ACIs with different $\alpha$.

| $n$ | Shannon, $\alpha=0.1$ | Shannon, $\alpha=0.05$ | Rényi, $\alpha=0.1$ | Rényi, $\alpha=0.05$ |
|-----|--------|--------|--------|--------|
| 10  | 0.9637 | 0.9752 | 0.9662 | 0.9791 |
| 20  | 0.9798 | 0.9894 | 0.9789 | 0.9884 |
| 30  | 0.9829 | 0.9916 | 0.9847 | 0.9930 |
| 40  | 0.9839 | 0.9941 | 0.9860 | 0.9953 |
| 50  | 0.9857 | 0.9946 | 0.9894 | 0.9957 |
| 60  | 0.9876 | 0.9947 | 0.9936 | 0.9954 |
| 70  | 0.9875 | 0.9947 | 0.9925 | 0.9965 |
| 80  | 0.9875 | 0.9940 | 0.9929 | 0.9972 |
| 90  | 0.9894 | 0.9955 | 0.9934 | 0.9966 |
| 100 | 0.9865 | 0.9950 | 0.9929 | 0.9971 |
Table 4. The estimates and ACIs of entropies based on the real data set.

| Entropy | ML estimate | Bayes under SE | Bayes under SSE | ACI, $\alpha=0.1$ | ACI, $\alpha=0.05$ |
|---------|-------------|----------------|-----------------|-------------------|--------------------|
| Shannon entropy | 5.6307 | 5.6998 | 4.8706 | (5.1858, 6.0757) | (5.1328, 6.1287) |
| Rényi entropy   | 5.4129 | 4.7280 | 4.8706 | (5.1877, 5.6381) | (5.1609, 5.6649) |