Uncertainty Quantification Based on Residual Tsallis Entropy of Order Statistics

Preprint (not peer-reviewed). Submitted: 21 September 2023; Posted: 22 September 2023.

Abstract
In this paper, we study the properties of the residual Tsallis entropy of order statistics. Order statistics play an important role in reliability engineering, for example in modelling the lifetimes of series and parallel systems. We investigate the residual Tsallis entropy of the i-th order statistic from a continuous distribution function and its deviation from the residual Tsallis entropy of the i-th order statistic from a uniform distribution. In a mathematical framework, we provide a method to express the residual Tsallis entropy of the i-th order statistic from a continuous distribution in terms of the residual Tsallis entropy of the i-th order statistic from a uniform distribution. This approach provides insight into the behavior and properties of the residual Tsallis entropy of order statistics. Further, we study the monotonicity properties of the residual Tsallis entropy of order statistics under different conditions. These properties yield a deeper understanding of the relationship between the position of an order statistic and the resulting residual Tsallis entropy.

1. Introduction

Information theory plays a crucial role in quantifying the uncertainty inherent in random phenomena. Its applications span a variety of areas, as detailed in Shannon's influential work [1]. For a non-negative random variable X with an absolutely continuous cumulative distribution function (cdf) $F(x)$ and probability density function (pdf) $f(x)$, the Tsallis entropy of order $\alpha$, defined in [2], is given by
$$H_\alpha(X) = \frac{1}{1-\alpha}\left(\int_0^\infty f^{\alpha}(x)\,dx - 1\right) = \frac{1}{1-\alpha}\left[E\left(f^{\alpha-1}(F^{-1}(U))\right) - 1\right], \tag{1}$$
for all $\alpha > 0$, $\alpha \neq 1$, where $E(\cdot)$ denotes the expectation, $U$ is uniformly distributed on $(0,1)$, and $F^{-1}(u) = \inf\{x;\ F(x) \geq u\}$, $u \in [0,1]$, denotes the quantile function. In general, the Tsallis entropy can take negative values, but it can be made non-negative by choosing appropriate values of $\alpha$. An important observation is that $H(X) = \lim_{\alpha \to 1} H_\alpha(X)$, which shows the convergence of the Tsallis entropy to the Shannon differential entropy. Unlike the Shannon entropy, which is additive, the Tsallis entropy is non-additive: for two independent random variables X and Y, the Shannon framework gives $H(X,Y) = H(X) + H(Y)$, while in the Tsallis framework $H_\alpha(X,Y) = H_\alpha(X) + H_\alpha(Y) + (1-\alpha)H_\alpha(X)H_\alpha(Y)$. This non-additive nature gives the Tsallis entropy greater flexibility than the Shannon entropy, making it applicable in various areas of information theory, physics, chemistry, and technology.
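As a quick numerical illustration (a minimal sketch of ours, not part of the original development, assuming Python with NumPy/SciPy; the helper name `tsallis_entropy` is hypothetical), the following code evaluates $H_\alpha$ by quadrature for two independent exponential variables and checks the pseudo-additivity identity:

```python
import numpy as np
from scipy.integrate import quad

def tsallis_entropy(pdf, alpha, support=(0.0, np.inf)):
    """H_alpha(X) = (int f^alpha(x) dx - 1) / (1 - alpha); cf. Eq. (1)."""
    integral, _ = quad(lambda x: pdf(x) ** alpha, *support)
    return (integral - 1.0) / (1.0 - alpha)

# Two independent exponentials with rates 1 and 2.
f1 = lambda x: np.exp(-x)
f2 = lambda x: 2.0 * np.exp(-2.0 * x)
alpha = 2.0
hx, hy = tsallis_entropy(f1, alpha), tsallis_entropy(f2, alpha)

# For independent X and Y the joint density factorizes, so
# int f^alpha(x, y) dx dy = (int f1^alpha dx) * (int f2^alpha dy).
hxy = (((1 - alpha) * hx + 1) * ((1 - alpha) * hy + 1) - 1) / (1 - alpha)

# Pseudo-additivity: H(X,Y) = H(X) + H(Y) + (1 - alpha) H(X) H(Y).
assert np.isclose(hxy, hx + hy + (1 - alpha) * hx * hy)
```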
Consider the lifetime of a newly introduced system, denoted by X; the Tsallis entropy $H_\alpha(X)$ serves as a measure of the inherent uncertainty of the system. However, there are cases where the actors know the current age of the system: for example, they know that the system is operational at time t and want to evaluate the uncertainty in the remaining lifetime $X_t = [X - t \mid X > t]$. In such scenarios, the conventional Tsallis entropy $H_\alpha(X)$ no longer provides the desired insight. Therefore, a new measure, the residual Tsallis entropy (RTE), is introduced, defined as follows:
$$H_\alpha(X;t) = \frac{1}{1-\alpha}\left(\int_0^\infty f_t^{\alpha}(x)\,dx - 1\right) = \frac{1}{1-\alpha}\left(\int_t^\infty \left(\frac{f(x)}{S(t)}\right)^{\alpha} dx - 1\right), \tag{2}$$
where
$$f_t(x) = \frac{f(x+t)}{S(t)}, \quad x,\ t > 0,$$
is the pdf of $X_t$, and $S(t) = P(X > t)$ is the survival function of X.
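For concreteness, the definition can be evaluated directly by numerical quadrature. The following sketch (our illustration; the helper name is hypothetical) computes $H_\alpha(X;t)$ from (2); for the standard exponential, memorylessness makes the RTE constant in t, equal to $1/\alpha$ (cf. Example 1 below):

```python
import numpy as np
from scipy.integrate import quad

def residual_tsallis(pdf, sf, t, alpha):
    """H_alpha(X; t) of Eq. (2): (int_t^inf (f(x)/S(t))^alpha dx - 1) / (1 - alpha)."""
    integral, _ = quad(lambda x: (pdf(x) / sf(t)) ** alpha, t, np.inf)
    return (integral - 1.0) / (1.0 - alpha)

# Standard exponential: the RTE does not depend on t.
pdf = lambda x: np.exp(-x)
sf = lambda t: np.exp(-t)
for t in (0.0, 0.5, 2.0):
    print(residual_tsallis(pdf, sf, t, alpha=2.0))  # ~0.5 = 1/alpha each time
```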
Extensive research has been done in the literature to explore various properties and statistical applications of Tsallis entropy. For detailed insights, we recommend the work of Asadi et al. [3], Nanda and Paul [4], Zhang [5], Maasoumi [6], Abe [7], Asadi et al. [8], and the references cited in these works. These sources provide comprehensive discussions on this topic and allow for a deeper understanding of Tsallis entropy in various contexts.
The aim of this paper is to study the properties of the RTE of order statistics. We consider a random sample of size n from a distribution F, denoted by $X_1, X_2, \ldots, X_n$. The order statistics are these sample values arranged from smallest to largest, denoted $X_{1:n} \leq X_{2:n} \leq \cdots \leq X_{n:n}$. Order statistics are very important in many areas of probability and statistics: they can be used to describe probability distributions, to test how well a data set fits a particular model, to control the quality of a product or process, to analyze the reliability of a system or component, and in many other applications.
Moreover, they are indispensable in reliability theory, especially for studying the lifetime properties of coherent systems and performing lifetime tests on data obtained by various censoring methods. For a thorough understanding of the theory and applications of order statistics, we recommend the comprehensive review by David and Nagaraja [9], which covers the topic in depth and provides useful insights into its theoretical and practical aspects. Many researchers have studied the information properties of order statistics and have gained useful insights, which can be found in [10,11,12] and the references therein. Zarezadeh and Asadi [13] studied properties of the residual Rényi entropy of order statistics and record values; see also [14]. Continuing this line of research, our study contributes to the field by investigating the properties of the residual Tsallis entropy of order statistics. More recently, Alomani and Kayid [15] studied properties of the Tsallis entropy of coherent and mixed systems.
This work is structured as follows: In Section 2, we present a representation of the RTE of the order statistic $X_{i:n}$ from a sample drawn from an arbitrary continuous distribution function F, expressed in terms of the RTE of order statistics from a uniform sample. Since closed-form expressions for the RTE of order statistics are often not available for many statistical models, we derive upper and lower bounds to approximate the RTE, and we provide several illustrative examples to demonstrate the practicality and usefulness of these bounds. In addition, we study the monotonicity properties of the RTE of the sample extremes under mild conditions. We find that the RTEs of the extremes of a random sample exhibit monotonic behavior as the number of observations in the sample increases. We then counter this observation with a counterexample demonstrating the non-monotonic behavior of the RTE of other order statistics $X_{i:n}$ with respect to the sample size. To further analyze the monotonic behavior, we examine the RTE of the order statistics $X_{i:n}$ with respect to the index i. Our results show that the RTE of $X_{i:n}$ is not a monotonic function of i over the entire support of F.
Throughout the paper, "$\leq_{st}$" and "$\leq_{lr}$" stand for the usual stochastic and likelihood ratio orders, respectively; for more details on these orderings, we refer the reader to Shaked and Shanthikumar [16].

2. Residual Tsallis Entropy of Order Statistics

Hereafter, we present an expression for the RTE of order statistics in relation to the RTE of order statistics from a uniform distribution. Consider the pdf and the survival function of $X_{i:n}$, denoted by $f_{i:n}(x)$ and $S_{i:n}(x)$, respectively, for $i = 1, \ldots, n$. It holds that
$$f_{i:n}(x) = \frac{1}{B(i,\ n-i+1)}\, F^{i-1}(x)\, S^{n-i}(x)\, f(x), \quad x > 0, \tag{3}$$
$$S_{i:n}(x) = \sum_{k=0}^{i-1} \binom{n}{k} \left(1 - S(x)\right)^{k} S^{\,n-k}(x), \quad x > 0, \tag{4}$$
where
$$B(a,b) = \int_0^1 x^{a-1}(1-x)^{b-1}\,dx, \quad a > 0,\ b > 0,$$
is known as the complete beta function; see e.g., David and Nagaraja [9]. Furthermore, we can express the survival function S i : n ( x ) as follows:
$$S_{i:n}(x) = \frac{\bar{B}_{F(x)}(i,\ n-i+1)}{B(i,\ n-i+1)}, \tag{5}$$
where
$$\bar{B}_x(a,b) = \int_x^1 u^{a-1}(1-u)^{b-1}\,du, \quad 0 < x < 1,$$
is known as the incomplete beta function. In this section, we adopt the notation $Y \sim \bar{B}_t(a,b)$ to denote that the random variable Y follows a truncated beta distribution with pdf given by:
$$f_Y(y) = \frac{1}{\bar{B}_t(a,b)}\, y^{a-1}(1-y)^{b-1}, \quad t \leq y \leq 1.$$
The paper is concerned with the residual Tsallis entropy of the random variable $X_{i:n}$, which measures the uncertainty about the predictability of the residual lifetime of the system contained in the density of $[X_{i:n} - t \mid X_{i:n} > t]$. In reliability engineering, $(n-i+1)$-out-of-$n$ systems are an important type of structure: an $(n-i+1)$-out-of-$n$ system functions if and only if at least $n-i+1$ of its n components function. We consider a system consisting of independent and identically distributed components whose lifetimes are represented by $X_1, X_2, \ldots, X_n$. The lifetime of the whole system is then the order statistic $X_{i:n}$, where i denotes the position of the order statistic. The case $i = 1$ corresponds to a series system, while $i = n$ represents a parallel system. In the context of $(n-i+1)$-out-of-$n$ systems operating at time t, the RTE of $X_{i:n}$ serves as a measure of the entropy associated with the remaining lifetime of the system. This dynamic entropy measure provides system designers with valuable information about the entropy of $(n-i+1)$-out-of-$n$ systems in operation at a given time t.
To increase computational efficiency, we introduce a lemma that establishes a relationship between the RTE of order statistics from a uniform distribution and the incomplete beta function. This relation is crucial from a practical point of view and allows for a more convenient computation of RTE. The proof of this lemma, which follows directly from the definition of RTE, is omitted here because it involves simple computations.
Lemma 1. 
Suppose we have a random sample of size n from a uniform distribution on (0,1), and arrange the sample values in ascending order, where $U_{i:n}$ denotes the i-th order statistic. Then
$$H_\alpha(U_{i:n};t) = \frac{1}{1-\alpha}\left(\frac{\bar{B}_t(\alpha(i-1)+1,\ \alpha(n-i)+1)}{\bar{B}_t^{\alpha}(i,\ n-i+1)} - 1\right), \quad 0 < t < 1, \tag{6}$$
for all $\alpha > 0$, $\alpha \neq 1$.
Using this lemma, researchers and practitioners can easily compute the RTE of order statistics from a uniform distribution using the well-known incomplete beta function. This computational simplification improves the applicability and usability of the RTE in various contexts. In Figure 1, we show the plot of $H_\alpha(U_{i:n};t)$ for different values of $\alpha$ and $i = 1, 2, \ldots, 5$ when the total number of observations is $n = 5$. The figure shows that there is no inherent monotonicity between the order statistics. However, in the following analysis (Lemma 3), we establish conditions under which a monotonic relationship holds between the index i and the number of components in the system. This lemma provides valuable insight into the arrangement of the system components and the resulting effect on the reliability of the system.
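As a sketch of this computational route (our own illustration; in SciPy, `betainc(a, b, t)` is the regularized incomplete beta function $I_t(a,b)$, so $\bar{B}_t(a,b) = B(a,b)\,(1 - I_t(a,b))$):

```python
import numpy as np
from scipy.special import beta, betainc

def upper_inc_beta(t, a, b):
    """B-bar_t(a, b) = int_t^1 u^(a-1) (1-u)^(b-1) du."""
    return beta(a, b) * (1.0 - betainc(a, b, t))

def rte_uniform_os(i, n, t, alpha):
    """H_alpha(U_{i:n}; t) of Lemma 1, Eq. (6)."""
    num = upper_inc_beta(t, alpha * (i - 1) + 1, alpha * (n - i) + 1)
    den = upper_inc_beta(t, i, n - i + 1) ** alpha
    return (num / den - 1.0) / (1.0 - alpha)

# Values of the kind plotted in Figure 1 (n = 5, alpha = 2):
for i in range(1, 6):
    print(i, rte_uniform_os(i, n=5, t=0.2, alpha=2.0))
```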
The upcoming theorem establishes a relationship between the RTE of the order statistic $X_{i:n}$ and the RTE of order statistics from a uniform distribution.
Theorem 1. 
The residual Tsallis entropy of $X_{i:n}$, for all $\alpha > 0$, $\alpha \neq 1$, can be expressed as follows:
$$H_\alpha(X_{i:n};t) = \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right) E\left[f^{\alpha-1}(F^{-1}(Y_i))\right] - 1\right), \quad t > 0, \tag{7}$$
where $Y_i \sim \bar{B}_{F(t)}(\alpha(i-1)+1,\ \alpha(n-i)+1)$.
Proof. 
Using the change of variable $u = F(x)$ and Equations (2), (3), and (5), we get
$$
\begin{aligned}
H_\alpha(X_{i:n};t) &= \frac{1}{1-\alpha}\left(\int_t^{\infty}\left(\frac{f_{i:n}(x)}{S_{i:n}(t)}\right)^{\alpha} dx - 1\right) \\
&= \frac{1}{1-\alpha}\left(\int_t^{\infty}\left(\frac{F^{i-1}(x)\,S^{n-i}(x)\,f(x)}{\bar{B}_{F(t)}(i,\ n-i+1)}\right)^{\alpha} dx - 1\right) \\
&= \frac{1}{1-\alpha}\left(\frac{\bar{B}_{F(t)}(\alpha(i-1)+1,\ \alpha(n-i)+1)}{\bar{B}^{\alpha}_{F(t)}(i,\ n-i+1)} \int_t^{\infty} \frac{F^{\alpha(i-1)}(x)\,S^{\alpha(n-i)}(x)\,f^{\alpha}(x)}{\bar{B}_{F(t)}(\alpha(i-1)+1,\ \alpha(n-i)+1)}\,dx - 1\right) \\
&= \frac{1}{1-\alpha}\left(\frac{\bar{B}_{F(t)}(\alpha(i-1)+1,\ \alpha(n-i)+1)}{\bar{B}^{\alpha}_{F(t)}(i,\ n-i+1)} \int_{F(t)}^{1} \frac{u^{\alpha(i-1)}(1-u)^{\alpha(n-i)}\,f^{\alpha-1}(F^{-1}(u))}{\bar{B}_{F(t)}(\alpha(i-1)+1,\ \alpha(n-i)+1)}\,du - 1\right) \\
&= \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right) E\left[f^{\alpha-1}(F^{-1}(Y_i))\right] - 1\right), \quad t > 0.
\end{aligned}
$$
The last equality is obtained from Lemma 1 and this completes the proof. □
Some calculation shows that, as the order $\alpha$ tends to unity in (7), the residual Shannon entropy of the i-th order statistic from a sample from F is obtained:
$$H(X_{i:n};t) = H(U_{i:n};F(t)) - E\left[\log f(F^{-1}(Y_i))\right], \tag{8}$$
where $Y_i \sim \bar{B}_{F(t)}(i,\ n-i+1)$. The specialized version of this result for $t = 0$ was already obtained by Ebrahimi et al. [12].
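The limit can be sketched as follows. As $\alpha \to 1$, the truncated beta variable $Y_i$ in (7) reduces to $\bar{B}_{F(t)}(i,\ n-i+1)$, and to first order in $\alpha - 1$,
$$E\left[f^{\alpha-1}(F^{-1}(Y_i))\right] = 1 + (\alpha-1)\,E\left[\log f(F^{-1}(Y_i))\right] + o(\alpha-1),$$
$$(1-\alpha)H_\alpha(U_{i:n};F(t)) + 1 = 1 + (1-\alpha)H(U_{i:n};F(t)) + o(\alpha-1).$$
Multiplying these expansions, subtracting 1, and dividing by $1-\alpha$ in (7), the terms of order $\alpha - 1$ yield (8). Below, we provide an example for illustration.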
Example 1. 
Suppose that X has a standard exponential distribution with unit mean. Then $f(F^{-1}(u)) = 1-u$, $0 < u < 1$, and we have
$$E\left[f^{\alpha-1}(F^{-1}(Y_i))\right] = \frac{\bar{B}_{1-e^{-t}}(\alpha(i-1)+1,\ \alpha(n-i+1))}{\bar{B}_{1-e^{-t}}(\alpha(i-1)+1,\ \alpha(n-i)+1)}. \tag{9}$$
Thus, from (7), we obtain
$$H_\alpha(X_{i:n};t) = \frac{1}{1-\alpha}\left(\frac{\bar{B}_{1-e^{-t}}(\alpha(i-1)+1,\ \alpha(n-i+1))}{\bar{B}^{\alpha}_{1-e^{-t}}(i,\ n-i+1)} - 1\right), \quad i = 1, 2, \ldots, n. \tag{10}$$
In Figure 2, we plot $H_\alpha(X_{i:n};t)$ for some values of $\alpha$ and $i = 1, 2, \ldots, 5$ when $n = 5$. When $i = 1$, we can use (10) to get
$$H_\alpha(X_{1:n};t) = \frac{n^{\alpha-1} - \alpha}{\alpha(1-\alpha)}, \quad t > 0.$$
Also, we know that
$$H_\alpha(X;t) = \frac{1-\alpha}{\alpha(1-\alpha)} = \frac{1}{\alpha}, \quad t > 0.$$
Therefore, we have
$$H_\alpha(X_{1:n};t) - H_\alpha(X;t) = \frac{n^{\alpha-1} - 1}{\alpha(1-\alpha)}, \quad t > 0.$$
This finding reveals an intriguing characteristic: the discrepancy between the RTE of the lifetime of a series system and the RTE of each component is not influenced by time. Instead, it solely relies on two factors: the number of components within the system and the parameter α in the exponential case.
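A short numerical check of this example (a sketch of ours, reusing the hypothetical `upper_inc_beta` helper given after Lemma 1):

```python
import numpy as np
from scipy.special import beta, betainc

def upper_inc_beta(t, a, b):
    return beta(a, b) * (1.0 - betainc(a, b, t))

def rte_exp_os(i, n, t, alpha):
    """H_alpha(X_{i:n}; t) for the standard exponential, Eq. (10)."""
    p = 1.0 - np.exp(-t)  # F(t)
    num = upper_inc_beta(p, alpha * (i - 1) + 1, alpha * (n - i + 1))
    den = upper_inc_beta(p, i, n - i + 1) ** alpha
    return (num / den - 1.0) / (1.0 - alpha)

n, alpha = 5, 2.0
# For i = 1 the RTE is time-free: (n^(alpha-1) - alpha) / (alpha (1 - alpha)).
closed = (n ** (alpha - 1) - alpha) / (alpha * (1.0 - alpha))
for t in (0.1, 0.7, 3.0):
    assert np.isclose(rte_exp_os(1, n, t, alpha), closed)
```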
While we have obtained a closed-form expression for the RTE of the first order statistic in the exponential distribution, the task becomes significantly more difficult for higher order statistics and for many other distributions: closed-form expressions for the RTE of higher order statistics are generally not available. Given this limitation, we are motivated to explore alternative approaches to characterizing the RTE of order statistics, and we propose to establish bounds for it. To this end, we present the following theorem, which provides insight into the nature of these bounds and their applicability in practical scenarios.
Theorem 2. 
Consider a non-negative continuous random variable X with pdf f and cdf F. Denote the RTEs of X and of the i-th order statistic $X_{i:n}$ by $H_\alpha(X;t)$ and $H_\alpha(X_{i:n};t)$, respectively.
(a)
Let $M_i = f_{Y_i}(m_i)$, where $m_i = \max\left\{F(t),\ \frac{i-1}{n-1}\right\}$ is the mode of the distribution of $Y_i$. Then for $\alpha > 1$ ($0 < \alpha < 1$), we have
$$H_\alpha(X_{i:n};t) \geq (\leq)\ \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\left((1-\alpha)H_\alpha(X;t) + 1\right) M_i\, S^{\alpha}(t) - 1\right).$$
(b)
Suppose that $M = f(m) < \infty$, where m is the mode of the pdf f, so that $f(x) \leq M$. Then, for any $\alpha > 0$, we obtain
$$H_\alpha(X_{i:n};t) \geq \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right) M^{\alpha-1} - 1\right).$$
Proof. 
(a) By applying Theorem 1, we only need to find a bound for $E[f^{\alpha-1}(F^{-1}(Y_i))]$. To this aim, for $\alpha > 1$ ($0 < \alpha < 1$) we have
$$
\begin{aligned}
E\left[f^{\alpha-1}(F^{-1}(Y_i))\right] &= \int_{F(t)}^{1} \frac{u^{\alpha(i-1)}(1-u)^{\alpha(n-i)}}{\bar{B}_{F(t)}(\alpha(i-1)+1,\ \alpha(n-i)+1)}\, f^{\alpha-1}(F^{-1}(u))\,du \\
&\leq M_i \int_{F(t)}^{1} f^{\alpha-1}(F^{-1}(u))\,du = M_i \int_t^{\infty} f^{\alpha}(x)\,dx \\
&= M_i \left[(1-\alpha)H_\alpha(X;t) + 1\right] S^{\alpha}(t).
\end{aligned}
$$
The result now is easily obtained by recalling (7).
(b) Since for $\alpha > 1$ ($0 < \alpha < 1$) it holds that
$$f^{\alpha-1}(F^{-1}(u)) \leq (\geq)\ M^{\alpha-1},$$
one can write
$$E\left[f^{\alpha-1}(F^{-1}(Y_i))\right] \leq (\geq)\ M^{\alpha-1}.$$
The result now is easily obtained from relation (7) and this completes the proof. □
The theorem has two parts. Part (a) establishes a lower bound for the RTE of $X_{i:n}$ when $\alpha > 1$; for $0 < \alpha < 1$ it becomes an upper bound. This bound is formed using the incomplete beta function together with the RTE of the original distribution. Part (b) introduces a lower bound for $H_\alpha(X_{i:n};t)$ expressed via the RTE of order statistics from a uniform distribution and the mode m of the underlying distribution. This provides interesting insight into the information properties of $X_{i:n}$ and a quantifiable lower bound on the RTE in terms of the mode of the distribution. Table 1 collects these bounds for several common densities.
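To make Part (a) concrete, the following sketch (ours; the function names are illustrative) evaluates the bound for the standard exponential distribution, where $H_\alpha(X;t) = 1/\alpha$; for $\alpha > 1$ the returned value is a lower bound on the exact RTE of Example 1.

```python
import numpy as np
from scipy.special import beta, betainc

def upper_inc_beta(t, a, b):
    return beta(a, b) * (1.0 - betainc(a, b, t))

def rte_uniform_os(i, n, t, alpha):
    num = upper_inc_beta(t, alpha * (i - 1) + 1, alpha * (n - i) + 1)
    den = upper_inc_beta(t, i, n - i + 1) ** alpha
    return (num / den - 1.0) / (1.0 - alpha)

def bound_part_a(i, n, t, alpha, F, S, rte_X):
    """Part (a) bound on H_alpha(X_{i:n}; t); a lower bound when alpha > 1."""
    a, b = alpha * (i - 1) + 1, alpha * (n - i) + 1
    m_i = max(F(t), (i - 1) / (n - 1))           # mode of the truncated beta Y_i
    M_i = m_i ** (a - 1) * (1 - m_i) ** (b - 1) / upper_inc_beta(F(t), a, b)
    A = (1 - alpha) * rte_uniform_os(i, n, F(t), alpha) + 1
    B = (1 - alpha) * rte_X + 1
    return (A * B * M_i * S(t) ** alpha - 1.0) / (1.0 - alpha)

# Standard exponential, alpha = 2 > 1.
F = lambda t: 1.0 - np.exp(-t)
S = lambda t: np.exp(-t)
print(bound_part_a(3, 5, 1.0, 2.0, F, S, rte_X=0.5))  # lower bound on H_2(X_{3:5}; 1)
```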
We now turn to the monotonic behavior of the RTE of order statistics. To lay the groundwork for our subsequent findings, we first introduce a central lemma that plays a fundamental role in our analysis.
Lemma 2. 
Consider two non-negative functions $q(x)$ and $s_\beta(x)$, where $q(x)$ is increasing. Let t and c be real numbers such that $0 \leq t < c < \infty$. Additionally, assume that the random variable $Z_\beta$, $\beta > 0$, has pdf
$$f_\beta(z) = \frac{q^{r\beta}(z)\, s_\beta(z)}{\int_t^c q^{r\beta}(x)\, s_\beta(x)\,dx}, \quad z \in (t, c). \tag{11}$$
Let r be real-valued and define the function $K_\alpha$ as follows:
$$K_\alpha(r) = \frac{1}{1-\alpha}\left(\frac{\int_t^c q^{r\alpha}(x)\, s_\alpha(x)\,dx}{\left(\int_t^c q^{r}(x)\, s_1(x)\,dx\right)^{\alpha}} - 1\right), \quad \alpha > 0,\ \alpha \neq 1. \tag{12}$$
(i)
If for $\alpha > 1$ ($0 < \alpha < 1$), $Z_\alpha \leq_{st} (\geq_{st})\ Z_1$, then $K_\alpha(r)$ is an increasing function of r.
(ii)
If for $\alpha > 1$ ($0 < \alpha < 1$), $Z_\alpha \geq_{st} (\leq_{st})\ Z_1$, then $K_\alpha(r)$ is a decreasing function of r.
Proof. 
We prove only Part (i), since the proof of Part (ii) is similar. Assuming that $K_\alpha(r)$ is differentiable with respect to r, we obtain
$$\frac{\partial K_\alpha(r)}{\partial r} = \frac{1}{1-\alpha}\, \frac{\partial g_\alpha(r)}{\partial r},$$
where
$$g_\alpha(r) = \frac{\int_t^c q^{r\alpha}(x)\, s_\alpha(x)\,dx}{\left(\int_t^c q^{r}(x)\, s_1(x)\,dx\right)^{\alpha}}.$$
It is evident that
$$
\begin{aligned}
\frac{\partial g_\alpha(r)}{\partial r} &= \frac{\alpha}{\left(\int_t^c q^{r}(x)\, s_1(x)\,dx\right)^{\alpha+1}} \Bigg(\int_t^c \log q(x)\, q^{r\alpha}(x)\, s_\alpha(x)\,dx \int_t^c q^{r}(x)\, s_1(x)\,dx \\
&\qquad\qquad - \int_t^c \log q(x)\, q^{r}(x)\, s_1(x)\,dx \int_t^c q^{r\alpha}(x)\, s_\alpha(x)\,dx\Bigg) \\
&= \frac{\alpha \int_t^c q^{r}(x)\, s_1(x)\,dx \int_t^c q^{r\alpha}(x)\, s_\alpha(x)\,dx}{\left(\int_t^c q^{r}(x)\, s_1(x)\,dx\right)^{\alpha+1}} \left(E[\log q(Z_\alpha)] - E[\log q(Z_1)]\right) \leq (\geq)\ 0.
\end{aligned}
\tag{13}
$$
Using the fact that $Z_\alpha \leq_{st} (\geq_{st})\ Z_1$ and that $\log q(\cdot)$ is an increasing function, one can show that $E[\log q(Z_\alpha)] \leq (\geq)\ E[\log q(Z_1)]$. This implies that (13) is non-positive (non-negative), and hence $K_\alpha(r)$ is an increasing function of r. □
Corollary 1. 
Under the assumptions of Lemma 2, it can be proven that when q ( x ) is decreasing, the following holds:
(i)
If for $\alpha > 1$ ($0 < \alpha < 1$), $Z_\alpha \leq_{st} (\geq_{st})\ Z_1$, then $K_\alpha(r)$ is a decreasing function of r.
(ii)
If for $\alpha > 1$ ($0 < \alpha < 1$), $Z_\alpha \geq_{st} (\leq_{st})\ Z_1$, then $K_\alpha(r)$ is an increasing function of r.
Due to Lemma 2, we can prove the following lemma for $(n-i+1)$-out-of-$n$ systems with components having uniform distributions.
Lemma 3. 
(i)
When considering a parallel (series) system consisting of n components with a uniform distribution over the unit interval, the RTE of the system lifetime decreases as the number of components increases.
(ii)
If $i_1 \leq i_2 \leq n$ are integers, then $H_\alpha(U_{i_1:n};t) \leq H_\alpha(U_{i_2:n};t)$ for $t \geq \frac{i_2-1}{n-1}$.
Proof. 
(i) We give the proof for a parallel system; an analogous argument, using Corollary 1, establishes the result for a series system. From Lemma 1, we get
$$H_\alpha(U_{n:n};t) = \frac{1}{1-\alpha}\left(\frac{\int_t^1 x^{\alpha(n-1)}\,dx}{\left(\int_t^1 x^{n-1}\,dx\right)^{\alpha}} - 1\right), \quad 0 < t < 1.$$
So $H_\alpha(U_{n:n};t)$ can be written in the form (12) with $q(x) = x$, $s_\alpha(x) = x^{-\alpha}$, and $r = n$. We assume, without loss of generality, that $n \geq 1$ is a continuous variable. For $\alpha > 1$ ($0 < \alpha < 1$), the ratio
$$\frac{\int_t^1 x^{\alpha(n-1)}\,dx}{\int_t^1 x^{n-1}\,dx}$$
is increasing (decreasing) in t, and hence we can establish the inequality
$$Z_\alpha \geq_{st} (\leq_{st})\ Z_1,$$
where the pdf of $Z_\beta$, $\beta > 0$, is defined in equation (11). By applying Part (ii) of Lemma 2, we deduce that the RTE of the parallel system is decreasing as the number of components increases.
(ii) To begin, we observe that
$$
\begin{aligned}
H_\alpha(U_{i:n};t) &= \frac{1}{1-\alpha}\left(\frac{\int_t^1 x^{\alpha(i-1)}(1-x)^{\alpha(n-i)}\,dx}{\left(\int_t^1 x^{i-1}(1-x)^{n-i}\,dx\right)^{\alpha}} - 1\right) \\
&= \frac{1}{1-\alpha}\left(\frac{\int_t^1 \left(\frac{x}{1-x}\right)^{\alpha i}(1-x)^{n\alpha}\,x^{-\alpha}\,dx}{\left(\int_t^1 \left(\frac{x}{1-x}\right)^{i}(1-x)^{n}\,x^{-1}\,dx\right)^{\alpha}} - 1\right).
\end{aligned}
$$
Furthermore, the pdf of $Z_\alpha$, as stated in (11), is expressed as
$$f_\alpha(z) = \frac{\left(\frac{z}{1-z}\right)^{\alpha i}(1-z)^{n\alpha}\,z^{-\alpha}}{\int_t^1 \left(\frac{x}{1-x}\right)^{\alpha i}(1-x)^{n\alpha}\,x^{-\alpha}\,dx}, \quad z \in (t, 1),$$
where in this context $q(x) = \frac{x}{1-x}$ and $s_\alpha(x) = (1-x)^{n\alpha}\,x^{-\alpha}$. Hence, it is apparent that for $1 \geq z \geq t \geq \frac{i_2-1}{n-1}$ and $\alpha > 1$ (or $0 < \alpha < 1$), we can write
$$Z_\alpha \leq_{st} (\geq_{st})\ Z_1.$$
In conclusion, it can be inferred from Part (i) of Lemma 2 that, for $i_1 \leq i_2 \leq n$,
$$H_\alpha(U_{i_1:n};t) \leq H_\alpha(U_{i_2:n};t), \quad t \geq \frac{i_2-1}{n-1},$$
and this signifies the end of the proof. □
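Part (i) is easy to observe numerically; a one-line check (a sketch assuming the `rte_uniform_os` helper given after Lemma 1):

```python
# Parallel system with uniform components: the RTE decreases with n (alpha = 2, t = 0.5).
print([round(rte_uniform_os(n, n, t=0.5, alpha=2.0), 4) for n in range(1, 6)])
# approximately -1.0, -1.0741, -1.2776, ... : monotonically decreasing in n.
```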
Theorem 3. 
Consider a parallel (series) system consisting of n independent and identically distributed random variables $X_1, \ldots, X_n$ representing the lifetimes of the components. Assume that the common distribution function F has a pdf f that is increasing (decreasing) on its support. Then the RTE of the system lifetime is decreasing in n.
Proof. 
Assume that $Y_n \sim \bar{B}_{F(t)}(\alpha(n-1)+1,\ 1)$ and let $f_{Y_n}(y)$ denote the pdf of $Y_n$. It is evident that
$$\frac{f_{Y_{n+1}}(y)}{f_{Y_n}(y)} = \frac{\bar{B}_{F(t)}(\alpha(n-1)+1,\ 1)}{\bar{B}_{F(t)}(\alpha n + 1,\ 1)}\, y^{\alpha}, \quad F(t) < y < 1,$$
is increasing in y. This implies that $Y_n \leq_{lr} Y_{n+1}$ and therefore $Y_n \leq_{st} Y_{n+1}$. Moreover, for $\alpha > 1$ ($0 < \alpha < 1$), $f^{\alpha-1}(F^{-1}(x))$ is an increasing (decreasing) function of x. Therefore
$$E\left[f^{\alpha-1}(F^{-1}(Y_n))\right] \leq (\geq)\ E\left[f^{\alpha-1}(F^{-1}(Y_{n+1}))\right].$$
From Theorem 1, for $\alpha > 1$ ($0 < \alpha < 1$), we have
$$
\begin{aligned}
(1-\alpha)H_\alpha(X_{n:n};t) + 1 &= \left[(1-\alpha)H_\alpha(U_{n:n};F(t)) + 1\right] E\left[f^{\alpha-1}(F^{-1}(Y_n))\right] \\
&\leq (\geq)\ \left[(1-\alpha)H_\alpha(U_{n:n};F(t)) + 1\right] E\left[f^{\alpha-1}(F^{-1}(Y_{n+1}))\right] \\
&\leq (\geq)\ \left[(1-\alpha)H_\alpha(U_{n+1:n+1};F(t)) + 1\right] E\left[f^{\alpha-1}(F^{-1}(Y_{n+1}))\right] \\
&= (1-\alpha)H_\alpha(X_{n+1:n+1};t) + 1.
\end{aligned}
$$
The first inequality is obtained by noting that $(1-\alpha)H_\alpha(U_{n:n};F(t)) + 1$ is non-negative. The last inequality is obtained from Part (i) of Lemma 3. Thus, one can conclude that $H_\alpha(X_{n:n};t) \geq H_\alpha(X_{n+1:n+1};t)$ for all $t > 0$. □
Many distributions have decreasing pdfs, such as the exponential distribution, the Pareto distribution, and mixtures of the two; others have increasing pdfs, such as the power distribution. Using Part (i) of Corollary 1, the analogous result can be proved for distributions whose pdfs are monotone. Some care is needed, however, because the result does not extend to all kinds of $(n-i+1)$-out-of-$n$ systems, as the next example shows.
Example 2. 
Consider a system that works if at least $n-1$ of its n components work. Then the system's lifetime is the second smallest component lifetime, $X_{2:n}$. The components are identically distributed, uniform on (0,1). Figure 3 shows how the RTE of $X_{2:n}$ changes with n when $\alpha = 2$ and $t = 0.02$. The graph shows that the RTE of the system does not always decrease as n increases; for example, $H_\alpha(X_{2:2};0.02)$ is less than $H_\alpha(X_{2:3};0.02)$.
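This counterexample is easy to reproduce with Lemma 1 (a sketch assuming the `rte_uniform_os` helper given after Lemma 1):

```python
# Uniform components, i = 2, alpha = 2, t = 0.02 (cf. Figure 3):
for n in range(2, 7):
    print(n, rte_uniform_os(2, n, t=0.02, alpha=2.0))
# The sequence rises from n = 2 to n = 3 before decreasing, so the RTE of
# X_{2:n} is not monotone in n.
```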
In reliability theory, a natural case in which the pdf decreases, and thus the RTE of a series system decreases as the number of components grows, is a lifetime model whose failure rate $h(t) = f(t)/S(t)$ decreases with time: the corresponding density function is then also decreasing. This is the case, for example, for the Weibull distribution with shape parameter less than one and for the gamma distribution with shape parameter less than one. Therefore, the RTE of a series system with components following these distributions becomes smaller as the number of components increases.
We now examine how the RTE of the order statistic $X_{i:n}$ changes with i, using Part (ii) of Lemma 3, which orders the RTEs of $U_{i:n}$ in terms of i.
Theorem 4. 
Suppose X is a non-negative continuous random variable with distribution function F and pdf f, where f is decreasing on the support of X. Let $i_1$ and $i_2$ be integers such that $i_1 \leq i_2 \leq n$. Then $H_\alpha(X_{i_1:n};t) \leq H_\alpha(X_{i_2:n};t)$ for all $t \geq F^{-1}\left(\frac{i_2-1}{n-1}\right)$.
Proof. 
For $i_1 \leq i_2 \leq n$, it is easy to verify that $Y_{i_1} \leq_{lr} Y_{i_2}$ and hence $Y_{i_1} \leq_{st} Y_{i_2}$. Now, we have
$$
\begin{aligned}
(1-\alpha)H_\alpha(X_{i_1:n};t) + 1 &= \left[(1-\alpha)H_\alpha(U_{i_1:n};F(t)) + 1\right] E\left[f^{\alpha-1}(F^{-1}(Y_{i_1}))\right] \\
&\geq (\leq)\ \left[(1-\alpha)H_\alpha(U_{i_1:n};F(t)) + 1\right] E\left[f^{\alpha-1}(F^{-1}(Y_{i_2}))\right] \\
&\geq (\leq)\ \left[(1-\alpha)H_\alpha(U_{i_2:n};F(t)) + 1\right] E\left[f^{\alpha-1}(F^{-1}(Y_{i_2}))\right] \\
&= (1-\alpha)H_\alpha(X_{i_2:n};t) + 1.
\end{aligned}
$$
Using Part (ii) of Lemma 3 and the same reasoning as in the proof of Theorem 3, we can obtain the result. □
The following example shows that the condition $t \geq F^{-1}\left(\frac{i_2-1}{n-1}\right)$ cannot be dropped from the theorem.
Example 3. 
Assume that the survival function of X is
$$S(x) = \frac{1}{(1+x)^2}, \quad x > 0.$$
Figure 4 shows how the RTE of the order statistics $X_{i:5}$, for $i = 3, 4$ and $\alpha = 2$, changes with $t \in (0, 5)$. The plots show that the RTE of the order statistics is not always monotone in i for all values of t; for example, for $t < F^{-1}\left(\frac{3}{4}\right)$ the RTE is not monotonic in i.
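The figure is straightforward to reproduce by quadrature via the representation in the proof of Theorem 1 (a sketch of ours; `rte_os` and `qf_pdf` are hypothetical names, and $f(F^{-1}(u)) = 2(1-u)^{3/2}$ for this model):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, betainc

def upper_inc_beta(t, a, b):
    return beta(a, b) * (1.0 - betainc(a, b, t))

def rte_os(i, n, t, alpha, F, qf_pdf):
    """H_alpha(X_{i:n}; t) via the u-integral in the proof of Theorem 1;
    qf_pdf(u) must return f(F^{-1}(u))."""
    p = F(t)
    g = lambda u: u ** (alpha * (i - 1)) * (1 - u) ** (alpha * (n - i)) * qf_pdf(u) ** (alpha - 1)
    num, _ = quad(g, p, 1)
    den = upper_inc_beta(p, i, n - i + 1) ** alpha
    return (num / den - 1.0) / (1.0 - alpha)

# S(x) = (1+x)^(-2), so F(t) = 1 - (1+t)^(-2) and f(F^{-1}(u)) = 2 (1-u)^(3/2).
F = lambda t: 1.0 - (1.0 + t) ** (-2)
qf_pdf = lambda u: 2.0 * (1.0 - u) ** 1.5
for t in (0.5, 1.0, 3.0):  # note F^{-1}(3/4) = 1
    print(t, rte_os(3, 5, t, 2.0, F, qf_pdf), rte_os(4, 5, t, 2.0, F, qf_pdf))
# For t >= 1, Theorem 4 guarantees the i = 3 value does not exceed the i = 4
# value; for smaller t the ordering can fail, as Figure 4 shows.
```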
We can get a useful result from Theorem 4.
Corollary 2. 
Suppose X is a non-negative continuous random variable with cdf F and pdf f, where f is decreasing on the support of X. Let i be an integer with $i \leq \frac{n+1}{2}$. Then the RTE of $X_{i:n}$ is increasing in i for values of t greater than the median of the distribution.
Proof. 
Suppose $i_1 \leq i_2 \leq \frac{n+1}{2}$. This implies that
$$m \geq F^{-1}\left(\frac{i_2-1}{n-1}\right),$$
where $m = F^{-1}\left(\frac{1}{2}\right)$ is the median of F. By Theorem 4, for $t \geq m$ we get $H_\alpha(X_{i_1:n};t) \leq H_\alpha(X_{i_2:n};t)$. □

3. Conclusion

This article addresses the concept of RTE for order statistics. We have presented a new approach to express the RTE of order statistics from a continuous distribution in terms of the RTE of order statistics from a uniform distribution. This connection provides valuable insight into the properties and behavior of RTE for different distributions. Furthermore, due to the complexity of obtaining closed-form expressions for RTE of order statistics, we have derived bounds that provide practical approximations and allow a better understanding of their properties. These bounds serve as useful tools for analyzing and comparing RTE values in different scenarios. In addition, we examined the influence of the index of order statistics, denoted i, and the total number of observations, denoted n, on RTE. By examining the changes in RTE with respect to i and n, we gained a deeper understanding of the relationship between the position of the order statistic and the entropy of the overall distribution. To validate our results and demonstrate the applicability of our approach, we have provided illustrative examples. These examples demonstrate the practical implications of RTE for order statistics and illustrate the versatility of our methodology for different distributions. In summary, this study contributes to the understanding of RTE for order statistics by establishing relationships, deriving bounds, and investigating the influence of index and sample size. The results of this work provide valuable insights for researchers and practitioners working in the field of statistical inference and entropy-based analysis.

Acknowledgments

The authors acknowledge financial support from the Researchers Supporting Project number (RSP2023R464), King Saud University, Riyadh, Saudi Arabia.

References

1. Shannon, C.E. A mathematical theory of communication. The Bell System Technical Journal 1948, 27, 379–423.
2. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics 1988, 52, 479–487.
3. Asadi, M.; Ebrahimi, N.; Soofi, E.S. Dynamic generalized information measures. Statistics & Probability Letters 2005, 71, 85–98.
4. Nanda, A.K.; Paul, P. Some results on generalized residual entropy. Information Sciences 2006, 176, 27–47.
5. Zhang, Z. Uniform estimates on the Tsallis entropies. Letters in Mathematical Physics 2007, 80, 171–181.
6. Maasoumi, E. The measurement and decomposition of multi-dimensional inequality. Econometrica 1986, 991–997.
7. Abe, S. Axioms and uniqueness theorem for Tsallis entropy. Physics Letters A 2000, 271, 74–79.
8. Asadi, M.; Ebrahimi, N.; Soofi, E.S. Connections of Gini, Fisher, and Shannon by Bayes risk under proportional hazards. Journal of Applied Probability 2017, 54, 1027–1050.
9. David, H.A.; Nagaraja, H.N. Order Statistics; John Wiley & Sons, 2004.
10. Wong, K.M.; Chen, S. The entropy of ordered sequences and order statistics. IEEE Transactions on Information Theory 1990, 36, 276–284.
11. Park, S. The entropy of consecutive order statistics. IEEE Transactions on Information Theory 1995, 41, 2003–2007.
12. Ebrahimi, N.; Soofi, E.S.; Soyer, R. Information measures in perspective. International Statistical Review 2010, 78, 383–412.
13. Zarezadeh, S.; Asadi, M. Results on residual Rényi entropy of order statistics and record values. Information Sciences 2010, 180, 4195–4206.
14. Baratpour, S.; Ahmadi, J.; Arghami, N.R. Characterizations based on Rényi entropy of order statistics and record values. Journal of Statistical Planning and Inference 2008, 138, 2544–2551.
15. Alomani, G.; Kayid, M. Further properties of Tsallis entropy and its application. Entropy 2023, 25, 199.
16. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Science & Business Media, 2007.
Figure 1. The exact values of $H_\alpha(U_{i:n};t)$ for $\alpha = 0.2$ (left panel) and $\alpha = 2$ (right panel) with respect to $0 < t < 1$.
Figure 2. The exact values of $H_\alpha(X_{i:n};t)$ for $\alpha = 0.2$ (left panel) and $\alpha = 2$ (right panel) with respect to t.
Figure 3. The RTE values for different n in an $(n-1)$-out-of-$n$ system with a uniform parent distribution and $\alpha = 2$ when $t = 0.2$.
Figure 4. The RTE values of the order statistics $X_{i:5}$ for $i = 3, 4$ and $\alpha = 2$ with respect to $t \in (0, 5)$.
Table 1. Bounds on $H_\alpha(X_{i:n};t)$ derived from Theorem 2. For each density, the first bound is that of Part (a) (a lower bound for $\alpha > 1$, an upper bound for $0 < \alpha < 1$) and the second is the Part (b) lower bound; $\Gamma(\cdot,\cdot)$ denotes the upper incomplete gamma function.

$f(x) = \frac{2}{\pi(1+x^2)},\ x > 0$:
(a) $\geq (\leq)\ \frac{1}{1-\alpha}\left(M_i\,\frac{2^{\alpha-1}}{\pi^{\alpha}}\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\bar{B}_{\frac{t^2}{1+t^2}}\left(\alpha-\frac{1}{2},\ \frac{1}{2}\right) - 1\right)$
(b) $\geq\ \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\left(\frac{2}{\pi}\right)^{\alpha-1} - 1\right)$

$f(x) = \frac{2}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/2\sigma^2},\ x > \mu > 0$:
(a) $\geq (\leq)\ \frac{1}{1-\alpha}\left(M_i\,\frac{2^{\frac{\alpha+1}{2}}}{\sigma^{\alpha-1}\sqrt{\alpha\,\pi^{\alpha-1}}}\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\bar{\Phi}\left(\sqrt{\alpha}\left(\frac{t-\mu}{\sigma}\right)\right) - 1\right)$
(b) $\geq\ \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\left(\frac{2}{\sigma\sqrt{2\pi}}\right)^{\alpha-1} - 1\right)$

$f(x) = \lambda\beta\,e^{-(x-\mu)\beta}\left(1-e^{-(x-\mu)\beta}\right)^{\lambda-1},\ x > \mu > 0$:
(a) $\geq (\leq)\ \frac{1}{1-\alpha}\left(M_i\,\lambda^{\alpha}\beta^{\alpha-1}\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\bar{B}_{1-e^{-(t-\mu)\beta}}\left(\alpha(\lambda-1)+1,\ \alpha\right) - 1\right)$
(b) $\geq\ \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\left(\beta\left(1-\frac{1}{\lambda}\right)^{\lambda-1}\right)^{\alpha-1} - 1\right)$

$f(x) = \frac{b^c}{\Gamma(c)}\,x^{c-1}e^{-bx},\ x > 0$:
(a) $\geq (\leq)\ \frac{1}{1-\alpha}\left(M_i\,\frac{b^{\alpha-1}}{(\Gamma(c))^{\alpha}\,\alpha^{\alpha(c-1)+1}}\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\Gamma\left(\alpha(c-1)+1,\ \alpha b t\right) - 1\right)$
(b) $\geq\ \frac{1}{1-\alpha}\left(\left((1-\alpha)H_\alpha(U_{i:n};F(t)) + 1\right)\left(\frac{b(c-1)^{c-1}e^{1-c}}{\Gamma(c)}\right)^{\alpha-1} - 1\right)$