Preprint
Article

This version is not peer-reviewed.

Mathematical Properties of the Inverted Topp-Leone Family of Distributions

A peer-reviewed article of this preprint also exists.

Submitted: 27 October 2025
Posted: 29 October 2025


Abstract
This article defines an inverted Topp-Leone distribution. Several mathematical properties and maximum likelihood estimation of parameters of this distribution are considered. The shape of the distribution for different sets of parameters is discussed. Several mathematical properties such as the cumulative distribution function, mode, moment-generating function, survival function, hazard rate function, stress-strength reliability $R$, moments, R\'enyi entropy, Shannon entropy, Fisher information matrix, partial ordering associated with this distribution have been derived. Distributions of sum and quotient of two independent inverted Topp-Leone variables have also been obtained.

1. Introduction

A random variable X is said to have a Topp-Leone distribution, denoted by $X \sim TL(\nu;\sigma)$, if its pdf is given by
$$f_{TL}(x;\nu,\sigma)=\frac{2\nu}{\sigma}\left(\frac{x}{\sigma}\right)^{\nu-1}\left(1-\frac{x}{\sigma}\right)\left(2-\frac{x}{\sigma}\right)^{\nu-1},\quad 0<x<\sigma<\infty. \tag{1}$$
For $0<\nu<1$, the distribution defined by the density (1) is referred to as the J-shaped distribution by Topp and Leone [38] because $f(x;\nu,\sigma)>0$, $f'(x;\nu,\sigma)<0$ and $f''(x;\nu,\sigma)>0$ for all $0<x<\sigma$, where $f'(x;\nu,\sigma)$ and $f''(x;\nu,\sigma)$ are the first and the second derivatives, respectively, of $f(x;\nu,\sigma)$. For $\nu>1$, (1) attains different shapes depending on the values of the parameters (see Kotz and van Dorp [19]). This family has a close affinity to the family of beta distributions, as $1-X/\sigma$ has a McDonald beta distribution (Kumaraswamy distribution with parameters 2 and $\nu$) and $(1-X/\sigma)^2$ follows a standard beta distribution with parameters 1 and $\nu$.
A number of studies addressing different facets of the univariate Topp-Leone distribution have surfaced in recent years, reflecting the revived interest in the Topp-Leone family of probability distributions. For example, see Nadarajah and Kotz [26], Kotz and van Dorp [19, Chapter 2], Kotz and Seier [21], Al-Zahrani [2], Al-Zahrani and Al-Shomrani [3], Bayoud [11,12,13], Genç [15], Ghitany, Kotz and Xie [16], MirMostafaee, Mahdizadeh, Aminzadeh [24], Vicari, Van Dorp and Kotz [39], and Zghoul [40,41].
The inverse gamma distribution is useful as a prior for positive parameters. This distribution has a heavy tail and keeps probability further from zero than the gamma distribution.
Traditionally, if a random variable follows a particular distribution, then the distribution of the reciprocal of that random variable is known as the inverted or inverse distribution. Meaningful inverted forms of several well-known distributions have been derived and their properties have been studied extensively in the scientific literature. Inverted distributions have ample applications in all areas of science and engineering. One of the most widely used distributions, and one that has been thoroughly examined in the scientific literature, is the two-parameter beta distribution with support in the unit interval $(0,1)$. The inverted counterpart of this distribution is defined in a slightly different way than many other inverted distributions. If U has a two-parameter beta distribution then, instead of $1/U$, the form $U/(1-U)$ is considered more appropriate for deriving the inverted beta distribution, with support in $(0,\infty)$, useful for modeling positive data. Since $X/\sigma$ has a Topp-Leone distribution with support in $(0,1)$ and $(1-X/\sigma)^2$ has a standard beta distribution with parameters 1 and $\nu$, it is interesting to examine the distribution of $(X/\sigma)/(1-X/\sigma)$.
By using the transformation $Y/\xi=X/(\sigma-X)$, $\xi>0$, in (1), the inverted Topp-Leone density can be derived as
$$f_{ITL}(y;\nu,\xi)=\frac{2\nu}{\xi}\left(\frac{y}{\xi}\right)^{\nu-1}\left(1+\frac{y}{\xi}\right)^{-(2\nu+1)}\left(2+\frac{y}{\xi}\right)^{\nu-1},\quad y>0,\ \nu>0. \tag{2}$$
A notation to designate that Y has pdf (2) is $Y\sim ITL(\nu;\xi)$. For $\xi=1$, the two-parameter inverted Topp-Leone distribution reduces to the standard ITL distribution.
Despite the fact that various aspects of the Topp-Leone distribution and its variations have been developed and examined over the past 20 years, the inverted Topp-Leone distribution has not received much attention. In this article, we study mathematical properties of the inverted Topp-Leone distribution defined by the density (2). Section 2 deals with several results such as the cumulative distribution function, mode, moment generating function, survival function and hazard rate function. Results on expected values of functions of an ITL variable are given in Section 3. The stress-strength reliability and order statistics are treated in Section 4 and Section 5, respectively. The Rényi and Shannon entropies are derived in Section 6. Estimation of parameters and the Fisher information matrix are discussed in Section 7. Results on partial ordering and on sum and quotient distributions are presented in Section 8 and Section 9, respectively. Finally, simulation work is shown in Section 10.

2. Properties

This section deals with a number of properties of the inverted Topp-Leone distribution defined and derived in the previous section.
The first order derivative of $\ln f_{ITL}(y;\nu,\xi)$ with respect to y is
$$f_y(y)=\frac{d\ln f_{ITL}(y;\nu,\xi)}{dy}=\frac{\nu-1}{y}-\frac{2\nu+1}{\xi+y}+\frac{\nu-1}{2\xi+y}.$$
Setting the above equation to zero, we have
$$3y^2+6\xi y-2(\nu-1)\xi^2=0$$
and the only positive root of this equation, which is the mode, is $m_o=\xi\left(\sqrt{(1+2\nu)/3}-1\right)$ for $\nu>1$. Computing the second order derivative of $\ln f_{ITL}(y;\nu,\xi)$, we have
$$f_{yy}(y)=\frac{d^2\ln f_{ITL}(y;\nu,\xi)}{dy^2}=-\frac{\nu-1}{y^2}+\frac{2\nu+1}{(\xi+y)^2}-\frac{\nu-1}{(2\xi+y)^2}.$$
Further, it can be verified that $f_{yy}(m_o)=-9/[(\nu-1)\xi^2]<0$, which indicates that the density attains its maximum at $y=m_o$, given by
$$f_{ITL}(m_o;\nu,\xi)=\frac{2^{\nu}\,3\sqrt{3}\,\nu\,(\nu-1)^{\nu-1}}{\xi\,(2\nu+1)^{\nu+1/2}}.$$
Thus we conclude that m o = ξ ( 1 + 2 ν ) / 3 1 is indeed the mode of the distribution.
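The mode formula can be checked numerically. The sketch below (illustrative parameter values; the helper names are ours, not from the paper) implements density (2), locates its maximizer by a grid search, and compares both the location and the peak height with the closed-form expressions above:

```python
import math

def itl_pdf(y, nu, xi):
    """Density (2) of the inverted Topp-Leone distribution."""
    u = y / xi
    return (2.0 * nu / xi) * u**(nu - 1) * (1 + u)**(-(2 * nu + 1)) * (2 + u)**(nu - 1)

def itl_mode(nu, xi):
    """Closed-form mode, valid for nu > 1."""
    return xi * (math.sqrt((1 + 2 * nu) / 3.0) - 1)

nu, xi = 2.5, 1.5  # illustrative values with nu > 1

# Grid search over (0, 20) confirms the closed-form mode location.
grid = [i * 1e-4 for i in range(1, 200000)]
y_star = max(grid, key=lambda y: itl_pdf(y, nu, xi))
assert abs(y_star - itl_mode(nu, xi)) < 1e-3

# The peak height matches the closed-form value of f_ITL(m_o; nu, xi).
peak = 2**nu * 3 * math.sqrt(3) * nu * (nu - 1)**(nu - 1) / (xi * (2 * nu + 1)**(nu + 0.5))
assert abs(itl_pdf(itl_mode(nu, xi), nu, xi) - peak) < 1e-9
```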
In continuation, we present a few graphs (Figure 1) of the density function defined by expression (2) for a range of values of the parameters $\nu$ and $\xi$. Each plot contains four curves for selected values of $\nu$ and $\xi$. Here one can appreciate the wide variety of shapes that emerge from the inverted Topp-Leone distribution.
By using the transformation $Z/\phi=\sigma/X-1$ in (1), a different inverted Topp-Leone density can also be derived as
$$g(z;\nu,\phi)=\frac{2\nu}{\phi}\,\frac{z}{\phi}\left(1+\frac{z}{\phi}\right)^{-(2\nu+1)}\left(1+\frac{2z}{\phi}\right)^{\nu-1},\quad z>0,\ \nu>0. \tag{3}$$
Observe that (3) can be obtained from (2) by transforming $Y=1/Z$ with $\phi=1/\xi$. This distribution, for $\phi=1$, is defined and studied in Hassan, Elgarhy and Ragab [9]. For $\nu=1$, the inverted Topp-Leone density reduces to a Pareto-type distribution given by the density
$$\frac{2}{\xi}\left(1+\frac{y}{\xi}\right)^{-3},\quad y>0.$$
By using (2), the CDF of Y is derived as
$$F_Y(y;\nu,\xi)=\frac{2\nu}{\xi}\int_0^y\left(\frac{v}{\xi}\right)^{\nu-1}\left(1+\frac{v}{\xi}\right)^{-(2\nu+1)}\left(2+\frac{v}{\xi}\right)^{\nu-1}dv=2\nu\int_{1/(1+y/\xi)}^{1}t\,(1-t^2)^{\nu-1}dt,$$
where we have used the substitution $t=1/(1+v/\xi)$. The final result is obtained by substituting $1-t^2=u$ and evaluating the resulting expression, obtaining
$$F_Y(y;\nu,\xi)=(1-\beta^2)^{\nu}=\left[1-\frac{1}{(1+y/\xi)^2}\right]^{\nu}=\left[\frac{(2+y/\xi)(y/\xi)}{(1+y/\xi)^2}\right]^{\nu}, \tag{5}$$
where β = 1 / ( 1 + y / ξ ) .
By using (2) and (5), a truncated version of the inverted Topp-Leone distribution can be defined by the density
$$f_{TITL}(y;\nu,\xi)=\frac{f_{ITL}(y;\nu,\xi)}{F_Y(B;\nu,\xi)-F_Y(A;\nu,\xi)}=\frac{2\nu}{\xi C}\left(\frac{y}{\xi}\right)^{\nu-1}\left(1+\frac{y}{\xi}\right)^{-(2\nu+1)}\left(2+\frac{y}{\xi}\right)^{\nu-1},\quad 0<A<y<B<\infty,$$
where
$$C=F_Y(B;\nu,\xi)-F_Y(A;\nu,\xi)=\left[\frac{(2+B/\xi)(B/\xi)}{(1+B/\xi)^2}\right]^{\nu}-\left[\frac{(2+A/\xi)(A/\xi)}{(1+A/\xi)^2}\right]^{\nu}.$$
The quantile function $y_p=F_Y^{-1}(p)=Q_Y(p)$ is given by
$$Q_Y(p)=\xi\left[\left(1-p^{1/\nu}\right)^{-1/2}-1\right],\quad 0<p<1.$$
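Since the quantile function is available in closed form, inverse-transform sampling is immediate. A minimal sketch (the function names, seed and parameter values are ours, for illustration only):

```python
import math
import random

def itl_quantile(p, nu, xi):
    """Quantile function Q_Y(p) = xi * ((1 - p**(1/nu))**(-1/2) - 1)."""
    return xi * ((1.0 - p**(1.0 / nu))**(-0.5) - 1.0)

def itl_sample(n, nu, xi, seed=0):
    """Draw n ITL(nu; xi) variates by the inverse transform method."""
    rng = random.Random(seed)
    return [itl_quantile(rng.random(), nu, xi) for _ in range(n)]

# Sanity check: about half the draws should fall below the median Q(1/2).
nu, xi = 2.0, 1.0
ys = itl_sample(100000, nu, xi)
med = itl_quantile(0.5, nu, xi)
frac = sum(y <= med for y in ys) / len(ys)
assert abs(frac - 0.5) < 0.01
```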
The survival function (reliability function) and the hazard rate (failure rate) function of Y I T L ( ν ; ξ ) , by using the CDF of Y, can be obtained as
$$S_{ITL}(y;\nu,\xi)=P(Y>y)=\bar F_Y(y;\nu,\xi)=1-F_Y(y;\nu,\xi)=1-(1-\beta^2)^{\nu}=1-\left[\frac{(2+y/\xi)(y/\xi)}{(1+y/\xi)^2}\right]^{\nu}$$
and
$$\lambda_{ITL}(y;\nu,\xi)=\frac{f_{ITL}(y;\nu,\xi)}{1-F_Y(y;\nu,\xi)}=\frac{f_{ITL}(y;\nu,\xi)}{\bar F_Y(y;\nu,\xi)}=\frac{2\nu\beta^3(1-\beta^2)^{\nu-1}}{\xi\left[1-(1-\beta^2)^{\nu}\right]}. \tag{7}$$
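As a consistency check, the closed-form hazard rate (7) must agree with the direct ratio $f/(1-F)$; a short numerical sketch (names and parameter values illustrative):

```python
def itl_pdf(y, nu, xi):
    u = y / xi
    return (2 * nu / xi) * u**(nu - 1) * (1 + u)**(-(2 * nu + 1)) * (2 + u)**(nu - 1)

def itl_cdf(y, nu, xi):
    u = y / xi
    return ((2 + u) * u / (1 + u)**2)**nu

def itl_hazard(y, nu, xi):
    """Hazard rate in the beta = 1/(1 + y/xi) parametrization of (7)."""
    b = 1.0 / (1.0 + y / xi)
    return 2 * nu * b**3 * (1 - b**2)**(nu - 1) / (xi * (1 - (1 - b**2)**nu))

# The two expressions agree pointwise (up to floating-point rounding).
for y in [0.1, 0.5, 1.0, 3.0, 10.0]:
    lhs = itl_hazard(y, 1.7, 2.0)
    rhs = itl_pdf(y, 1.7, 2.0) / (1.0 - itl_cdf(y, 1.7, 2.0))
    assert abs(lhs - rhs) < 1e-9
```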
Figure 2. Graphs of the hazard rate function (7) for different values of ξ and ν .
By using (2), the Laplace transform of Y, denoted by L Y ( t ) is derived as
$$L_Y(t)=\frac{2\nu}{\xi}\int_0^\infty e^{-ty}\left(\frac{y}{\xi}\right)^{\nu-1}\left(1+\frac{y}{\xi}\right)^{-(2\nu+1)}\left(2+\frac{y}{\xi}\right)^{\nu-1}dy=2\nu\int_0^1\exp\left(-t\xi\,\frac{1-u}{u}\right)u\,(1-u^2)^{\nu-1}du, \tag{8}$$
where we have used the substitution u = 1 / ( 1 + y / ξ ) . We will evaluate the above integral by using results on Laguerre polynomials.
The Laguerre polynomial of degree n is defined by the sum (for details see Nagar, Zarrazola and Sánchez [10]),
$$L_n(x)=\sum_{k=0}^{n}\frac{(-1)^k}{k!}\binom{n}{k}x^k,$$
where $\binom{n}{k}$ is the binomial coefficient. The first few Laguerre polynomials are
$$L_0(x)=1,\quad L_1(x)=-x+1,\quad L_2(x)=\frac12\left(x^2-4x+2\right),\quad L_3(x)=\frac16\left(-x^3+9x^2-18x+6\right).$$
The first order derivative of the Laguerre polynomial $L_n(x)$ is given by
$$\frac{d}{dx}L_n(x)=-L_{n-1}^{(1)}(x),$$
where L n ( α ) ( x ) is the generalized Laguerre polynomial of degree n defined by the sum
$$L_n^{(\alpha)}(x)=\sum_{k=0}^{n}\frac{(-1)^k}{k!}\binom{n+\alpha}{n-k}x^k.$$
The generating function for the Laguerre polynomials is given by
$$\frac{\exp[-xt/(1-t)]}{1-t}=\sum_{n=0}^{\infty}t^n L_n(x),\quad |t|<1.$$
Replacing $\exp\left(-t\xi\,\frac{1-u}{u}\right)$ by its series expansion involving Laguerre polynomials (Miller [23, Eq. 3.4a, 3.4b]), which follows from the generating function above with t replaced by $1-u$, namely,
$$\exp\left(-t\xi\,\frac{1-u}{u}\right)=u\sum_{m=0}^{\infty}L_m(\xi t)(1-u)^m,\quad |1-u|<1,$$
in (8), we get
$$L_Y(t)=2\nu\sum_{m=0}^{\infty}L_m(\xi t)\int_0^1 u^2(1-u)^m(1-u^2)^{\nu-1}du.$$
By integrating the above expression, we get the Laplace transform of Y.
Finally, we give the following theorems summarizing results given in Section 1 and this section.
Theorem 1.
If $X\sim TL(\nu;\sigma)$ and $Y/\xi=X/(\sigma-X)$, $\xi>0$, then $Y\sim ITL(\nu;\xi)$. Further, if $Y\sim ITL(\nu;\xi)$ then $X=\sigma\,\dfrac{Y/\xi}{1+Y/\xi}\sim TL(\nu;\sigma)$.
Theorem 2.
Let $Y\sim ITL(\nu;\xi)$ and $U=(1+Y/\xi)^{-1}$. Then U follows a Kumaraswamy distribution given by the density
$$2\nu u(1-u^2)^{\nu-1},\quad 0<u<1,$$
and $U^2$ has a beta distribution with parameters 1 and $\nu$.
Theorem 3.
Let $Y\sim ITL(\nu;\xi)$ and $U=\left[\dfrac{(2+Y/\xi)(Y/\xi)}{(1+Y/\xi)^2}\right]^{\nu}$. Then U follows a uniform distribution with support in $(0,1)$. Further, $-\ln(U)$ and $-\ln(1-U)$ follow the standard exponential distribution.
Proof. 
Application of the probability integral transformation yields the desired result. The second part follows by applying the transformation $Z=-\ln(U)$.    □
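Theorem 3 can be illustrated by simulation: for ITL variates generated through the quantile function, $-\ln U$ should behave like a standard exponential, with mean and variance both near 1. A sketch under illustrative parameter values (names are ours):

```python
import math
import random

rng = random.Random(42)
nu, xi = 1.8, 2.5  # illustrative values

def itl_quantile(p):
    """Inverse CDF of ITL(nu; xi)."""
    return xi * ((1 - p**(1 / nu))**(-0.5) - 1)

# U = [(2+Y/xi)(Y/xi)/(1+Y/xi)^2]^nu should be Uniform(0,1),
# hence -ln U should be Exp(1): mean and variance both close to 1.
neg_logs = []
for _ in range(200000):
    y = itl_quantile(rng.random())
    u = y / xi
    U = ((2 + u) * u / (1 + u)**2)**nu
    neg_logs.append(-math.log(U))
m = sum(neg_logs) / len(neg_logs)
v = sum((x - m)**2 for x in neg_logs) / len(neg_logs)
assert abs(m - 1.0) < 0.02
assert abs(v - 1.0) < 0.05
```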

3. Expected Values

Using Theorem 2, it is easy to see that
$$E\left[\left(1+\frac{Y}{\xi}\right)^{-2k}\right]=\frac{\Gamma(k+1)\,\Gamma(\nu+1)}{\Gamma(\nu+k+1)},\quad k>-1.$$
For a non-negative integer r and an arbitrary $\delta$, the r-th moment of $Y^{1/\delta}$ is derived as
$$E\left[Y^{r/\delta}\right]=\frac{2\nu}{\xi^{\nu}}\int_0^\infty y^{\nu+r/\delta-1}\left(1+\frac{y}{\xi}\right)^{-(2\nu+1)}\left(2+\frac{y}{\xi}\right)^{\nu-1}dy=2\nu\,\xi^{r/\delta}\int_0^1 u^{\nu+r/\delta-1}(1-u)^{1-r/\delta}(2-u)^{\nu-1}du=\frac{2^{\nu}\nu\,\xi^{r/\delta}\,\Gamma(\nu+r/\delta)\,\Gamma(2-r/\delta)}{\Gamma(\nu+2)}\,F\!\left(\nu+\frac{r}{\delta},\,1-\nu;\,\nu+2;\,\frac12\right),\quad \frac{r}{\delta}<2, \tag{11}$$
where we have used the substitution $y/\xi=u/(1-u)$ with $dy=\xi(1-u)^{-2}du$ and the definition of the Gauss hypergeometric function. For $r=\delta$, using the substitution $1-u=t$ in (11), one gets
$$E[Y]=2\nu\xi\left[\int_0^1(1-t^2)^{\nu-1}dt-\int_0^1 t\,(1-t^2)^{\nu-1}dt\right]=\xi\left[\frac{\sqrt{\pi}\,\Gamma(\nu+1)}{\Gamma(\nu+1/2)}-1\right].$$
The above result can also be obtained by using (7.3.7.9) of Prudnikov, Brychkov and Marichev [35, p.414].
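The intermediate integral representation of $E[Y]$ gives an easy numerical cross-check of the closed form. The midpoint-rule sketch below (illustrative parameter values; not part of the paper's derivations) compares the two:

```python
import math

nu, xi = 3.0, 2.0  # illustrative values with nu > 1 so E[Y] is finite

# Closed form: E[Y] = xi * (sqrt(pi) * Gamma(nu+1) / Gamma(nu+1/2) - 1).
exact = xi * (math.sqrt(math.pi) * math.gamma(nu + 1) / math.gamma(nu + 0.5) - 1)

# Midpoint quadrature of E[Y] = 2*nu*xi * Int_0^1 (1-t)*(1-t^2)^(nu-1) dt,
# which combines the two integrals of the intermediate expression.
n = 200000
h = 1.0 / n
s = sum((1 - (k + 0.5) * h) * (1 - ((k + 0.5) * h)**2)**(nu - 1) for k in range(n))
approx = 2 * nu * xi * s * h
assert abs(approx - exact) < 1e-6
```

For these values the exact mean is 4.4, since $\sqrt{\pi}\,\Gamma(4)/\Gamma(7/2)=16/5$.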
Further, if $Y\sim ITL(\nu;\xi)$, then by using Theorem 3 it is straightforward to show that
$$E\left[\left\{\frac{(Y/\xi)(2+Y/\xi)}{(1+Y/\xi)^2}\right\}^{k}\right]=\frac{\nu}{\nu+k}.$$
Theorem 4.
If $Y\sim ITL(\nu;\xi)$, then
$$E\left[\left(\frac{Y}{\xi}\right)^{k}\left(2+\frac{Y}{\xi}\right)^{-\ell}\right]=\nu\sum_{j=0}^{k+\ell}\binom{k+\ell}{j}(-1)^j\,\frac{\Gamma\left(1+\frac{\ell-k+j}{2}\right)\Gamma(\nu-\ell)}{\Gamma\left(\nu-\ell+1+\frac{\ell-k+j}{2}\right)},$$
where $\nu>\max\{\ell,(k+\ell)/2-1\}$ and $1+(\ell-k)/2>0$.
Proof. 
By definition
$$E\left[\left(\frac{Y}{\xi}\right)^{k}\left(2+\frac{Y}{\xi}\right)^{-\ell}\right]=\frac{2\nu}{\xi^{\nu+k}}\int_0^\infty y^{\nu+k-1}\left(1+\frac{y}{\xi}\right)^{-(2\nu+1)}\left(2+\frac{y}{\xi}\right)^{\nu-\ell-1}dy=2\nu\int_0^1(1-t)^{\ell+k}\,t^{\ell-k+1}\,(1-t^2)^{\nu-\ell-1}dt,$$
where we have used the substitution $y/\xi=(1-t)/t$. Now, expanding $(1-t)^{k+\ell}$ using the binomial theorem and substituting $t^2=z$, we obtain
$$E\left[\left(\frac{Y}{\xi}\right)^{k}\left(2+\frac{Y}{\xi}\right)^{-\ell}\right]=\nu\sum_{j=0}^{k+\ell}\binom{k+\ell}{j}(-1)^j\int_0^1 z^{(\ell-k+j)/2}\,(1-z)^{\nu-\ell-1}dz.$$
Finally, integrating with respect to z by using the definition of the beta function, the desired result is obtained.    □
Corollary 1.
If $Y\sim ITL(\nu;\xi)$, then for $\nu-k>0$,
$$E\left[\left(\frac{Y/\xi}{2+Y/\xi}\right)^{k}\right]=\nu\sum_{j=0}^{2k}\binom{2k}{j}(-1)^j\,\frac{\Gamma(1+j/2)\,\Gamma(\nu-k)}{\Gamma(\nu-k+1+j/2)}.$$
Substituting k = 1 and k = 2 in the above corollary and simplifying, one gets
$$E\left[\frac{Y/\xi}{2+Y/\xi}\right]=\frac{\nu+1}{\nu-1}-\frac{\sqrt{\pi}\,\nu\,\Gamma(\nu-1)}{\Gamma(\nu+1/2)},\quad \nu>1,$$
$$E\left[\left(\frac{Y/\xi}{2+Y/\xi}\right)^{2}\right]=\frac{\nu^2+5\nu+2}{(\nu-1)(\nu-2)}-\frac{2\sqrt{\pi}\,(\nu+1)\,\nu\,\Gamma(\nu-2)}{\Gamma(\nu+1/2)},\quad \nu>2.$$
Also, substituting $(k,\ell)=(1,2)$ in the above theorem and simplifying the resulting expression, we get
$$E\left[\frac{Y/\xi}{(2+Y/\xi)^{2}}\right]=\frac{\sqrt{\pi}\,(\nu+4)\,\nu\,\Gamma(\nu-2)}{2\,\Gamma(\nu+1/2)}-\frac{3\nu+2}{(\nu-1)(\nu-2)},\quad \nu>2.$$
Similarly, one can easily derive the following theorem.
Theorem 5.
If $Y\sim ITL(\nu;\xi)$, then
$$E\left[\left(\frac{Y/\xi}{1+Y/\xi}\right)^{k}\right]=\nu\sum_{j=0}^{k}\binom{k}{j}(-1)^j\,\frac{\Gamma(1+j/2)\,\Gamma(\nu)}{\Gamma(\nu+1+j/2)}.$$
In particular,
$$E\left[\frac{Y/\xi}{1+Y/\xi}\right]=1-\frac{\sqrt{\pi}\,\Gamma(\nu+1)}{2\,\Gamma(\nu+3/2)}$$
and
$$E\left[\left(\frac{Y/\xi}{1+Y/\xi}\right)^{2}\right]=\frac{\nu+2}{\nu+1}-\frac{\sqrt{\pi}\,\Gamma(\nu+1)}{\Gamma(\nu+3/2)}.$$
Observe that $(Y/\xi)/(1+Y/\xi)\sim TL(\nu;1)$ and therefore the above results can also be obtained from the moments of the Topp-Leone distribution; see Nadarajah and Kotz [26], Nagar, Zarrazola and Echeverri-Valencia [29].

4. Stress-Strength Reliability Model

Inferences about R = P [ Y < X ] , where X and Y are two independent random variables, is very common in reliability, statistical tolerance and biometry (Ali and Woo [1], Kotz, Lumelskii and Pensky [20]).
Let Y represent the random variable of a stress that a device will be subjected to in service and let X represent the strength that varies from item to item in the population of devices. Then the reliability, which is the probability that a randomly selected device functions successfully, denoted by R, is equal to P(Y < X). Let Y and X be the stress and the strength random variables, independent of each other, with $Y\sim ITL(\nu;\xi)$ and $X\sim ITL(\mu;\xi)$; then
$$R=P(Y<X)=\int_0^\infty f_{ITL}(x;\mu,\xi)\,F_Y(x;\nu,\xi)\,dx=\frac{2\mu}{\xi}\int_0^\infty\left(\frac{x}{\xi}\right)^{\mu+\nu-1}\left(1+\frac{x}{\xi}\right)^{-2(\mu+\nu)-1}\left(2+\frac{x}{\xi}\right)^{\mu+\nu-1}dx=\frac{2\mu}{\xi}\cdot\frac{\xi}{2(\mu+\nu)}=\frac{\mu}{\mu+\nu}.$$
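The closed form $R=\mu/(\mu+\nu)$ is easy to confirm by Monte Carlo, sampling both variables through the quantile function (parameter values and seed are illustrative):

```python
import random

def itl_quantile(p, nu, xi):
    """Inverse CDF of ITL(nu; xi)."""
    return xi * ((1 - p**(1 / nu))**(-0.5) - 1)

mu, nu, xi = 3.0, 1.5, 1.0  # illustrative: R should be 3/(3+1.5) = 2/3
rng = random.Random(7)

n = 200000
hits = 0
for _ in range(n):
    x = itl_quantile(rng.random(), mu, xi)  # strength
    y = itl_quantile(rng.random(), nu, xi)  # stress
    hits += (y < x)
assert abs(hits / n - mu / (mu + nu)) < 0.01
```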

5. Order Statistics

Let $Y_{(1)}<Y_{(2)}<\cdots<Y_{(n)}$ be the order statistics from an inverted Topp-Leone distribution. Then the density of $Y_{(i)}$, denoted by $g_i(y)$, $1\le i\le n$, is given by
$$g_i(y)=\frac{n!}{i!\,(n-i)!}\,\frac{2i\nu}{\xi}\left(\frac{y}{\xi}\right)^{i\nu-1}\left(1+\frac{y}{\xi}\right)^{-(2i\nu+1)}\left(2+\frac{y}{\xi}\right)^{i\nu-1}\left[1-\left\{\frac{(2+y/\xi)(y/\xi)}{(1+y/\xi)^2}\right\}^{\nu}\right]^{n-i}=\frac{n!}{i!\,(n-i)!}\,f_{ITL}(y;i\nu,\xi)\left[1-\left\{\frac{(2+y/\xi)(y/\xi)}{(1+y/\xi)^2}\right\}^{\nu}\right]^{n-i},\quad y>0.$$
It is interesting to note that $Y_{(n)}\sim ITL(n\nu;\xi)$ and that the density of $Y_{(1)}$ is given by
$$g_1(y)=\frac{2n\nu}{\xi}\left(\frac{y}{\xi}\right)^{\nu-1}\left(1+\frac{y}{\xi}\right)^{-(2\nu+1)}\left(2+\frac{y}{\xi}\right)^{\nu-1}\left[1-\left\{\frac{(2+y/\xi)(y/\xi)}{(1+y/\xi)^2}\right\}^{\nu}\right]^{n-1},\quad y>0.$$
The density of the sample median when n is odd, say $n=2m+1$, can also be obtained by substituting $i=m+1=(n+1)/2$ in the density of $Y_{(i)}$.

6. Entropy

The entropy of a random variable is a measure of variation of uncertainty. In this section we derive two entropies, known as the Shannon and Rényi entropies, for the inverted Topp-Leone distribution.
Denote by $H_{SH}(f)$ the well-known Shannon entropy introduced in Shannon [33]. It is defined by
$$H_{SH}(f)=-\int f(x)\ln f(x)\,dx. \tag{12}$$
One of the main extensions of the Shannon entropy was defined by Rényi [32]. This generalized entropy measure is given by
$$H_R(\eta,f)=\frac{\ln G(\eta)}{1-\eta}\qquad(\text{for }\eta>0\text{ and }\eta\neq1), \tag{13}$$
where
$$G(\eta)=\int f^{\eta}(x)\,dx. \tag{14}$$
The additional parameter $\eta$ is used to describe complex behavior in probability models and the associated process under study. The Rényi entropy is monotonically decreasing in $\eta$, and the Shannon entropy (12) is obtained from (13) by letting $\eta\to1$. For details see Nadarajah and Zografos [30], Zografos and Nadarajah [43] and Zografos [42].
Theorem 6.
For the inverted Topp-Leone distribution defined by the pdf (2), the Rényi and the Shannon entropies are given by
$$H_R(\eta,f)=\frac{1}{1-\eta}\left[\eta\ln\nu+(\eta-1)\ln\frac{2}{\xi}+\ln\Gamma\big(\eta(\nu-1)+1\big)+\ln\Gamma\!\left(\frac{3\eta-1}{2}\right)-\ln\Gamma\!\left(\eta\left(\nu+\frac12\right)+\frac12\right)\right],\quad \eta>\frac13,\ \eta\neq1,$$
and
$$H_{SH}(f)=-\ln\frac{2\nu}{\xi}-(\nu-1)\psi(\nu)-\frac32\,\psi(1)+\left(\nu+\frac12\right)\psi(\nu+1)=-\ln\frac{2\nu}{\xi}+(\nu-1)\big[\psi(\nu+1)-\psi(\nu)\big]+\frac32\big[\psi(\nu+1)-\psi(1)\big]=-\ln\frac{2\nu}{\xi}+\frac{\nu-1}{\nu}+\frac32\big[\psi(\nu+1)-\psi(1)\big],$$
respectively, where ψ ( α ) = Γ ( α ) / Γ ( α ) is the digamma function.
Proof. 
For η > 0 and η 1 , using the joint density of Y given by (2), we have
$$G(\eta)=\left(\frac{2\nu}{\xi}\right)^{\eta}\int_0^\infty\left(\frac{v}{\xi}\right)^{\eta(\nu-1)}\left(1+\frac{v}{\xi}\right)^{-\eta(2\nu+1)}\left(2+\frac{v}{\xi}\right)^{\eta(\nu-1)}dv=(2\nu)^{\eta}\,\xi^{1-\eta}\int_0^1 t^{3\eta-2}\,(1-t^2)^{\eta(\nu-1)}dt,$$
where the last line has been obtained by substituting t = 1 / ( 1 + v / ξ ) . The final result is obtained by substituting 1 t 2 = u and evaluating the resulting expression by using beta integrals,
$$G(\eta)=\nu^{\eta}\,2^{\eta-1}\,\xi^{1-\eta}\,\frac{\Gamma[\eta(\nu-1)+1]\,\Gamma[(3\eta-1)/2]}{\Gamma[\eta(\nu+1/2)+1/2]},\quad \eta>\frac13.$$
Now, taking logarithm of G ( η ) and using (13) we get H R ( η , f ) . The Shannon entropy is obtained from H R ( η , f ) by taking η 1 and using L’Hopital’s rule.    □
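For integer $\nu$ the digamma difference $\psi(\nu+1)-\psi(1)$ reduces to the harmonic number $H_\nu=\sum_{k=1}^{\nu}1/k$, so the Shannon entropy can be checked against a Monte Carlo average of $-\ln f(Y)$ using only the standard library. A sketch with illustrative parameters (names are ours):

```python
import math
import random

nu, xi = 3.0, 2.0  # illustrative values; nu integer so psi reduces to a harmonic number
rng = random.Random(3)

def itl_pdf(y):
    u = y / xi
    return (2 * nu / xi) * u**(nu - 1) * (1 + u)**(-(2 * nu + 1)) * (2 + u)**(nu - 1)

def itl_quantile(p):
    return xi * ((1 - p**(1 / nu))**(-0.5) - 1)

# Closed form: H = -ln(2*nu/xi) + (nu-1)/nu + (3/2) * H_nu.
H_nu = sum(1.0 / k for k in range(1, int(nu) + 1))
exact = -math.log(2 * nu / xi) + (nu - 1) / nu + 1.5 * H_nu

# Monte Carlo: H = -E[ln f(Y)].
n = 200000
est = -sum(math.log(itl_pdf(itl_quantile(rng.random()))) for _ in range(n)) / n
assert abs(est - exact) < 0.05
```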

7. Estimation and Information matrix

Let Y 1 , , Y n be a random sample from the inverted Topp-Leone distribution defined by the density (2). The log-likelihood function, denoted by l ( ν , ξ ) , is given by
$$l(\nu,\xi)=n\left(\ln2+\ln\nu-\nu\ln\xi\right)+(\nu-1)\sum_{i=1}^{n}\ln y_i-(2\nu+1)\sum_{i=1}^{n}\ln\left(1+\frac{y_i}{\xi}\right)+(\nu-1)\sum_{i=1}^{n}\ln\left(2+\frac{y_i}{\xi}\right).$$
Now, differentiating $l(\nu,\xi)$ w.r.t. $\nu$, we get
$$\frac{\partial l(\nu,\xi)}{\partial\nu}=\frac{n}{\nu}-n\ln\xi+\sum_{i=1}^{n}\ln y_i-2\sum_{i=1}^{n}\ln\left(1+\frac{y_i}{\xi}\right)+\sum_{i=1}^{n}\ln\left(2+\frac{y_i}{\xi}\right). \tag{16}$$
Further, differentiating w.r.t ξ , one gets
$$\frac{\partial l(\nu,\xi)}{\partial\xi}=-\frac{n\nu}{\xi}+\frac{2\nu+1}{\xi}\sum_{i=1}^{n}\frac{y_i/\xi}{1+y_i/\xi}-\frac{\nu-1}{\xi}\sum_{i=1}^{n}\frac{y_i/\xi}{2+y_i/\xi}. \tag{17}$$
Thus, by solving (16) and (17) numerically, the MLEs of $\nu$ and $\xi$ can be obtained. As the MLEs are not in closed form, a Python routine is developed to obtain the estimates. However, if the parameter $\xi$ has a fixed value, say $\xi=\xi_0$, then the MLE of $\nu$ can be given by
$$\hat\nu=-n\left[\sum_{i=1}^{n}\ln\frac{(2+Y_i/\xi_0)(Y_i/\xi_0)}{(1+Y_i/\xi_0)^2}\right]^{-1}.$$
Note that we can write $-\nu\sum_{i=1}^{n}\ln\frac{(2+Y_i/\xi_0)(Y_i/\xi_0)}{(1+Y_i/\xi_0)^2}=-\sum_{i=1}^{n}\ln U_i$, where $U_1,\ldots,U_n$ are independent uniform random variables. Also, from Theorem 3, $-\ln U_i$ follows an exponential distribution with mean 1, and therefore $-\nu\sum_{i=1}^{n}\ln\frac{(2+Y_i/\xi_0)(Y_i/\xi_0)}{(1+Y_i/\xi_0)^2}$ is distributed as gamma with shape parameter n and scale parameter 1. Now, using results on the gamma distribution, we get
$$E\left(\hat\nu^{\,r}\right)=(n\nu)^r\,\frac{\Gamma(n-r)}{\Gamma(n)},\quad n>r,$$
from which the mean, the variance and the MSE of $\hat\nu$ are calculated as $E(\hat\nu)=n\nu/(n-1)$, $\mathrm{Var}(\hat\nu)=n^2\nu^2/[(n-1)^2(n-2)]$ and $\mathrm{MSE}(\hat\nu)=(n+2)\nu^2/[(n-1)(n-2)]$, respectively. Further, the unbiased estimator $\hat{\hat\nu}=(n-1)\hat\nu/n$ has variance $\mathrm{Var}(\hat{\hat\nu})=\nu^2/(n-2)$, larger than the Rao-Cramér lower bound $\nu^2/n$. For a fixed value of $\nu$, say $\nu_0$, the statistic $-2\nu_0\sum_{i=1}^{n}\ln\frac{(2+Y_i/\xi_0)(Y_i/\xi_0)}{(1+Y_i/\xi_0)^2}$ has a chi-square distribution with 2n degrees of freedom and could be used to test the hypothesis $H:\nu=\nu_0$.
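The exact small-sample bias $E(\hat\nu)=n\nu/(n-1)$ can be reproduced by simulation. The sketch below (illustrative sample size, parameter values and seed) draws repeated samples with $\xi$ fixed at $\xi_0$ and averages the closed-form estimator:

```python
import math
import random

nu_true, xi0 = 2.0, 1.0   # illustrative true values
n, reps = 50, 4000
rng = random.Random(11)

def itl_quantile(p):
    return xi0 * ((1 - p**(1 / nu_true))**(-0.5) - 1)

def nu_hat(ys):
    """Closed-form MLE of nu when xi = xi0 is known."""
    s = sum(math.log((2 + y / xi0) * (y / xi0) / (1 + y / xi0)**2) for y in ys)
    return -len(ys) / s

# E[nu_hat] = n*nu/(n-1): check the small-sample bias by simulation.
mean_hat = sum(nu_hat([itl_quantile(rng.random()) for _ in range(n)])
               for _ in range(reps)) / reps
assert abs(mean_hat - n * nu_true / (n - 1)) < 0.02
```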
Further differentiating (16) and (17) with respect to ν and ξ , respectively, the second order derivatives are derived as
$$\frac{\partial^2 l(\nu,\xi)}{\partial\nu^2}=-\frac{n}{\nu^2},$$
$$\frac{\partial^2 l(\nu,\xi)}{\partial\xi\,\partial\nu}=-\frac{n}{\xi}+\frac{2}{\xi}\sum_{i=1}^{n}\frac{y_i/\xi}{1+y_i/\xi}-\frac{1}{\xi}\sum_{i=1}^{n}\frac{y_i/\xi}{2+y_i/\xi},$$
and
$$\frac{\partial^2 l(\nu,\xi)}{\partial\xi^2}=\frac{n\nu}{\xi^2}+\frac{2\nu+1}{\xi^2}\sum_{i=1}^{n}\left[\frac{(y_i/\xi)^2}{(1+y_i/\xi)^2}-\frac{2\,y_i/\xi}{1+y_i/\xi}\right]-\frac{\nu-1}{\xi^2}\sum_{i=1}^{n}\left[\frac{(y_i/\xi)^2}{(2+y_i/\xi)^2}-\frac{2\,y_i/\xi}{2+y_i/\xi}\right].$$
Keeping in mind that the expected value of a constant is the constant itself, and applying the results on expected values given in Section 3, one can see that
$$E\left[\frac{\partial^2 l(\nu,\xi)}{\partial\nu^2}\right]=-\frac{n}{\nu^2},$$
$$E\left[\frac{\partial^2 l(\nu,\xi)}{\partial\xi\,\partial\nu}\right]=\frac{n}{\xi}\left[\frac{3\sqrt{\pi}\,\Gamma(\nu+1)}{2(\nu-1)\,\Gamma(\nu+3/2)}-\frac{2}{\nu-1}\right],\quad \nu>1,$$
and
$$E\left[\frac{\partial^2 l(\nu,\xi)}{\partial\xi^2}\right]=\frac{n}{\xi^2}\left[\frac{6\sqrt{\pi}\,\Gamma(\nu+1)}{(\nu-2)\,\Gamma(\nu+1/2)}-\frac{4\nu^2+13\nu+6}{(\nu-2)(\nu+1)}\right],\quad \nu>2.$$
For a single observation y, the Fisher information matrix of the inverted Topp-Leone distribution given by the density (2) is
$$I(\nu,\xi)=\begin{pmatrix}\dfrac{1}{\nu^2} & \dfrac{1}{\xi(\nu-1)}\left[2-\dfrac{3\sqrt{\pi}\,\Gamma(\nu+1)}{2\,\Gamma(\nu+3/2)}\right]\\[3mm] \dfrac{1}{\xi(\nu-1)}\left[2-\dfrac{3\sqrt{\pi}\,\Gamma(\nu+1)}{2\,\Gamma(\nu+3/2)}\right] & \dfrac{1}{\xi^2(\nu-2)}\left[\dfrac{4\nu^2+13\nu+6}{\nu+1}-\dfrac{6\sqrt{\pi}\,\Gamma(\nu+1)}{\Gamma(\nu+1/2)}\right]\end{pmatrix},\quad \nu>2.$$

8. Partial Ordering

A random variable X with distribution function $F_X$ can be said to be greater than another random variable Y with distribution function $F_Y$ in a number of ways. There are several different concepts of partial ordering between random variables, such as likelihood ratio ordering, hazard rate ordering, reverse hazard rate ordering, and stochastic ordering. A random variable X is stochastically greater than a random variable Y, written as $X\ge_{st}Y$, if $P(X>t)\ge P(Y>t)$ (equivalently $\bar F_X(t)\ge\bar F_Y(t)$) for all t. Here, in this section, we denote by $f_X(t)$ and $\bar F_X(t)=1-F_X(t)$ the density function and the survival function, respectively, of X. A random variable X is considered to be larger than a random variable Y in the hazard rate ordering (denoted by $X\ge_{hr}Y$) if
$$\lambda_X(t)=\frac{f_X(t)}{\bar F_X(t)}\le\frac{f_Y(t)}{\bar F_Y(t)}=\lambda_Y(t)$$
for all $t\ge0$. The condition $\lambda_X(t)\le\lambda_Y(t)$ is equivalent to the condition that the function $\bar F_X(t)/\bar F_Y(t)$ is non-decreasing in t. The random variable Y is said to be smaller than the random variable X in the reverse hazard rate order (written as $X\ge_{rhr}Y$) if $F_X(t)/F_Y(t)$ is non-decreasing in t. We state that X is larger than a random variable Y according to the likelihood ratio ordering (written as $X\ge_{lr}Y$) if $f_X(t)/f_Y(t)$ is a non-decreasing function of t.
It is well known that $X\ge_{lr}Y\Rightarrow X\ge_{hr}Y\Rightarrow X\ge_{st}Y$ and $X\ge_{lr}Y\Rightarrow X\ge_{rhr}Y$. Clearly the likelihood ratio ordering is stronger than the other orderings. For further reading on partial ordering between random variables and related topics the reader is referred to Bapat and Kochar [5], Belzunce, Martínez-Riquelme and Mulero [6], Boland, Emad El-Neweihi and Proschan [7], Nanda and Shaked [31], and Shaked and Shanthikumar [36].
In case of the inverted Topp-Leone distribution defined by the density (2), the likelihood ratio is given by
$$L(t;\nu,\mu)=\frac{f_X(t)}{f_Y(t)}=\frac{\nu}{\mu}\left(\frac{t}{\xi}\right)^{\nu-\mu}\left(1+\frac{t}{\xi}\right)^{-2(\nu-\mu)}\left(2+\frac{t}{\xi}\right)^{\nu-\mu},$$
where X I T L ( ν ; ξ ) and Y I T L ( μ ; ξ ) . Now differentiating ln L ( t ; ν , μ ) wrt t, we obtain
$$\frac{d}{dt}\ln L(t;\nu,\mu)=(\nu-\mu)\left[\frac{1}{t}-\frac{2}{\xi+t}+\frac{1}{2\xi+t}\right]=\frac{2(\nu-\mu)\,\xi^2}{t\,(\xi+t)(2\xi+t)}.$$
For $\nu>\mu$, the derivative of $\ln L(t;\nu,\mu)$ is positive for all t, indicating that $L(t;\nu,\mu)$ is an increasing function of t, and therefore the likelihood ratio ordering is exhibited, i.e., $X\ge_{lr}Y$. By using the relationships between the different orderings described above, it is clear that the inverted Topp-Leone distribution also possesses the stochastic ordering, the hazard rate ordering, and the reverse hazard rate ordering.
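The monotonicity of the likelihood ratio can be verified numerically: the closed-form derivative above should be positive and should match a finite-difference derivative of $\ln L$. A sketch (parameter values are illustrative):

```python
import math

xi = 1.5  # illustrative common scale

def itl_pdf(t, nu):
    u = t / xi
    return (2 * nu / xi) * u**(nu - 1) * (1 + u)**(-(2 * nu + 1)) * (2 + u)**(nu - 1)

def dlogL(t, nu, mu):
    """Closed-form derivative 2*(nu-mu)*xi^2 / (t*(xi+t)*(2*xi+t))."""
    return 2 * (nu - mu) * xi**2 / (t * (xi + t) * (2 * xi + t))

# For nu > mu the likelihood ratio f_X/f_Y is increasing in t,
# and the closed form matches a central finite difference of ln L.
nu, mu, h = 3.0, 1.2, 1e-6
for t in [0.2, 1.0, 5.0]:
    L = lambda s: math.log(itl_pdf(s, nu) / itl_pdf(s, mu))
    fd = (L(t + h) - L(t - h)) / (2 * h)
    assert dlogL(t, nu, mu) > 0
    assert abs(fd - dlogL(t, nu, mu)) < 1e-4
```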

9. Sum and Quotient

In this section we will derive the distributions of $S=X+Y$ and $R=X/(X+Y)$, where $X\sim ITL(\mu;\xi)$ and $Y\sim ITL(\nu;\xi)$ are independent. Since $X/(X+Y)$ is invariant under the transformation $X\to cX$ and $Y\to cY$, $c>0$, we can take $\xi=1$. The joint density of X and Y is given by
$$4\mu\nu\,x^{\mu-1}(1+x)^{-(2\mu+1)}(2+x)^{\mu-1}\,y^{\nu-1}(1+y)^{-(2\nu+1)}(2+y)^{\nu-1},$$
where x > 0 and y > 0 . Now, substituting x = r s and y = ( 1 r ) s with the Jacobian J ( x , y r , s ) = s in the above density, the joint density of R and S is obtained as
$$4\mu\nu\,s^{\mu+\nu-1}\,r^{\mu-1}(1-r)^{\nu-1}\,(1+rs)^{-(2\mu+1)}(2+rs)^{\mu-1}\,[1+(1-r)s]^{-(2\nu+1)}[2+(1-r)s]^{\nu-1},$$
where 0 < r < 1 and s > 0 . Using
$$(1+as)^{-b}=(1+s)^{-b}\left[1-\frac{(1-a)s}{1+s}\right]^{-b}$$
and
$$(2+as)^{c}=2^{c}(1+s)^{c}\left[1-\frac{(1-a/2)s}{1+s}\right]^{c},$$
to re-write the joint density of R and S and integrating s, we have the density of R as
$$2^{\mu+\nu}\mu\nu\,r^{\mu-1}(1-r)^{\nu-1}\int_0^\infty s^{\mu+\nu-1}(1+s)^{-(\mu+\nu+4)}\left[1-\frac{(1-r)s}{1+s}\right]^{-(2\mu+1)}\left[1-\frac{(1-r/2)s}{1+s}\right]^{\mu-1}\left[1-\frac{rs}{1+s}\right]^{-(2\nu+1)}\left[1-\frac{((1+r)/2)s}{1+s}\right]^{\nu-1}ds$$
$$=2^{\mu+\nu}\mu\nu\,r^{\mu-1}(1-r)^{\nu-1}\int_0^1 w^{\mu+\nu-1}(1-w)^{3}\left[1-(1-r)w\right]^{-(2\mu+1)}\left[1-\left(1-\frac r2\right)w\right]^{\mu-1}\left[1-rw\right]^{-(2\nu+1)}\left[1-\frac{r+1}{2}\,w\right]^{\nu-1}dw,$$
where the last line has been obtained by substituting s / ( 1 + s ) = w with d s = ( 1 w ) 2 d w . Now, using the definition of the Lauricella’s hypergeometric function F D given in (A.4), the above integral is evaluated as
$$\frac{2^{\mu+\nu}\mu\nu\,\Gamma(\mu+\nu)\,\Gamma(4)}{\Gamma(\mu+\nu+4)}\,r^{\mu-1}(1-r)^{\nu-1}\,F_D\!\left(\mu+\nu;-\mu+1,-\nu+1,2\mu+1,2\nu+1;\mu+\nu+4;1-\frac r2,\frac{r+1}{2},1-r,r\right),\quad 0<r<1.$$
To derive the marginal density of S we first re-write the joint density of R and S by using (24) and
$$(2+as)^{c}=(2+s)^{c}\left[1-\frac{(1-a)s}{2+s}\right]^{c},$$
and then integrate wrt r to get
$$4\mu\nu\,s^{\mu+\nu-1}(1+s)^{-(2\mu+2\nu+2)}(2+s)^{\mu+\nu-2}\int_0^1 r^{\mu-1}(1-r)^{\nu-1}\left[1-\frac{(1-r)s}{1+s}\right]^{-(2\mu+1)}\left[1-\frac{(1-r)s}{2+s}\right]^{\mu-1}\left[1-\frac{rs}{1+s}\right]^{-(2\nu+1)}\left[1-\frac{rs}{2+s}\right]^{\nu-1}dr.$$
Our next step is the evaluation of the integral given in the above expression. Writing
$$\left[1-\frac{rs}{1+s}\right]^{-(2\nu+1)}=\sum_{i=0}^{\infty}\frac{(2\nu+1)_i}{i!}\left(\frac{rs}{1+s}\right)^{i}$$
and
$$\left[1-\frac{rs}{2+s}\right]^{\nu-1}=\left[1-\frac{rs}{2+s}\right]^{-(1-\nu)}=\sum_{j=0}^{\infty}\frac{(1-\nu)_j}{j!}\left(\frac{rs}{2+s}\right)^{j}$$
and using the definition of the Lauricella’s hypergeometric function F D , the density of S is derived as
$$4\mu\nu\,s^{\mu+\nu-1}(1+s)^{-(2\mu+2\nu+2)}(2+s)^{\mu+\nu-2}\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\frac{(2\nu+1)_i\,(1-\nu)_j}{i!\,j!}\,\frac{s^{i+j}}{(1+s)^{i}(2+s)^{j}}\,\frac{\Gamma(\mu+i+j)\,\Gamma(\nu)}{\Gamma(\mu+\nu+i+j)}\,F_D\!\left(\nu;-\mu+1,2\mu+1;\mu+\nu+i+j;\frac{s}{2+s},\frac{s}{1+s}\right),\quad s>0.$$
Similarly, if X has a standard ITL distribution with parameter μ and Y follows a standard inverted beta distribution with parameters α and β , then the density of R is
$$\frac{2^{\mu}\mu}{B(\alpha,\beta)}\,\frac{\Gamma(\mu+\alpha)\,\Gamma(\beta+2)}{\Gamma(\mu+\alpha+\beta+2)}\,r^{\mu-1}(1-r)^{\alpha-1}\,F_D\!\left(\mu+\alpha;-\mu+1,2\mu+1,\alpha+\beta;\mu+\alpha+\beta+2;1-\frac r2,1-r,r\right),\quad 0<r<1.$$
The density of S, in this case, is given by
$$\frac{2\mu}{B(\alpha,\beta)}\,s^{\mu+\alpha-1}(1+s)^{-(2\mu+\alpha+\beta+1)}(2+s)^{\mu-1}\sum_{k=0}^{\infty}\frac{(\alpha+\beta)_k}{k!}\,\frac{s^{k}}{(1+s)^{k}}\,\frac{\Gamma(\mu+k)\,\Gamma(\alpha)}{\Gamma(\mu+\alpha+k)}\,F_D\!\left(\alpha;-\mu+1,2\mu+1;\mu+\alpha+k;\frac{s}{2+s},\frac{s}{1+s}\right),\quad s>0.$$

10. Simulation

To generate random samples from the Inverted Topp-Leone (ITL) distribution, we employed the inverse transform method. This method relies on generating a uniform random variable U Uniform ( 0 , 1 ) and transforming it via the inverse cumulative distribution function (CDF) specific to the ITL distribution, as defined by equation (5).
Subsequently, the Maximum Likelihood Estimation (MLE) approach was utilized for estimating the parameters $\nu$ and $\xi$, using the previously derived score equations (16) and (17).
In the context of estimating the parameters of a probability distribution via the method of maximum likelihood, it is often necessary to solve a system of nonlinear equations derived from the first-order conditions (score equations), typically of the form
$$\nabla\ell(\theta)=0,$$
where $\ell(\theta)$ denotes the log-likelihood function and $\theta\in\mathbb{R}^p$ is the parameter vector. For this purpose, the numerical solution was obtained using the scipy.optimize.root function from the SciPy library, employing the 'hybr' method. The invocation is as follows:

sol = root(fun_grad, x0, method='hybr', tol=tol, options={'maxfev': maxfev})
The `hybr’ method corresponds to the hybrid Powell method, implemented in MINPACK’s hybrd and hybrj routines. It is particularly well-suited for solving systems of nonlinear equations of the form F ( θ ) = 0 , where F : R p R p is a smooth vector-valued function. The method combines features of the Newton-Raphson algorithm with trust-region techniques to improve stability and convergence properties, especially when derivatives are not explicitly provided or are difficult to compute.
In this specific application, the function fun_grad returns the score vector, i.e., the gradient of the log-likelihood function with respect to the parameters. The vector x 0 represents the initial guess for the parameter vector θ , which is essential for the local convergence properties of the algorithm.
The hybrid method proceeds by computing a quasi-Newton approximation of the Jacobian matrix when an analytical Jacobian is not supplied, which is then used to generate a sequence of iterates that approximate a solution. The method dynamically adjusts between Broyden's update and finite-difference approximations, while also incorporating a Levenberg-Marquardt-type trust-region strategy. This allows for robust handling of ill-conditioned or nearly singular Jacobians, a frequent issue in likelihood estimation, particularly when the likelihood surface is flat or exhibits ridges.
The parameters tol and maxfev play a crucial role in the termination criteria of the solver. The former controls the stopping criterion based on the norm of the function value (i.e., how close the score vector is to zero), while the latter limits the maximum number of function evaluations to prevent infinite loops in the case of non-convergence. In practical terms, the chosen values should ensure a balance between computational efficiency and numerical precision, particularly when used in simulation-based estimation procedures or bootstrap replications.
The use of the hybr method within scipy.optimize.root is fully justified given the structure of the problem: solving a moderately sized system of nonlinear equations derived from differentiable functions, without requiring explicit specification of the Jacobian. Its robustness and efficiency, especially in problems involving maximum likelihood estimation, make it a reliable choice for statistical inference in the absence of closed-form solutions.
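When SciPy is not available, a dependency-free alternative is a profile-likelihood sketch: for each $\xi$ on a grid, the score equation (16) gives $\hat\nu(\xi)$ in closed form, and $\hat\xi$ maximizes the profiled log-likelihood. This is an illustrative substitute for the root call above, not the procedure used in the paper; grid, sample size and seed are our choices:

```python
import math
import random

def nu_profile(xi, ys):
    """Closed-form solution of score equation (16) for fixed xi."""
    s = sum(math.log((2 + y/xi) * (y/xi) / (1 + y/xi)**2) for y in ys)
    return -len(ys) / s

def loglik(nu, xi, ys):
    """Log-likelihood of the ITL(nu; xi) sample."""
    n = len(ys)
    return (n * (math.log(2 * nu) - nu * math.log(xi))
            + (nu - 1) * sum(math.log(y) for y in ys)
            - (2 * nu + 1) * sum(math.log(1 + y/xi) for y in ys)
            + (nu - 1) * sum(math.log(2 + y/xi) for y in ys))

# Simulate a sample with known parameters, then recover them.
rng = random.Random(5)
nu_t, xi_t = 2.5, 1.5
ys = [xi_t * ((1 - rng.random()**(1/nu_t))**(-0.5) - 1) for _ in range(2000)]

grid = [0.8 + 0.01 * k for k in range(150)]          # xi in [0.8, 2.3)
xi_hat = max(grid, key=lambda x: loglik(nu_profile(x, ys), x, ys))
nu_hat = nu_profile(xi_hat, ys)
assert abs(xi_hat - xi_t) < 0.3 and abs(nu_hat - nu_t) < 0.3
```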
Table 1 and Table 2 summarize a comparative analysis between the true and estimated values of ν and ξ across varying sample sizes:
The simulation study demonstrates the consistency and reliability of the maximum likelihood estimators for the parameters ν and ξ of the Inverted Topp–Leone distribution. As the sample size n increases, the following trends are observed:
  • For parameter ν:
    - The bias decreases progressively, from 0.1455 at n = 100 to 0.0128 at n = 1000, indicating that the estimator becomes increasingly unbiased.
    - The Mean Squared Error (MSE) also declines significantly, from 0.286 to 0.019, confirming improved accuracy.
    - The empirical confidence intervals become narrower with increasing n, reflecting higher estimator precision.
  • For parameter ξ:
    - The bias remains very close to zero across all sample sizes, showing remarkable stability and suggesting that the estimator is nearly unbiased even for small samples.
    - The MSE diminishes notably, from 0.057 at n = 100 to 0.0057 at n = 1000, indicating improved estimation accuracy with more data.
    - Similar to ν, the empirical confidence intervals for ξ tighten as n grows, further supporting the consistency of the estimator.
Overall, the results confirm the desirable asymptotic properties of the maximum likelihood estimators, particularly their unbiasedness and efficiency, as both bias and MSE decrease with larger sample sizes. These findings support the applicability of the proposed estimation procedure for practical use, especially in contexts where moderate to large samples are available.

Materials and Methods

Competing Exponential and Weibull models are likewise fitted via MLE. Model selection uses AIC, BIC and HQIC. PP plots compare the empirical cumulative distribution against the theoretical CDF of the fitted model. Figure 3 and Figure 4 display the updated plots with the corrected quantile function.

Information criteria and their interpretation

AIC.
Akaike Information Criterion: $\mathrm{AIC}=-2\,\mathrm{LL}+2k$.
Balances in-sample fit (LL) with a 2 k penalty for complexity. A lower AIC indicates better future predictive ability (in the Kullback–Leibler sense).
BIC.
Bayesian Information Criterion (Schwarz): $\mathrm{BIC}=-2\,\mathrm{LL}+k\ln n$.
Stronger penalty increasing with ln n . Favors simpler models as n increases; it is consistent (selects the “true” model if it is among the candidates).
HQIC.
Hannan–Quinn Information Criterion: $\mathrm{HQIC}=-2\,\mathrm{LL}+2k\ln\ln n$.
Intermediate penalty between AIC and BIC. Also consistent, but less strict than BIC for moderate sample sizes; useful when AIC overfits and BIC is too conservative.
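The three criteria are straightforward to compute from a fitted model's log-likelihood. The sketch below uses the ITL log-likelihood reported later in the tables (LL = −55.5384, k = 2 parameters); taking n = 34 (the size of Dataset 1) is an assumption made for illustration.

```python
import math

def aic(ll, k):
    """Akaike Information Criterion: -2 LL + 2k."""
    return -2.0 * ll + 2.0 * k

def bic(ll, k, n):
    """Bayesian (Schwarz) Information Criterion: -2 LL + k ln n."""
    return -2.0 * ll + k * math.log(n)

def hqic(ll, k, n):
    """Hannan-Quinn Information Criterion: -2 LL + 2k ln ln n."""
    return -2.0 * ll + 2.0 * k * math.log(math.log(n))

ll, k, n = -55.5384, 2, 34
print(round(aic(ll, k), 4))      # 115.0768
print(round(bic(ll, k, n), 4))
print(round(hqic(ll, k, n), 4))
```

A lower value of any criterion indicates the preferred model; BIC's k ln n penalty exceeds AIC's 2k once n > e² ≈ 7.4, which is why BIC favors simpler models for all but the smallest samples.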

Results

Dataset 1

The following data set represents vinyl chloride concentrations (in μ g/L) from clean-up-gradient groundwater monitoring wells. This data set was first recorded by Bhaumik et al. [8]:
Table 3. Vinyl chloride concentrations ( μ g/L).
5.1 1.2 1.3 0.6 0.5 2.4 0.5 1.1 8.0 0.8 0.4 0.6
0.9 0.4 2.0 0.5 5.3 3.2 2.7 2.9 2.5 2.3 1.0 0.2
0.1 0.1 1.8 0.9 2.0 4.0 6.8 1.2 0.4 0.2
Table 4 summarises the fits. The ITL distribution emerges as the preferred model according to the AIC criterion, outperforming both the inverse Weibull and inverse gamma distributions. Specifically, ITL improves upon the inverse Weibull by Δ AIC = 96.7512 − 87.4843 = 9.27, and upon the inverse gamma by Δ AIC = 95.0147 − 87.4843 = 7.53. This indicates a substantially better fit, with ITL capturing the tail behavior more accurately (see Figure 3).

Dataset 2

The following data show the time between failures for 30 repairable items and were provided by Murthy et al. [25]. The data are listed as follows:
Table 5. Time between failures for 30 repairable items.
1.43 0.11 0.71 0.77 2.63 1.49 3.46 2.46 0.59 0.74 1.23 0.94
4.36 0.40 1.74 4.73 2.23 0.45 0.70 1.06 1.46 0.30 1.82 2.37
0.63 1.23 1.24 1.97 1.86 1.17
Table 6 summarises the comparative goodness-of-fit results for Dataset 2. According to the AIC criterion, the ITL distribution provides the best fit, with the lowest AIC value of 115.08. It outperforms the inverse Weibull distribution by a margin of Δ AIC = 121.25 − 115.08 = 6.17, and the inverse gamma distribution by Δ AIC = 122.13 − 115.08 = 7.05. These differences indicate that ITL captures the structure of the data more effectively, providing a better balance between fit quality and model complexity. Similar conclusions are supported by the BIC, CAIC, and HQIC values. For a visual assessment, refer to Figure 4.
Probability-Probability (PP) plots are graphical tools used to assess the goodness-of-fit of a statistical model. They compare the empirical cumulative distribution function (ECDF) of the observed data against the theoretical cumulative distribution function (CDF) of the proposed model. In a P-P plot, each data point represents a pair ( F empirical ( x i ) , F theoretical ( x i ) ) . If the model fits the data well, the points will lie approximately along the 45-degree reference line, indicating that the empirical and theoretical distributions are closely aligned. Deviations from this line suggest discrepancies between the data and the fitted model, particularly in the tails or center, depending on the curvature and pattern of the deviation.
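The PP-plot coordinates described above can be computed as follows for Dataset 1. This is a sketch under the assumption of the one-parameter ITL CDF F(x) = 1 − (1+2x)^ν (1+x)^{−2ν} with the scale fixed, and it uses the closed-form ML estimate of ν; the paper's two-parameter fit would plug in the fitted (ν̂, ξ̂) instead. Hazen plotting positions (i − 0.5)/n are one common choice for the empirical probabilities.

```python
import numpy as np

# Vinyl chloride concentrations (Dataset 1)
x = np.array([5.1, 1.2, 1.3, 0.6, 0.5, 2.4, 0.5, 1.1, 8.0, 0.8, 0.4, 0.6,
              0.9, 0.4, 2.0, 0.5, 5.3, 3.2, 2.7, 2.9, 2.5, 2.3, 1.0, 0.2,
              0.1, 0.1, 1.8, 0.9, 2.0, 4.0, 6.8, 1.2, 0.4, 0.2])

def itl_cdf(x, nu):
    """One-parameter ITL CDF: F(x) = 1 - (1+2x)^nu (1+x)^(-2 nu)."""
    return 1.0 - (1.0 + 2.0 * x) ** nu * (1.0 + x) ** (-2.0 * nu)

x_sorted = np.sort(x)
n = len(x_sorted)
nu_hat = n / (2.0 * np.log1p(x_sorted).sum() - np.log1p(2.0 * x_sorted).sum())

emp = (np.arange(1, n + 1) - 0.5) / n    # empirical probabilities (Hazen positions)
theo = itl_cdf(x_sorted, nu_hat)         # fitted theoretical CDF at the order statistics
for e, t in zip(emp[:3], theo[:3]):      # first few PP-plot points
    print(f"({e:.4f}, {t:.4f})")
```

Plotting the pairs (emp, theo) together with the 45-degree line then reproduces a PP plot of the kind shown in Figures 3 and 4; points near the diagonal indicate a good fit.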
As can be observed in Figure 3 and Figure 4, the Inverted Topp-Leone (ITL) distribution provides a satisfactory fit to both datasets. The points in both PP plots remain close to the diagonal reference line, demonstrating that the ITL distribution adequately captures the empirical behavior of the data. This visual evidence complements the numerical results reported in Table 4 and Table 6, further supporting the suitability of the ITL model for these data sets.

11. Conclusion

By using the classical method of transformation of variables, we have derived and defined an inverted Topp-Leone distribution (ITLD). This distribution has support on the positive real line and can be applied to lifetime data problems. By using standard definitions and results, several properties of this distribution, such as the CDF, moments, and Fisher information matrix, have been derived. Entropies and estimation of the parameters associated with the inverted Topp-Leone distribution (ITLD) have also been considered. The likelihood ratio ordering, which also implies other orderings, has been established. The mathematics involved in the derivation of the results is tractable, and therefore the model discussed in this article may serve as an alternative to many existing distributions defined on the positive real line.
The ITLD has proven to be a highly competitive alternative for modeling lifetime and reliability data. In particular, goodness-of-fit comparisons with classical distributions—such as the Inverse Weibull and Inverse Gamma—demonstrated that the ITLD provides superior performance across various datasets, as evidenced by lower values of AIC, BIC, CAIC, and HQIC, as well as better alignment with empirical distributions through PP-plots.
The Maximum Likelihood Estimation (MLE) procedure has been implemented to estimate the parameters ν and ξ of the ITLD. Simulation studies confirmed the consistency and asymptotic normality of the estimators, while also allowing the computation of biases, mean squared errors, and empirical confidence intervals for different sample sizes.
The proposed methodology for ITLD parameter estimation and simulation provides a robust framework that can be extended to more complex models, including bivariate or regression-based generalizations.
A future project is to construct bivariate, multivariate, matrix variate, and complex variate generalizations of the proposed distribution.

Acknowledgments

This research work was supported by the Sistema Universitario de Investigación, Universidad de Antioquia [project no. 2021-47390].

Appendix

The Pochhammer symbol $(a)_n$ is defined by $(a)_n = a(a+1)\cdots(a+n-1) = (a)_{n-1}(a+n-1)$ for $n = 1, 2, \ldots$, and $(a)_0 = 1$. Further, for $|z| < 1$,
$$
(1-z)^{-a} = \sum_{j=0}^{\infty} \frac{(a)_j}{j!}\, z^j.
$$
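The binomial series above is easy to verify numerically; the short check below computes the Pochhammer symbol by its product definition and compares a truncated series with the closed form (a, z, and the truncation length are arbitrary illustrative choices).

```python
import math

def poch(a, j):
    """Pochhammer symbol (a)_j = a (a+1) ... (a+j-1), with (a)_0 = 1."""
    out = 1.0
    for i in range(j):
        out *= a + i
    return out

a, z = 1.7, 0.3
series = sum(poch(a, j) / math.factorial(j) * z ** j for j in range(60))
print(abs(series - (1.0 - z) ** (-a)) < 1e-12)   # True
```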
The integral representation of the Gauss hypergeometric function is given as
$$
F(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)} \int_0^1 t^{a-1} (1-t)^{c-a-1} (1-zt)^{-b}\, \mathrm{d}t, \tag{A.1}
$$
where $\operatorname{Re}(c) > \operatorname{Re}(a) > 0$, $|\arg(1-z)| < \pi$. Note that, by expanding $(1-zt)^{-b}$, $|zt| < 1$, in (A.1) and integrating term by term in $t$, the series expansion for $F$ can be obtained.
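The integral representation (A.1) can be checked numerically against SciPy's implementation of the Gauss hypergeometric function (the parameter values are arbitrary choices satisfying Re(c) > Re(a) > 0):

```python
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

a, b, c, z = 1.5, 0.8, 3.2, 0.4
coef = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
# Numerically evaluate the Euler integral in (A.1)
val, _ = quad(lambda t: t**(a - 1) * (1 - t)**(c - a - 1) * (1 - z * t)**(-b), 0.0, 1.0)
print(abs(coef * val - hyp2f1(a, b, c, z)) < 1e-8)   # True
```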
$$
F(a,b;c;z) = (1-z)^{c-a-b}\, F(c-a, c-b; c; z) \qquad \text{(Euler transformation)}
$$
$$
F(a,b;c;z) = (1-z)^{-a}\, F\!\left(a, c-b; c; \frac{z}{z-1}\right) = (1-z)^{-b}\, F\!\left(c-a, b; c; \frac{z}{z-1}\right) \qquad \text{(Pfaff transformation)}
$$
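Both transformations can likewise be confirmed numerically with SciPy (the parameter values are arbitrary illustrative choices):

```python
from scipy.special import hyp2f1

a, b, c, z = 0.7, 1.4, 2.6, 0.35
lhs = hyp2f1(a, b, c, z)
# Euler transformation
euler = (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c, z)
# Pfaff transformation (first form); note the argument z/(z-1) is negative here
pfaff = (1 - z) ** (-a) * hyp2f1(a, c - b, c, z / (z - 1))
print(abs(lhs - euler) < 1e-10, abs(lhs - pfaff) < 1e-10)
```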
For properties and results the reader is referred to Luke [22].
The Lauricella hypergeometric function $F_D$ has the integral representation
$$
F_D^{(n)}(a, b_1, \ldots, b_n; c; z_1, \ldots, z_n) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)} \int_0^1 u^{a-1} (1-u)^{c-a-1} \prod_{i=1}^n (1-u z_i)^{-b_i}\, \mathrm{d}u,
$$
where $\operatorname{Re}(c) > \operatorname{Re}(a) > 0$ and $|\arg(1-z_i)| < \pi$, $i = 1, \ldots, n$. For $n = 2$, the Lauricella hypergeometric function $F_D$ is also known as Appell’s first hypergeometric function $F_1$.
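A useful sanity check on this integral representation: when $b_1 = \cdots = b_n = b$ and $z_1 = \cdots = z_n = z$, the product inside the integral collapses to $(1-uz)^{-nb}$, so $F_D^{(n)}$ reduces to the Gauss function $F(a, nb; c; z)$. This reduction can be verified numerically (the parameter values below are arbitrary illustrative choices):

```python
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

a, b, c, z, n = 1.2, 0.5, 3.0, 0.3, 3
coef = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
# F_D integral with equal b_i and equal z_i: the product is (1 - u z)^(-n b)
val, _ = quad(lambda u: u**(a - 1) * (1 - u)**(c - a - 1) * (1 - u * z)**(-n * b), 0.0, 1.0)
print(abs(coef * val - hyp2f1(a, n * b, c, z)) < 1e-8)   # True
```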
For further results and properties of these functions the reader is referred to Srivastava and Karlsson [37], and Prudnikov, Brychkov and Marichev [35, Sec. 7.2.4].
Finally, we define the beta type 1 and beta type 2 distributions. These definitions can be found in Johnson, Kotz and Balakrishnan [18].
Definition A.1.
The random variable $X$ is said to have a beta type 1 distribution with parameters $(a,b)$, $a > 0$, $b > 0$, denoted as $X \sim B1(a,b)$, if its pdf is given by $\{\Gamma(a+b)/[\Gamma(a)\Gamma(b)]\}\, x^{a-1} (1-x)^{b-1}$, $0 < x < 1$.
Definition A.2.
The random variable $X$ is said to have a beta type 2 (inverted beta) distribution with parameters $(a,b)$, denoted as $X \sim B2(a,b)$, if its pdf is given by $\{\Gamma(a+b)/[\Gamma(a)\Gamma(b)]\}\, x^{a-1} (1+x)^{-(a+b)}$, $x > 0$, $a > 0$, $b > 0$.
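As a quick numerical check on the B2 definition, the sketch below verifies that the density integrates to one and that its mean equals $a/(b-1)$ for $b > 1$, a standard property of the beta type 2 (beta prime) distribution:

```python
from math import gamma
from scipy.integrate import quad

def b2_pdf(x, a, b):
    """Beta type 2 (inverted beta) density on x > 0."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 + x)**(-(a + b))

a, b = 2.5, 3.0
total, _ = quad(lambda x: b2_pdf(x, a, b), 0.0, float("inf"))
mean, _ = quad(lambda x: x * b2_pdf(x, a, b), 0.0, float("inf"))
print(abs(total - 1.0) < 1e-7, abs(mean - a / (b - 1)) < 1e-6)
```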

References

  1. M. Masoom Ali and Jungsoo Woo, Inference on reliability P(Y < X) in a p-dimensional Rayleigh distribution, Mathematical and Computer Modelling 42 (2005), 367–373. [CrossRef]
  2. Bander Al-Zahrani, Goodness-of-fit for the Topp-Leone distribution with unknown parameters, Appl. Math. Sci. (Ruse), 6, (2012), no. 128, 6355–6363. [CrossRef]
  3. Bander Al-Zahrani, Ali A. Al-Shomrani, Inference on stress-strength reliability from Topp-Leone distributions, J. King Saud Univ. Sci., 24, (2011), no. 1, 73–88.
  4. N. Balakrishnan, Chin-Diew Lai, Continuous Bivariate Distributions, Second edition, Springer, Dordrecht, 2009.
  5. R. B. Bapat and Subhash C. Kochar, On likelihood-ratio ordering of order statistics, Linear Algebra Appl., 199 (1994), 281–291.
  6. Félix Belzunce, Carolina Martínez-Riquelme and Julio Mulero, An introduction to stochastic orders. Elsevier/Academic Press, Amsterdam, 2016.
  7. Philip J. Boland, Emad El-Neweihi and Frank Proschan, Applications of the hazard rate ordering in reliability and order statistics, J. Appl. Probab., 31 (1994), no. 1, 180–192.
  8. D. K. Bhaumik, K. C. Kapur, T. K. Datta and R. D. Gibbons, Testing parameters of a gamma distribution for small samples, Technometrics, 51 (2009), no. 3, 326–334.
  9. Amal S. Hassan, Mohammed Elgarhy and Randa Ragab, Statistical properties and estimation of inverted Topp-Leone distribution, J. Stat. Appl. Pro., 9 (2020), no. 2, 319–331.
  10. Daya K. Nagar, Edwin Zarrazola and Luz Estela Sánchez, Entropies and Fisher information matrix for extended beta distribution, Applied Mathematical Sciences, 9 (2015), no. 80, 3983–3994.
  11. Husam Awni Bayoud, Estimating the shape parameter of the Topp-Leone distribution based on type I censored samples, Appl. Math. (Warsaw), 42 (2015), no. 2–3, 219–230.
  12. Husam Awni Bayoud, Admissible minimax estimators for the shape parameter of Topp-Leone distribution, Comm. Statist. Theory Methods, 45 (2016), no. 1, 71–82.
  13. Husam Awni Bayoud, Estimating the shape parameter of Topp-Leone distribution based on progressive type II censored samples, REVSTAT, 14 (2016), no. 4, 415–431.
  14. M. Elgarhy, N. Alsadat, A. S. Hassan, C. Chesneau and A.H. Abdel-Hamid, A New Asymmetric Modified Topp-Leone Distribution: Classical and Bayesian Estimations Under Progressive Type-II Censored Data with Applications, Symmetry, 15 (2023), 1396.
  15. Ali I. Genç, Moments of order statistics of Topp-Leone distribution, Statist. Papers, 53 (2012), no. 1, 117–131.
  16. M. E. Ghitany, S. Kotz, M. Xie, On some reliability measures and their stochastic orderings for the Topp-Leone distribution, J. Appl. Stat., 32, (2005), no. 7, 715–722.
  17. A. K. Gupta, D. K. Nagar, Matrix Variate Distributions, Chapman & Hall/CRC, Boca Raton, 2000.
  18. N. L. Johnson, S. Kotz, N. Balakrishnan, Continuous Univariate Distributions-2, Second Edition, John Wiley & Sons, New York, 1994.
  19. Samuel Kotz, Johan René van Dorp, Beyond Beta. Other Continuous Families of Distributions with Bounded Support and Applications, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2004.
  20. Samuel Kotz, Yan Lumelskii and Marianna Pensky, The stress-strength model and its generalizations. Theory and applications. World Scientific Publishing Co., Inc., River Edge, NJ, 2003.
  21. Samuel Kotz, Edith Seier, Kurtosis of the Topp-Leone distributions, Interstat, 2006, July, no 1.
  22. Y. L. Luke, The Special Functions and Their Approximations, Vol. 1, Academic Press, New York, 1969.
  23. Allen R. Miller, Remarks on a generalized beta function, J. Comput. Appl. Math., 100 (1998), no. 1, 23–32.
  24. S. M. T. K. MirMostafaee, M. Mahdizadeh, Milad Aminzadeh, Bayesian inference for the Topp-Leone distribution based on lower k-record values, Jpn. J. Ind. Appl. Math., 33, (2016), no. 3, 637–669.
  25. D. N. P. Murthy, M. Xie, and R. Jiang. Weibull Models. John Wiley & Sons, Hoboken, NJ, 2004.
  26. Saralees Nadarajah, Samuel Kotz, Moments of some J-shaped distributions, J. Appl. Stat., 30, (2003), no. 3, 311–317.
  27. Saralees Nadarajah, Shou Hsing Shih, Daya K. Nagar, A new bivariate beta distribution, Statistics, 51, (2017), no. 2, 455–474.
  28. Daya K. Nagar, S. Nadarajah and Idika E. Okorie, A new bivariate distribution with one marginal defined on the unit interval, Ann. Data Sci., 4 (2017), no. 3, 405–420.
  29. Daya K. Nagar, Edwin Zarrazola and Santiago Echeverri-Valencia, Bivariate Topp-Leone family of distributions, International Journal of Mathematics and Computer Science, 17 (2022), no. 3, 1007–1024.
  30. S. Nadarajah and K. Zografos, Expressions for Rényi and Shannon entropies for bivariate distributions, Inform. Sci., 170 (2005), no. 2-4, 173–189.
  31. Asok K. Nanda and Moshe Shaked, The hazard rate and the reversed hazard rate orders, with applications to order statistics, Ann. Inst. Statist. Math., 53 (2001), no. 4, 853–864.
  32. A. Rényi, On measures of entropy and information, in Proc. 4th Berkeley Sympos. Math. Statist. and Prob., Vol. I, Univ. California Press, Berkeley, Calif., pp. 547–561 (1961).
  33. C. E. Shannon, A mathematical theory of communication, Bell System Tech. J., 27 (1948), 379–423, 623–656.
  34. A. P. Prudnikov, Yu.A. Brychkov and O. I. Marichev, Integrals and Series. Elementary Functions, Vol. 1, Gordon and Breach, New York, 1986.
  35. A. P. Prudnikov, Yu.A. Brychkov and O. I. Marichev, Integrals and Series. More Special Functions, Vol. 3, Translated from the Russian by G. G. Gould. Gordon and Breach Science Publishers, New York, 1990.
  36. Moshe Shaked and J. George Shanthikumar, Stochastic Orders, Springer Series in Statistics, Springer, New York, 2007.
  37. H.M. Srivastava and P.W. Karlsson, Multiple Gaussian Hypergeometric Series, John Wiley & Sons, New York, USA (1985).
  38. Chester W. Topp and Fred C. Leone, A family of J-shaped frequency functions, J. Amer. Statist. Assoc., 50 (1955), 209–219.
  39. Donatella Vicari, Johan Rene van Dorp, Samuel Kotz, Two-sided generalized Topp and Leone (TS-GTL) distributions, J. Appl. Stat., 35, (2008), no. 9-10, 1115–1129.
  40. A. A. Zghoul, Order statistics from a family of J-shaped distributions, Metron, 68, (2010), no. 2, 127–136.
  41. A. A. Zghoul, Record values from a family of J-shaped distributions, Statistica, 71, (2011), no. 3, 355–365.
  42. K. Zografos, On maximum entropy characterization of Pearson’s type II and VII multivariate distributions, J. Multivariate Anal., 71 (1999), no. 1, 67–75.
  43. K. Zografos and S. Nadarajah, Expressions for Rényi and Shannon entropies for multivariate distributions, Statist. Probab. Lett., 71 (2005), no. 1, 71–84.
Figure 1. Graphs of the inverted Topp-Leone density for different values of ξ and ν .
Figure 3. PP-plot for ITL fit to Dataset 1.
Figure 4. PP–plot for ITL fit to Dataset 2.
Table 1. Comparison of True and Estimated Values for Parameter ν .
n   ν̂   Bias(ν̂)   MSE(ν̂)   CI lower   CI upper
100 2.145479 0.145479 0.285984 1.412837 3.395362
200 2.067830 0.067830 0.115006 1.543739 2.833557
500 2.026010 0.026010 0.040594 1.688080 2.464037
1000 2.012837 0.012837 0.019029 1.766096 2.305012
Table 2. Comparison of True and Estimated Values for Parameter ξ .
n   ξ̂   Bias(ξ̂)   MSE(ξ̂)   CI lower   CI upper
100 0.993807 -0.006193 0.057248 0.591871 1.531506
200 0.998264 -0.001736 0.028667 0.701223 1.358450
500 0.999845 -0.000155 0.011551 0.805474 1.228158
1000 0.999741 -0.000259 0.005676 0.857869 1.154789
Table 4. Comparison of Distributions (Dataset 1).
Model Log-Likelihood AIC BIC CAIC HQIC
ITL -55.5384 115.0768 118.1295 120.1295 116.1179
Inverse Weibull -58.6266 121.2532 124.3059 126.3059 122.2942
Inverse Gamma -59.0659 122.1319 125.1846 127.1846 123.1793
Table 6. Comparison of Distributions (Dataset 2).
Model Log-Likelihood AIC BIC CAIC HQIC
ITL -55.5384 115.0768 118.1295 120.1295 116.1179
Inverse Weibull -58.6266 121.2532 124.3059 126.3059 122.2942
Inverse Gamma -59.0659 122.1319 125.1846 127.1846 123.1793
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.