Some Results on Cumulative Residual Inaccuracy Measure of k-Record Values

Preprint (not peer-reviewed); a peer-reviewed version of this article also exists. Submitted and posted: 07 November 2025.
Abstract
Herein, we consider the significance of the cumulative residual entropy (CRE) and its numerous generalizations. This article extends the cumulative residual inaccuracy, as proposed by Taneja and Kumar, to k-record values. We examine several properties of this measure, investigate some stochastic ordering results, and evaluate the proposed measure for a number of distributions that frequently arise in realistic scenarios and have applications across many fields of science and engineering.

1. Introduction

The pivotal event that established the field of information theory was the 1948 seminal paper by Claude E. Shannon [2], in which he introduced the concept of uncertainty for a continuous random variable $X$ with probability density function $f(x)$ as:
$$H(f) = -\int_0^{\infty} f(x) \ln f(x)\, dx. \tag{1.1}$$
After the work of Shannon, there was enormous development in this field, resulting in a large literature in which numerous information-theoretic measures were introduced. Kerridge [3] proposed the inaccuracy measure, a generalization of Shannon entropy:
$$H(f,g) = -\int_0^{\infty} f(x) \ln g(x)\, dx. \tag{1.2}$$
Here, $f(x)$ is the actual distribution, whereas $g(x)$ is the predicted one. In order to address certain limitations associated with Shannon's entropy measure, Rao et al. [4] introduced a novel measure of uncertainty, referred to as the cumulative residual entropy. In this measure, the probability density function $f(x)$ is replaced by the survival function $\bar{F}(x)$ of the random variable $X$ (the function giving the probability that a patient, device, or other object of interest survives past a certain time), defined as
$$H(\bar{F}) = -\int_0^{\infty} \bar{F}(x) \ln \bar{F}(x)\, dx. \tag{1.3}$$
This measure of uncertainty is especially relevant for characterizing information in problems related to the aging characteristics of reliability theory, which are based on the mean residual life function. Following Rao et al.'s introduction of the cumulative residual entropy [4], it has garnered significant attention from researchers. Asadi and Zohrevand [5] examined its dynamic variant to elucidate the effect of age on the information pertaining to the residual lifetime of a system or component. In a similar vein to the cumulative residual entropy (1.3), Di Crescenzo and Longobardi [6] presented the cumulative entropy, which is particularly applicable to problems associated with inactivity time. Sunoj and Linu [7] introduced the cumulative version of Rényi's entropy. Taneja and Kumar [1] expanded the concept of cumulative residual entropy to the cumulative residual inaccuracy (CRI) and, subsequently, the dynamic cumulative inaccuracy (DCI), while also investigating some of their properties. The measure proposed by Taneja and Kumar [1] is defined as:
$$H(\bar{F},\bar{G}) = -\int_0^{\infty} \bar{F}(x) \ln \bar{G}(x)\, dx. \tag{1.4}$$
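Since (1.4) is a one-dimensional integral, it is straightforward to evaluate numerically. The following minimal sketch (our own illustration, not from the source; it assumes NumPy and SciPy, and the helper name `cri` and the finite integration cap are ours) computes the CRI between two exponential survival functions:

```python
import numpy as np
from scipy.integrate import quad

def cri(sf_true, sf_pred, upper=50.0):
    """Cumulative residual inaccuracy -∫ F̄(x) ln Ḡ(x) dx over [0, upper].

    A finite upper limit stands in for ∞; here it keeps quad away from
    the deep tail where ln Ḡ(x) would underflow in floating point.
    """
    value, _ = quad(lambda x: -sf_true(x) * np.log(sf_pred(x)), 0.0, upper)
    return value

# Exponential truth (rate 1) vs. exponential prediction (rate 2):
# -∫ e^{-x} ln(e^{-2x}) dx = 2 ∫ x e^{-x} dx = 2.
print(cri(lambda x: np.exp(-x), lambda x: np.exp(-2.0 * x)))  # ≈ 2.0
```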
Consider a sequence of independent and identically distributed continuous random variables $\{X_i, i \ge 1\}$ with distribution function $F(x)$ and probability density function $f(x)$. An observation $X_j$ is referred to as an upper record value if its value exceeds that of all preceding observations; a lower record value is defined analogously. However, in various scenarios where the expected waiting time between two record values is very large, this model of record values becomes inadequate. To address such situations, a model of k-records was introduced by Dziubdziela and Kopocinski [8]. They define the $n$th upper k-record as follows. For a positive integer $k$, set $T_{1,k} = k$ and, for $n \ge 2$,
$$T_{n,k} = \min\left\{\, j : j > T_{n-1,k},\; X_{j-k+1:j} > X_{T_{n-1,k}-k+1:T_{n-1,k}} \,\right\}.$$
Let $Y_{n,k} = X_{T_{n,k}-k+1:T_{n,k}}$, $n \ge 1$, where $X_{i:n}$ denotes the $i$th order statistic in a sample of size $n$. The sequence $\{Y_{n,k}, n \ge 1\}$ is referred to as the sequence of k-record values. An analogous definition can be given for lower k-record values. If we put $k = 1$, the results for usual records are obtained as a special case.
In the model of upper k-record values, taking $k = 2$ or $k = 3$ means that instead of observing the largest value, we observe the second or third largest value. We can choose whichever value of $k$ is most relevant in the context of a particular problem; claims in some branches of non-life insurance provide an example (see Kamps [9]). Accordingly, the concept of k-record values has been widely studied in the literature (refer to Berred [10], and Fashandi and Ahmadi [11]). The probability density function (pdf) and the survival function of the $n$th upper k-record are given, respectively, as:
$$f_{n,k}(x) = \frac{k^n}{\Gamma(n)} \left(-\ln \bar{F}(x)\right)^{n-1} \left(\bar{F}(x)\right)^{k-1} f(x) \tag{1.5}$$
and
$$\bar{F}_{n,k}(x) = \left(\bar{F}(x)\right)^k \sum_{j=0}^{n-1} \frac{\left(-k \ln \bar{F}(x)\right)^j}{j!}, \tag{1.6}$$
where $\Gamma$ is the (complete) Gamma function (e.g., see Arnold et al. [12]). Psarrakos and Navarro [13] expanded the notion of CRE by connecting it to the average duration between record values and to the relevation transform [14,15]; the latter describes the total lifetime of a component that, upon failure, is replaced by a new one of the same age. Tahmasebi and Eskandarzadeh [16] suggested a generalized cumulative entropy based on the $k$th lower record values, while also examining its dynamic variant incorporating the past lifetime. Goel et al. [17] investigated the measure of inaccuracy between the record distribution and the parent distribution, as well as between the k-record distribution and the parent distribution.
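For later reference, the density (1.5) and survival function (1.6) are simple to code directly. The sketch below (our own addition, assuming NumPy; the function names are ours) implements both and checks the identity $-\frac{d}{dx}\bar{F}_{n,k}(x) = f_{n,k}(x)$ by finite differences for an exponential parent:

```python
import numpy as np
from math import factorial, gamma

def krecord_pdf(x, n, k, sf, pdf):
    """f_{n,k}(x) = k^n/Γ(n) · (-ln F̄(x))^{n-1} · F̄(x)^{k-1} · f(x)   (1.5)."""
    u = -np.log(sf(x))
    return k**n / gamma(n) * u**(n - 1) * sf(x)**(k - 1) * pdf(x)

def krecord_sf(x, n, k, sf):
    """F̄_{n,k}(x) = F̄(x)^k · Σ_{j<n} (-k ln F̄(x))^j / j!   (1.6)."""
    u = -np.log(sf(x))
    return sf(x)**k * sum((k * u)**j / factorial(j) for j in range(n))

# Consistency check for a standard exponential parent:
sf = pdf = lambda x: np.exp(-x)
n, k, x0, h = 3, 2, 1.0, 1e-6
slope = -(krecord_sf(x0 + h, n, k, sf) - krecord_sf(x0 - h, n, k, sf)) / (2 * h)
print(slope, krecord_pdf(x0, n, k, sf, pdf))  # agree to ≈ 1e-9
```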
In this article, we study the cumulative residual inaccuracy contained in the sequence of k-record values. The structure of the paper is as follows. In Section 2, we introduce the extension of the cumulative residual inaccuracy measure to k-record values. In Section 3, we examine several properties of the proposed measure and establish some bounds for it. In Section 4, we investigate certain aspects of stochastic ordering. Section 5 presents a simplified expression for the cumulative residual inaccuracy to facilitate computations, Section 6 outlines an application to extremal quantum uncertainty, and concluding remarks are given in Section 7.

2. Cumulative Residual Inaccuracy Measure

Following the measure (1.4), we introduce the cumulative residual inaccuracy measure between the k-record distribution $\bar{F}_{n,k}(x)$ and the parent distribution $\bar{F}(x)$ as:
$$H(\bar{F}_{n,k},\bar{F}) = -\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx. \tag{2.1}$$
Using (1.6), we have:
$$H(\bar{F}_{n,k},\bar{F}) = -\int_0^{\infty} \left(\bar{F}(x)\right)^k \sum_{j=0}^{n-1} \frac{\left(-k \ln \bar{F}(x)\right)^j}{j!}\, \ln \bar{F}(x)\, dx = \sum_{j=0}^{n-1} \frac{k^j}{j!} \int_0^{\infty} \left(\bar{F}(x)\right)^k \left(-\ln \bar{F}(x)\right)^{j+1} dx. \tag{2.2}$$
After some rearrangement, we get:
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \int_0^{\infty} \frac{j+1}{k^2} \cdot \frac{1}{\lambda_F(x)} \cdot \frac{k^{j+2} \left(-\ln \bar{F}(x)\right)^{j+1} \left(\bar{F}(x)\right)^{k-1} f(x)}{\Gamma(j+2)}\, dx = \sum_{j=0}^{n-1} \int_0^{\infty} \frac{j+1}{k^2} \cdot \frac{f_{j+2,k}(x)}{\lambda_F(x)}\, dx = \sum_{j=0}^{n-1} \frac{j+1}{k^2}\, E_{f_{j+2,k}}\!\left[\frac{1}{\lambda_F(X)}\right]. \tag{2.3}$$
Here, $E$ stands for expectation and $\lambda_F(x) = f(x)/\bar{F}(x)$ is the hazard function corresponding to $F(x)$. The hazard function (also known as the failure rate, hazard rate, or force of mortality) measures the likelihood that, for example, death or failure occurs at a specific time, given that the event has not occurred before that time [18,19].
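As a quick numerical sanity check (our own addition, not in the source; it assumes NumPy and SciPy), one can compare the direct integral (2.1) with the expectation form (2.3) for an exponential parent, whose constant hazard $\lambda_F(x) = \lambda$ gives $E_{f_{j+2,k}}[1/\lambda_F(X)] = 1/\lambda$ exactly:

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

lam, n, k = 1.5, 3, 2
sf = lambda x: np.exp(-lam * x)
sf_nk = lambda x: sf(x)**k * sum((-k * np.log(sf(x)))**j / factorial(j)
                                 for j in range(n))   # Eq. (1.6)

direct, _ = quad(lambda x: -sf_nk(x) * np.log(sf(x)), 0.0, 50.0)   # Eq. (2.1)
expectation = sum((j + 1) / k**2 * (1.0 / lam) for j in range(n))  # Eq. (2.3)
print(direct, expectation)   # both ≈ 1.0 = n(n+1)/(2k²λ)
```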

3. Properties and Bounds for the Measure

In the present section, we study some of the properties of the proposed measure of cumulative inaccuracy as follows:
1.
$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{j+1}{k} \left[\mu_{j+2,k} - \mu_{j+1,k}\right]$, where $\mu_{n,k} = \int_0^{\infty} \bar{F}_{n,k}(x)\, dx$.
Proof. 
From (1.6), we can write:
$$\bar{F}_{j+2,k}(x) - \bar{F}_{j+1,k}(x) = \frac{\left(\bar{F}(x)\right)^k \left(-k \ln \bar{F}(x)\right)^{j+1}}{(j+1)!}. \tag{3.1}$$
Therefore, from (2.2) and (3.1), we get:
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{j+1}{k} \int_0^{\infty} \left[\bar{F}_{j+2,k}(x) - \bar{F}_{j+1,k}(x)\right] dx = \sum_{j=0}^{n-1} \frac{j+1}{k} \left[\mu_{j+2,k} - \mu_{j+1,k}\right]. \;\square$$
Remark 1.
For fixed $k$, $F_{n,k}(x)$ is a decreasing function of $n$; that is, $\bar{F}_{n,k}(x)$ is an increasing function of $n$, so $\bar{F}_{j+2,k} \ge \bar{F}_{j+1,k}$. From the previous result, we see that $H(\bar{F}_{n,k},\bar{F})$ is an increasing function of $n$.
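Property 1 is easy to verify in closed form for an exponential parent: from (1.6), the $m$th upper k-record of an $\mathrm{Exp}(\lambda)$ sequence has an Erlang (Gamma) survival function with mean $\mu_{m,k} = m/(k\lambda)$, so the telescoping sum collapses to $n(n+1)/(2k^2\lambda)$. A small check of this (our own illustration, plain Python):

```python
# μ_{m,k} = m/(kλ) for an Exp(λ) parent, since F̄_{m,k} in (1.6) is then
# the survival function of a Gamma(m, rate kλ) random variable.
lam, n, k = 1.5, 3, 2
mu = lambda m: m / (k * lam)
prop1 = sum((j + 1) / k * (mu(j + 2) - mu(j + 1)) for j in range(n))
print(prop1, n * (n + 1) / (2 * k**2 * lam))   # both 1.0
```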
2.
Consider two random variables $X$ and $Y$ with survival functions $\bar{F}(x)$ and $\bar{G}(y)$ respectively, such that $Y = \varphi(X)$, where $\varphi$ is a strictly increasing function, differentiable almost everywhere, with $\varphi(0) = 0$. Then
$$H(\bar{G}_{n,k},\bar{G}) = \sum_{j=0}^{n-1} \int_0^{\infty} \frac{k^j}{j!} \left(\bar{F}(x)\right)^k \left(-\ln \bar{F}(x)\right)^{j+1} \varphi'(x)\, dx. \tag{3.2}$$
Proof. 
From (2.2), we can write:
$$H(\bar{G}_{n,k},\bar{G}) = \sum_{j=0}^{n-1} \int_0^{\infty} \frac{k^j}{j!} \left(\bar{G}(y)\right)^k \left(-\ln \bar{G}(y)\right)^{j+1} dy. \tag{3.3}$$
Now $Y = \varphi(X)$ implies $\bar{G}(y) = \bar{F}(x)$ and $\bar{G}_{n,k}(y) = \bar{F}_{n,k}(x)$; also, $dy = \varphi'(x)\, dx$. Substituting these values into (3.3), the result is immediate. □
Remark 2.
In particular, if $Y = \varphi(X) = aX$ with $a > 0$, then $\varphi'(x) = a$ and (3.2) becomes:
$$H(\bar{G}_{n,k},\bar{G}) = a \sum_{j=0}^{n-1} \int_0^{\infty} \frac{k^j}{j!} \left(\bar{F}(x)\right)^k \left(-\ln \bar{F}(x)\right)^{j+1} dx = a\, H(\bar{F}_{n,k},\bar{F}).$$
3.
If $\bar{G}(x) = \left(\bar{F}(x)\right)^{\beta}$, where $\beta$ is an integer greater than 1, and $\bar{F}(x)$ and $\bar{G}(x)$ are the survival functions of $X$ and $Y$ respectively, then
$$H(\bar{G}_{n,k},\bar{G}) = \beta\, H(\bar{F}_{n,k\beta},\bar{F}).$$
Proof. 
We know that
$$H(\bar{G}_{n,k},\bar{G}) = \sum_{j=0}^{n-1} \int_0^{\infty} \frac{k^j}{j!} \left(\bar{G}(x)\right)^k \left(-\ln \bar{G}(x)\right)^{j+1} dx = \sum_{j=0}^{n-1} \int_0^{\infty} \frac{k^j}{j!} \left(\bar{F}(x)\right)^{k\beta} \left(-\beta \ln \bar{F}(x)\right)^{j+1} dx = \beta \sum_{j=0}^{n-1} \int_0^{\infty} \frac{(k\beta)^j}{j!} \left(\bar{F}(x)\right)^{k\beta} \left(-\ln \bar{F}(x)\right)^{j+1} dx = \beta\, H(\bar{F}_{n,k\beta},\bar{F}). \;\square$$
4.
Let $\eta(X) = -\int_0^{\infty} \left(\bar{F}(x)\right)^k \ln \bar{F}(x)\, dx$; then
$$H(\bar{F}_{n,k},\bar{F}) \ge \sum_{j=0}^{n-1} \frac{k^j}{j!} \left(\eta(X)\right)^{j+1}. \tag{3.4}$$
Proof. 
From (2.2),
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \int_0^{\infty} \frac{k^j}{j!} \left(\bar{F}(x)\right)^k \left(-\ln \bar{F}(x)\right)^{j+1} dx \ge \sum_{j=0}^{n-1} \int_0^{\infty} \frac{k^j}{j!} \left[\left(\bar{F}(x)\right)^k \left(-\ln \bar{F}(x)\right)\right]^{j+1} dx,$$
since $0 \le \bar{F}(x) \le 1$ implies $\left(\bar{F}(x)\right)^k \ge \left(\bar{F}(x)\right)^{k(j+1)}$. Now, using Jensen's inequality [20], we get:
$$H(\bar{F}_{n,k},\bar{F}) \ge \sum_{j=0}^{n-1} \frac{k^j}{j!} \left(-\int_0^{\infty} \left(\bar{F}(x)\right)^k \ln \bar{F}(x)\, dx\right)^{j+1} = \sum_{j=0}^{n-1} \frac{k^j}{j!} \left(\eta(X)\right)^{j+1}. \;\square$$
5.
Let $X$ denote an absolutely continuous non-negative random variable; then
$$H(\bar{F}_{n,1},\bar{F}) \ge \sum_{j=0}^{n-1} \frac{\left(H(\bar{F})\right)^{j+1}}{j!},$$
where $H(\bar{F})$ is given by (1.3).
Proof. 
This result follows directly from (3.4) by taking $k = 1$: setting $k = 1$ in $\eta(X) = -\int_0^{\infty} \left(\bar{F}(x)\right)^k \ln \bar{F}(x)\, dx$ yields exactly the cumulative residual entropy
$$H(\bar{F}) = -\int_0^{\infty} \bar{F}(x) \ln \bar{F}(x)\, dx.$$
This proves the result. □
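For an exponential parent the two sides of Property 5 are available in closed form ($H(\bar{F}) = 1/\lambda$ and, as derived in Example 4 below with $k = 1$, $H(\bar{F}_{n,1},\bar{F}) = n(n+1)/(2\lambda)$), so the bound can be spot-checked directly (our own illustration, plain Python):

```python
from math import factorial

lam, n = 1.0, 3
lhs = n * (n + 1) / (2 * lam)                           # H(F̄_{n,1}, F̄)
rhs = sum((1 / lam)**(j + 1) / factorial(j) for j in range(n))
print(lhs >= rhs, lhs, rhs)   # True 6.0 2.5
```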

4. Some Results on Stochastic Ordering

In this section, we prove some ordering properties of the cumulative inaccuracy measure for k-record values. First, we give the following definitions.
Definition 1.
A random variable $X$ is said to be smaller than $Y$ in the usual stochastic order, denoted $X \le_{st} Y$, if $\bar{F}(x) \le \bar{G}(x)$ for all $x$, where $\bar{F}(x)$ and $\bar{G}(x)$ are the survival functions of $X$ and $Y$ respectively.
Definition 2.
A random variable $X$ is said to be smaller than $Y$ in the likelihood ratio order, denoted $X \le_{lr} Y$, if $f_X(x)/g_Y(x)$ is non-increasing in $x$, where $f_X(x)$ and $g_Y(x)$ are the pdfs of $X$ and $Y$ respectively.
Proposition 1.
Let $E(X_{n,k})$ and $E(X)$ denote the expected values of the $n$th k-record value and of the parent distribution, and suppose $X_{n,k} \le_{st} X$. Then
$$(i)\quad H(\bar{F}_{n,k}) \le H(\bar{F}_{n,k},\bar{F}) - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)}, \tag{4.1}$$
$$(ii)\quad H(\bar{F}_{n,k}) \le H(\bar{F}) - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)}. \tag{4.2}$$
Here $H(\bar{F}_{n,k})$ and $H(\bar{F})$ denote the cumulative residual entropies of the random variables $X_{n,k}$ and $X$ respectively.
Proof. 
By the log-sum inequality, we have:
$$\int_0^{\infty} \bar{F}_{n,k}(x) \ln \frac{\bar{F}_{n,k}(x)}{\bar{F}(x)}\, dx \ge \left(\int_0^{\infty} \bar{F}_{n,k}(x)\, dx\right) \ln \frac{\int_0^{\infty} \bar{F}_{n,k}(x)\, dx}{\int_0^{\infty} \bar{F}(x)\, dx} = E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)}.$$
Hence, using the previous inequality, we obtain:
$$H(\bar{F}_{n,k}) = -\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}_{n,k}(x)\, dx \le -\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)} = H(\bar{F}_{n,k},\bar{F}) - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)},$$
which is (i).
Now, using $X_{n,k} \le_{st} X$ (so that $\bar{F}_{n,k}(x) \le \bar{F}(x)$ and hence $-\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx \le -\int_0^{\infty} \bar{F}(x) \ln \bar{F}(x)\, dx$) in the above inequality, we get:
$$H(\bar{F}_{n,k}) \le -\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)} \le -\int_0^{\infty} \bar{F}(x) \ln \bar{F}(x)\, dx - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)} = H(\bar{F}) - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)}.$$
This proves the result. □
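A concrete check of Proposition 1 (our own addition, assuming NumPy): for an $\mathrm{Exp}(\lambda)$ parent with $n = 1$, $k = 2$, the first 2-record is $Y_{1,2} = \min(X_1, X_2) \sim \mathrm{Exp}(2\lambda)$, so $X_{1,2} \le_{st} X$ holds and every quantity in (4.1) and (4.2) is available in closed form:

```python
import numpy as np

lam = 1.0
H_nk  = 1 / (2 * lam)          # CRE of Exp(2λ), i.e. H(F̄_{1,2})
H_cri = 1 / (4 * lam)          # H(F̄_{1,2}, F̄) = n(n+1)/(2k²λ)
H_F   = 1 / lam                # CRE of the parent Exp(λ)
E_nk, E_X = 1 / (2 * lam), 1 / lam
corr = E_nk * np.log(E_nk / E_X)
print(H_nk <= H_cri - corr)    # (4.1): True  (0.5 ≤ 0.597)
print(H_nk <= H_F   - corr)    # (4.2): True  (0.5 ≤ 1.347)
```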
Proposition 2.
Let $X > 0$ have density function $f(x)$ and cumulative distribution function $F(x)$. If $X \le_{st} X_{n,k}$, then
$$H(\bar{F}_{n,k},\bar{F}) \ge C\, e^{H(f_{n,k},f)}. \tag{4.3}$$
Here $C$ is a constant and $H(f_{n,k},f) = -\int_0^{\infty} f_{n,k}(x) \ln f(x)\, dx$ denotes the measure of inaccuracy between the $n$th k-record value and the parent distribution (see Goel et al. [21]).
Proof. 
Consider
$$\int_0^{\infty} f_{n,k}(x) \ln \frac{f(x)}{-\bar{F}_{n,k}(x) \ln \bar{F}(x)}\, dx \ge \ln \frac{1}{-\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx} = \ln \frac{1}{H(\bar{F}_{n,k},\bar{F})}.$$
The inequality above results from the log-sum inequality [22]. Expanding the logarithm, we get:
$$\int_0^{\infty} f_{n,k}(x) \ln f(x)\, dx - \int_0^{\infty} f_{n,k}(x) \ln\left(-\bar{F}_{n,k}(x) \ln \bar{F}(x)\right) dx \ge -\ln H(\bar{F}_{n,k},\bar{F}),$$
or
$$H(f_{n,k},f) + \int_0^{\infty} f_{n,k}(x) \ln\left(-\bar{F}_{n,k}(x) \ln \bar{F}(x)\right) dx \le \ln H(\bar{F}_{n,k},\bar{F}).$$
Using $X \le_{st} X_{n,k}$, so that $\bar{F}(x) \le \bar{F}_{n,k}(x)$ and therefore $\ln\left(-\bar{F}_{n,k}(x) \ln \bar{F}_{n,k}(x)\right) \le \ln\left(-\bar{F}_{n,k}(x) \ln \bar{F}(x)\right)$, we get:
$$H(f_{n,k},f) + \int_0^{\infty} f_{n,k}(x) \ln\left(-\bar{F}_{n,k}(x) \ln \bar{F}_{n,k}(x)\right) dx \le \ln H(\bar{F}_{n,k},\bar{F}). \tag{4.4}$$
Using the substitution $\bar{F}_{n,k}(x) = u$, for which $du = -f_{n,k}(x)\, dx$, we get:
$$\int_0^{\infty} f_{n,k}(x) \ln\left(-\bar{F}_{n,k}(x) \ln \bar{F}_{n,k}(x)\right) dx = \int_0^1 \ln\left(-u \ln u\right) du = \kappa \ \text{(say)}.$$
Therefore, by substituting this value into (4.4), we get:
$$H(f_{n,k},f) + \kappa \le \ln H(\bar{F}_{n,k},\bar{F}),$$
or
$$H(\bar{F}_{n,k},\bar{F}) \ge C\, e^{H(f_{n,k},f)},$$
where $C = e^{\kappa}$ is a constant. □
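The constant can be pinned down: $\kappa = \int_0^1 \ln(-u \ln u)\, du = -1-\gamma$, with $\gamma$ the Euler–Mascheroni constant, so $C = e^{\kappa} = e^{-1-\gamma} \approx 0.207$. A numerical confirmation (our own addition, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.integrate import quad

# ∫₀¹ ln u du = -1 and ∫₀¹ ln(-ln u) du = -γ, so κ = -1 - γ.
kappa, _ = quad(lambda u: np.log(-u * np.log(u)), 0.0, 1.0, limit=200)
print(kappa, -1 - np.euler_gamma)   # both ≈ -1.5772
print(np.exp(kappa))                # C ≈ 0.2066
```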
Proposition 3.
Let $X$ be a non-negative random variable; then
$$H(\bar{F}_{n,k}) \le \frac{E_{f_{n,k}}\!\left(X^{\beta+1}\right) \Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\, E^{\beta}(X_{n,k})}. \tag{4.5}$$
Proof. 
Let the parent variable $X$ follow a Weibull distribution [23] with reliability function
$$\bar{F}(x) = e^{-(\lambda x)^{\beta}}.$$
From (4.1),
$$H(\bar{F}_{n,k}) \le -\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx - E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)}.$$
For the Weibull distribution this becomes:
$$H(\bar{F}_{n,k}) \le -E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)} + \int_0^{\infty} \bar{F}_{n,k}(x)\, (\lambda x)^{\beta}\, dx = -E(X_{n,k}) \ln \frac{E(X_{n,k})}{E(X)} + \frac{\lambda^{\beta}}{\beta+1}\, E_{f_{n,k}}\!\left(X^{\beta+1}\right),$$
where we used $\int_0^{\infty} \bar{F}_{n,k}(x)\, x^{\beta}\, dx = E_{f_{n,k}}(X^{\beta+1})/(\beta+1)$.
Let $E(X) = \mu = \int_0^{\infty} \bar{F}(x)\, dx = \Gamma\!\left(1+\frac{1}{\beta}\right)/\lambda$. Hence,
$$H(\bar{F}_{n,k}) \le -E(X_{n,k}) \ln \frac{E(X_{n,k})}{\mu} + \frac{E_{f_{n,k}}\!\left(X^{\beta+1}\right) \Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\, \mu^{\beta}}. \tag{4.6}$$
For fixed $\beta$, the right-hand side of the above inequality is minimized (yielding the tightest bound) at:
$$\mu^{\beta} = \frac{\beta\, E_{f_{n,k}}\!\left(X^{\beta+1}\right) \Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\, E(X_{n,k})}.$$
Using this in (4.6), we get:
$$H(\bar{F}_{n,k}) \le -E(X_{n,k}) \ln \frac{E(X_{n,k})}{\mu} + \frac{E(X_{n,k})}{\beta} = \frac{E(X_{n,k})}{\beta} \ln\!\left[\frac{\beta\, E_{f_{n,k}}\!\left(X^{\beta+1}\right) \Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\, E^{\beta+1}(X_{n,k})}\right] + \frac{E(X_{n,k})}{\beta}.$$
Now, since $\ln x \le x - 1$, we get:
$$H(\bar{F}_{n,k}) \le \frac{E(X_{n,k})}{\beta} \left[\frac{\beta\, E_{f_{n,k}}\!\left(X^{\beta+1}\right) \Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\, E^{\beta+1}(X_{n,k})} - 1\right] + \frac{E(X_{n,k})}{\beta} = \frac{E_{f_{n,k}}\!\left(X^{\beta+1}\right) \Gamma^{\beta}\!\left(1+\frac{1}{\beta}\right)}{(\beta+1)\, E^{\beta}(X_{n,k})}.$$
This completes the proof. □
Remark 3. 
If we set $\beta = 1$ and $n = k = 1$ in the previous result, it reduces to
$$H(\bar{F}) \le \frac{E(X^2)}{2E(X)},$$
a bound obtained by Rao et al. [4].
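For instance, for $X \sim U(0,1)$ we have $H(\bar{F}) = 1/4$, while $E(X^2)/(2E(X)) = (1/3)/(2 \cdot 1/2) = 1/3$, consistent with the bound. A one-line numerical check (our own addition, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.integrate import quad

H, _ = quad(lambda x: -(1 - x) * np.log(1 - x), 0.0, 1.0)
print(H, H <= 1/3)   # 0.25 True
```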
Proposition 4. 
Suppose that the non-negative random variable $X$ has a decreasing hazard rate; then
$$H(\bar{F}_{n+1,k},\bar{F}) \ge H(\bar{F}_{n,k},\bar{F}).$$
Proof. 
Consider the pdfs of two consecutive k-record values, $f_{n,k}(x)$ and $f_{n+1,k}(x)$. Using (1.5), we get:
$$\frac{f_{n,k}(x)}{f_{n+1,k}(x)} = \frac{n}{-k \ln \bar{F}(x)},$$
which is a decreasing function of $x$. This implies $X_{n,k} \le_{lr} X_{n+1,k}$, and therefore $X_{n,k} \le_{st} X_{n+1,k}$, that is, $\bar{F}_{n,k}(x) \le \bar{F}_{n+1,k}(x)$ (for more details one may refer to Shaked and Shanthikumar [24]). Equivalently, $E(\psi(X_{n,k})) \le E(\psi(X_{n+1,k}))$ for all increasing functions $\psi$ for which these expectations exist.
Now, if $X$ has a decreasing hazard rate $\lambda_F(x)$, then $1/\lambda_F(x)$ is an increasing function. Therefore, by the above,
$$E\!\left[\frac{1}{\lambda_F(X_{n,k})}\right] \le E\!\left[\frac{1}{\lambda_F(X_{n+1,k})}\right],$$
where $\lambda_F(X_{n,k})$ and $\lambda_F(X_{n+1,k})$ denote the hazard rate evaluated at $X_{n,k}$ and $X_{n+1,k}$ respectively. From (2.3), we then see that
$$H(\bar{F}_{n,k},\bar{F}) \le H(\bar{F}_{n+1,k},\bar{F}).$$
This completes the proof. □

5. Cumulative Inaccuracy for Some Specific Distributions

In this section, we give a lemma providing a simplified expression for finding the cumulative inaccuracy measure for various distributions, and then we give some examples based on it.
The symbolic expressions and closed-form derivations of the inaccuracy measures $H(\bar{F}_{n,k},\bar{F})$, including equations (5.2) to (5.5), were obtained using the Maple computer algebra system (Maplesoft, Version 2025.1). Maple's symbolic integration, summation, and simplification capabilities were used to cross-check and symbolically collapse the infinite sums into closed-form expressions, which were then verified through numerical substitution. The procedures closely follow the recommendations documented in the Maple Programming Guide [25].
Lemma 1. 
Consider a random variable $X$ having survival function $\bar{F}(x)$; then the cumulative inaccuracy measure between the k-record values and the parent distribution is given by
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{k^j}{j!} \int_0^{\infty} \frac{u^{j+1}\, e^{-u(k+1)}}{f\!\left(F^{-1}(1-e^{-u})\right)}\, du. \tag{5.1}$$
Proof. 
From (2.2),
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{k^j}{j!} \int_0^{\infty} \left(\bar{F}(x)\right)^k \left(-\ln \bar{F}(x)\right)^{j+1} dx.$$
Putting $-\ln \bar{F}(x) = u$ in the previous equation, so that $x = F^{-1}(1-e^{-u})$ and $dx = \frac{e^{-u}}{f\left(F^{-1}(1-e^{-u})\right)}\, du$, we get:
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{k^j}{j!} \int_0^{\infty} \frac{u^{j+1}\, e^{-ku}\, e^{-u}}{f\!\left(F^{-1}(1-e^{-u})\right)}\, du = \sum_{j=0}^{n-1} \frac{k^j}{j!} \int_0^{\infty} \frac{u^{j+1}\, e^{-u(k+1)}}{f\!\left(F^{-1}(1-e^{-u})\right)}\, du. \;\square$$
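Lemma 1 translates directly into a generic numerical evaluator: supply the parent pdf $f$ and quantile function $F^{-1}$, then sum the $n$ integrals in (5.1). The sketch below (our own addition, assuming NumPy and SciPy; the names and truncation point are ours) reproduces the exponential closed form worked out in Example 4 below:

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def cri_krecord(n, k, pdf, quantile, upper=30.0):
    """Numerically evaluate (5.1).

    `upper` truncates the u-integral: beyond it e^{-u(k+1)} is negligible,
    and 1 - e^{-u} would saturate to 1.0 in double precision anyway.
    """
    total = 0.0
    for j in range(n):
        integrand = lambda u, j=j: (u**(j + 1) * np.exp(-u * (k + 1))
                                    / pdf(quantile(1.0 - np.exp(-u))))
        val, _ = quad(integrand, 0.0, upper)
        total += k**j / factorial(j) * val
    return total

# Exponential parent with rate α: Example 4 below gives n(n+1)/(2k²α).
alpha, n, k = 2.0, 3, 2
print(cri_krecord(n, k,
                  pdf=lambda x: alpha * np.exp(-alpha * x),
                  quantile=lambda p: -np.log(1.0 - p) / alpha),
      n * (n + 1) / (2 * k**2 * alpha))   # both ≈ 0.75
```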
Example 1. 
Consider the finite range distribution with pdf $f(x) = \frac{a}{b}\left(1-\frac{x}{b}\right)^{a-1}$, $a > 1$, $0 \le x \le b$, and distribution function $F(x) = 1-\left(1-\frac{x}{b}\right)^a$.
Then $F^{-1}(1-e^{-u}) = b\left(1-e^{-u/a}\right)$, which gives $f\!\left(F^{-1}(1-e^{-u})\right) = \frac{a}{b}\, e^{-u(a-1)/a}$. Therefore, using (5.1), we get:
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{k^j}{j!} \int_0^{\infty} \frac{u^{j+1}\, e^{-u(k+1)}}{f\!\left(F^{-1}(1-e^{-u})\right)}\, du = \sum_{j=0}^{n-1} \frac{k^j b}{j!\, a} \int_0^{\infty} u^{j+1}\, e^{-u(k+1)}\, e^{u(a-1)/a}\, du = \sum_{j=0}^{n-1} \frac{k^j b}{j!\, a} \int_0^{\infty} u^{j+1}\, e^{-u\left(k+\frac{1}{a}\right)}\, du.$$
Now, using the substitution $u\left(k+\frac{1}{a}\right) = t$ together with $\int_0^{\infty} t^{j+1} e^{-t}\, dt = \Gamma(j+2) = (j+1)!$, we have:
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{k^j b}{j!\, a \left(k+\frac{1}{a}\right)^{j+2}}\, (j+1)! = \frac{b}{a} \sum_{j=0}^{n-1} \frac{k^j (j+1)}{\left(k+\frac{1}{a}\right)^{j+2}} = \frac{ab\left[\,ka+1-(ka+n+1)\left(\frac{ka}{ka+1}\right)^{n}\right]}{ka+1}. \tag{5.2}$$
In particular, if $n = 3$ and $k = 1$, we obtain the inaccuracy measure between the third record value and the parent distribution:
$$H(\bar{F}_{3,1},\bar{F}) = \frac{b}{a} \sum_{j=0}^{2} \frac{j+1}{\left(1+\frac{1}{a}\right)^{j+2}} = \frac{ab\left(6a^2+4a+1\right)}{(a+1)^4}.$$
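Both forms of (5.2), and the $n = 3$, $k = 1$ special case above, can be confirmed against the direct integral $-\int_0^b \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx$ (our own check, assuming NumPy and SciPy):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

a, b, n, k = 2.0, 1.0, 3, 1
sf = lambda x: (1.0 - x / b)**a
sf_nk = lambda x: sf(x)**k * sum((-k * np.log(sf(x)))**j / factorial(j)
                                 for j in range(n))
direct, _ = quad(lambda x: -sf_nk(x) * np.log(sf(x)), 0.0, b)

r = k * a / (k * a + 1.0)
closed  = a * b * (k * a + 1 - (k * a + n + 1) * r**n) / (k * a + 1)
special = a * b * (6 * a**2 + 4 * a + 1) / (a + 1)**4    # n = 3, k = 1
print(direct, closed, special)   # all ≈ 0.8148
```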
Example 2. 
For a uniform distribution on $[0,b]$, putting $a = 1$ in (5.2) gives the inaccuracy measure corresponding to a uniform distribution:
$$H(\bar{F}_{n,k},\bar{F}) = b \sum_{j=0}^{n-1} \frac{k^j (j+1)}{(k+1)^{j+2}} = b\left[1-\frac{k+n+1}{k+1}\left(\frac{k}{k+1}\right)^{n}\right]. \tag{5.3}$$
Example 3. 
If $X$ is a random variable with a Weibull distribution having pdf $f(x) = \alpha\beta x^{\beta-1} e^{-\alpha x^{\beta}}$ for $x > 0$, $\alpha > 0$, $\beta > 0$, and survival function $\bar{F}(x) = e^{-\alpha x^{\beta}}$, then $F^{-1}(1-e^{-u}) = \left(\frac{u}{\alpha}\right)^{1/\beta}$. Substituting these values into (5.1), the inaccuracy measure works out to:
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{k^j\, \Gamma\!\left(j+1+\frac{1}{\beta}\right)}{j!\, \beta\, \alpha^{1/\beta}\, k^{\,j+1+\frac{1}{\beta}}} = \frac{\Gamma\!\left(n+1+\frac{1}{\beta}\right)}{(\beta+1)\,(n-1)!\; \alpha^{1/\beta}\, k^{1+\frac{1}{\beta}}}, \tag{5.4}$$
where the sum collapses via the identity $\sum_{j=0}^{n-1} \Gamma(j+a)/j! = \Gamma(n+a)/\left(a\,(n-1)!\right)$ with $a = 1+\frac{1}{\beta}$.
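A numerical confirmation of (5.4) against the direct integral (our own check, assuming NumPy and SciPy; the finite upper limit replaces $\infty$, the tail being negligible):

```python
import numpy as np
from math import factorial, gamma
from scipy.integrate import quad

alpha, beta, n, k = 1.5, 2.0, 2, 3
sf = lambda x: np.exp(-alpha * x**beta)
sf_nk = lambda x: sf(x)**k * sum((-k * np.log(sf(x)))**j / factorial(j)
                                 for j in range(n))
direct, _ = quad(lambda x: -sf_nk(x) * np.log(sf(x)), 0.0, 10.0)

closed = gamma(n + 1 + 1/beta) / ((beta + 1) * factorial(n - 1)
                                  * alpha**(1/beta) * k**(1 + 1/beta))
print(direct, closed)   # both ≈ 0.1741
```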
Example 4. 
If $X$ is an exponentially distributed random variable, then putting $\beta = 1$ in (5.4) gives the inaccuracy measure corresponding to an exponential distribution:
$$H(\bar{F}_{n,k},\bar{F}) = \sum_{j=0}^{n-1} \frac{j+1}{k^2 \alpha} = \frac{n(n+1)}{2k^2 \alpha}. \tag{5.5}$$
Remark 4. 
If we put $k = 1$, then $H(\bar{F}_{n,k},\bar{F})$ becomes $H(\bar{F}_n,\bar{F})$, the cumulative residual inaccuracy measure between the $n$th record value and the parent distribution; if, in addition, $n = 1$, it reduces to the cumulative residual entropy given by Rao et al. [4].

6. Application to Extremal Quantum Uncertainty

The framework introduced for cumulative residual inaccuracy (CRI) under k-record statistics admits a direct extension to quantify informational divergence in extremal quantum phenomena. Quantum information processing systems frequently exhibit rare, high-impact deviations such as maximum gate errors, decoherence spikes, or syndrome bursts during error-corrected qubit operations [26,27,28]. Such deviations, structurally analogous to k-record values, possess disproportionate informational weight within the noise ensemble [29].
Let $\bar{F}_{n,k}(x)$ denote the survival function of the $n$th k-record in the empirical quantum error distribution and $\bar{F}(x)$ represent the reference survival function under the baseline decoherence model. The residual inaccuracy functional,
$$H(\bar{F}_{n,k},\bar{F}) = -\int_0^{\infty} \bar{F}_{n,k}(x) \ln \bar{F}(x)\, dx, \tag{6.1}$$
acts as a scalar quantifier of discord between the statistical extremality observed experimentally and the theoretically predicted uncertainty field. The value of $H(\bar{F}_{n,k},\bar{F})$ increases as the observed tail deviates from the expected exponential or sub-exponential structure characteristic of nominal quantum noise processes [30,31].
In practice, the empirical parameters constituting $\bar{F}_{n,k}$ may be derived from extremal quantum error syndromes aggregated across multiple correction cycles or tomography datasets [32]. The reference model $\bar{F}(x)$, frequently exponential or Weibull distributed, aligns with conventional assumptions for decoherence and relaxation dynamics in open quantum systems [33,34]. Evaluation of (6.1) thus measures the incremental uncertainty generated by discrepancies between the empirical and modeled error extremality.
For a simple exponential case, where the empirical and theoretical densities are defined as $f(x) = \lambda e^{-\lambda x}$ and $g(x) = \lambda_0 e^{-\lambda_0 x}$ respectively, the inaccuracy measure (1.2) reduces to:
$$H(f,g) = -\ln \lambda_0 + \frac{\lambda_0}{\lambda},$$
illustrating that greater extremal variability ($\lambda < \lambda_0$) produces heightened uncertainty, indicating deviation from the designed fault-tolerant threshold [26,27].
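The closed form is easy to confirm numerically (our own check, assuming NumPy and SciPy; the parameter values are illustrative only):

```python
import numpy as np
from scipy.integrate import quad

lam, lam0 = 0.5, 1.0   # λ < λ₀: heavier-than-modeled empirical tail
f = lambda x: lam  * np.exp(-lam  * x)
g = lambda x: lam0 * np.exp(-lam0 * x)
H, _ = quad(lambda x: -f(x) * np.log(g(x)), 0.0, 100.0)
print(H, -np.log(lam0) + lam0 / lam)   # both 2.0
```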
The established formulation reframes the record-based uncertainty as an entropic performance indicator for quantum devices. Unlike mean-based entropies that track average uncertainty, the CRI integrates residual deviation structures, capturing both the persistence and asymmetry of extremal quantum events. This transition from expectation-based to tail-based informational analysis enables refined assessment of quantum reliability, measurement-chain stability, and the adequacy of noise models in practical implementations [35,36,37].
Thus, the cumulative residual inaccuracy measure derived here establishes a pragmatic statistical–entropic bridge between record-value information theory and quantum uncertainty quantification [36,38]. It provides a continuous, interpretable diagnostic of noise asymmetry and extremal behavior in quantum devices – an attractive analytical tool for performance characterization, fault-tolerant design, and risk-aware quantum control.

6.1. Relation to Entropic Uncertainty and the Logarithmic Schrödinger Equation

The CRI measure complements the broader framework of entropic uncertainty relations (EURs) in quantum mechanics, which quantify intrinsic limits on the joint knowledge of non-commuting observables through entropy sums or bounds [39,40,41]. The EUR, or Hirschman uncertainty, is defined as the sum of the temporal and spectral Shannon entropies, and it turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on this sum [42,43]. A related and insightful formulation arises from the logarithmic Schrödinger equation [44], which introduces a nonlinear quantum evolution incorporating the Shannon entropy functional into the dynamics. This equation underpins the Everett–Hirschman entropic uncertainty relation [41,43,45,46], which bounds the sum of the entropies of a quantum state's position and momentum probability distributions, providing a stronger uncertainty bound than classical variance-based principles. This has invaluable applications in quantum memory and quantum information [39,46,47]. Moreover, the logarithmic Schrödinger equation has applications in fundamental physics ranging from superfluids to modeling the potential of the Higgs boson [48,49,50].
While traditional entropic uncertainty relations operate in the phase space domain assessing incompatibility of observables, the CRI extends uncertainty quantification to residual tail distributions manifested in record statistics from sequences of quantum measurements or error syndromes. This links the entropic cost of observed extremal deviations to fundamental quantum limits framed by logarithmic nonlinearities in wavefunction evolution, thereby establishing a bridge from entropic uncertainty at the foundational level to operational uncertainty in quantum noise and error landscapes.
Integrating these perspectives enriches the interpretation of CRI as an entropy-informed measure capturing layered quantum uncertainty from intrinsic measurement indeterminacy characterized by the Everett–Hirschman framework to emergent deviations evidenced in cumulative residual inaccuracy of extremal events. This multiscale entropic viewpoint informs both the physical and statistical analysis of next-generation quantum information devices.

7. Conclusions

Record values and k-record values frequently arise in numerous practical scenarios; examples include athletic competitions [51,52], hydrology [53,54], and weather forecasting [55]. Kerridge's measure of inaccuracy [3] serves as a valuable tool for assessing the discrepancy between two distributions [3,56]. Therefore, in this communication, we present the cumulative residual inaccuracy between the distributions of the k-record values and the parent random variable. We also derive several properties of this measure, including stochastic ordering results. To reduce the computational effort required to determine the inaccuracy for various distributions, we offer a simplified expression for the proposed inaccuracy measure and demonstrate its application to several standard distributions.

Author Contributions

Conceptualization, methodology, and writing, Ritu Goel (the manuscript is largely based on her thesis work); validation, Vikas Kumar; writing—original draft preparation, Sarang Vehale; writing—review and editing, T.C. Scott; software, T.C. Scott. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CRE Cumulative Residual Entropy
CRI Cumulative Residual Inaccuracy
DCI Dynamic Cumulative Inaccuracy
MDPI Multidisciplinary Digital Publishing Institute
pdf Probability density function

References

  1. Taneja, H.C.; Kumar, V. On dynamic cumulative residual inaccuracy measure. In Proceedings of the World Congress on Engineering (WCE), Vol. I, London, U.K., 6–8 July 2012.
  2. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–432.
  3. Kerridge, D.F. Inaccuracy and inference. J. R. Stat. Soc., Ser. B 1961, 23, 184–194.
  4. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228.
  5. Asadi, M.; Zohrevand, Y. On the dynamic cumulative residual entropy. J. Stat. Plann. Inference 2007, 137, 1931–1941.
  6. Di Crescenzo, A.; Longobardi, M. On cumulative entropies. J. Stat. Plann. Inference 2009, 139, 4072–4087.
  7. Sunoj, S.M.; Linu, M.N. Dynamic cumulative residual Rényi's entropy. Statistics 2012, 46, 41–56.
  8. Dziubdziela, W.; Kopocinski, B. Limiting properties of the kth record values. Appl. Math. 1976, 15, 187–190.
  9. Kamps, U. A concept of generalized order statistics. J. Stat. Plann. Inference 1995, 48, 1–23.
  10. Berred, M. k-Record values and the extreme-value index. J. Stat. Plann. Inference 1995, 45, 49–63.
  11. Fashandi, M.; Ahmadi, J. Characterizations of symmetric distributions based on Rényi entropy. Stat. Probab. Lett. 2012, 82, 798–804.
  12. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. Records; Wiley: New York, 1998.
  13. Psarrakos, G.; Navarro, J. Generalized cumulative residual entropy and record values. Metrika 2013, 76, 623–640.
  14. Krakowski, M. The relevation transform and a generalization of the gamma distribution function. RAIRO - Operations Research - Recherche Opérationnelle 1973, 7, 107–120.
  15. Sankaran, P.G.; Dileep Kumar, M. Reliability properties of proportional hazards relevation transform. Metrika 2019, 82, 441–456.
  16. Tahmasebi, S.; Eskandarzadeh, M. Generalized cumulative entropy based on kth lower record values. Stat. Probab. Lett. 2017, 126, 164–172.
  17. Goel, R.; Taneja, H.C.; Kumar, V. Measure of entropy for past lifetime and record statistics. Physica A 2018, 503, 623–631.
  18. Weisstein, E.W. Hazard Function. https://mathworld.wolfram.com/HazardFunction.html, 2025. From MathWorld – A Wolfram Resource.
  19. Liberto, D. Hazard Rate: Definition, How to Calculate, and Example. https://www.investopedia.com/terms/h/hazard-rate.asp, 2025.
  20. Jensen, J.L.W.V. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Mathematica 1906, 30, 175–193.
  21. Goel, R.; Taneja, H.C.; Kumar, V. Measure of inaccuracy and k-record statistics. Bull. Calcutta Math. Soc. 2018, 110, 151–166.
  22. Dannan, F.M.; Neff, P.; Thiel, C. On the sum of squared logarithms inequality and related inequalities. J. Math. Inequal. 2016, 10, 1–17.
  23. Cameron, A.C.; Trivedi, P.K. Preface. In Microeconometrics; Cambridge University Press: Cambridge, 2005; pp. xxi–xxii.
  24. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: New York, 2007.
  25. Bernardin, L.; Chin, P.; DeMarco, P.; Geddes, K.O.; Hare, D.E.G.; Heal, K.M.; Labahn, G.; May, J.P.; McCarron, J.; Monagan, M.B.; et al. Maple Programming Guide; Maplesoft, a division of Waterloo Maple, Inc.: Toronto, 2012.
  26. Kribs, D.W.; Pasieka, A.; Życzkowski, K. Entropy of a Quantum Error Correction Code; World Scientific, 2008.
  27. Nielsen, M.A. Information-Theoretic Approach to Quantum Error Correction. Proceedings of the Royal Society A 1998, 454, 277–304.
  28. Cafaro, C. An Entropic Analysis of Approximate Quantum Error Correction. Physica A 2014, 408, 1–18.
  29. Bueno, V.D.C.; Balakrishnan, N. A Cumulative Residual Inaccuracy Measure for Coherent Systems; Cambridge University Press, 2020.
  30. Choi, S.; et al. Quantum Error Correction in Scrambling Dynamics and Open Systems. Physical Review Letters 2020, 125, 030505.
  31. Li, Y. Statistical Mechanics of Quantum Error Correcting Codes. Physical Review B 2021, 103, 104306.
  32. Onorati, E. Fitting Quantum Noise Models to Tomography Data. Quantum 2023, 7, 1197.
  33. Barrios, R.; et al. Exponentiated Weibull Fading Channel Model in Free-Space Optics. IEEE Transactions on Communications 2013, 61, 2594–2602.
  34. Preskill, J. Quantum Error Correction. Caltech Lecture Notes on Quantum Computation, Chapter 7, n.d.
  35. Golse, F.; Jin, S.; Liu, N. Quantum Algorithms for Uncertainty Quantification, 2022, arXiv:2209.11220.
  36. Wirsching, G. Quantum-Inspired Uncertainty Quantification. Frontiers in Computer Science 2022, 4, 662632.
  37. Engineering and Physical Sciences Research Council. Quantum Uncertainty Quantification (QUQ) Project Overview, University of Exeter, 2025.
  38. Halpern, N.; Hayden, P. Entropic uncertainty relations in quantum information theory. arXiv preprint arXiv:1909.08438, 2019.
  39. Berta, M.; Christandl, M.; Colbeck, R.; Renes, J.M.; Renner, R. The uncertainty principle in the presence of quantum memory. Nature Physics 2010, 6, 659–662.
  40. Wehner, S.; Winter, A. Entropic uncertainty relations—a survey. New Journal of Physics 2010, 12, 025009.
  41. Hirschman, I.I. A Note on Entropy. American Journal of Mathematics 1957, 79, 152–156.
  42. Beckner, W. Inequalities in Fourier Analysis. Annals of Mathematics 1975, 102, 159–182.
  43. Białynicki-Birula, I.; Mycielski, J. Uncertainty relations for information entropy in wave mechanics. Communications in Mathematical Physics 1975, 44, 129–132.
  44. Bialynicki-Birula, I.; Mycielski, J. Nonlinear wave mechanics. Annals of Physics 1976, 100, 62–93.
  45. Everett, H. Generalized Lagrange Multipliers in the Problem of Boltzmann's Entropy. Annals of Mathematics 1957, 66, 591–600.
  46. Maassen, H.; Uffink, J.B.M. Generalized entropic uncertainty relations. Phys. Rev. Lett. 1988, 60, 1103–1106.
  47. Ding, Z.Y.; Yang, H.; Wang, D.; Yuan, H.; Yang, J.; Ye, L. Experimental investigation of entropic uncertainty relations and coherence uncertainty relations. Phys. Rev. A 2020, 101, 032101.
  48. Dzhunushaliev, V.; Zloshchastiev, K. Singularity-free model of electric charge in physical vacuum: non-zero spatial extent and mass generation. Open Physics 2013, 11, 325–335.
  49. Zloshchastiev, K.G. Origins of logarithmic nonlinearity in the theory of fluids and beyond. International Journal of Modern Physics B 2025, 39, 2530008.
  50. Scott, T.C.; Glasser, M.L. Kink Soliton Solutions in the Logarithmic Schrödinger Equation. Mathematics 2025, 13.
  51. Australia, A. Australian Para Athletics Records, 2025.
  52. Field, U.T. Record Applications.
  53. Teegavarapu, R. Statistical Analysis of Hydrologic Variables. Technical report, Tufts University, 2019.
  54. Vogel, R.M. The value of streamflow record augmentation procedures for estimating low-flow statistics. Journal of Hydrology 1991, 127, 15–23.
  55. Dey, R. Weather forecasting using Convex Hull & K-Means clustering, 2015.
  56. Balakrishnan, N.; et al. Dispersion indices based on Kerridge inaccuracy measure and applications. Computational Statistics 2023.