Preprint
Article

This version is not peer-reviewed.

Parameter Estimation Problem for Doubly Geometric Process with the Gamma Distribution and Some Applications

Submitted:

06 March 2026

Posted:

09 March 2026


Abstract
The geometric process (GP) is one of the most important and widely used stochastic models in reliability theory. Although it is used in various areas of application, it has some limitations that cause difficulties. The doubly geometric process (DGP) has been proposed to overcome these limitations. The parameter estimation problem plays an important role for both the GP and the DGP. In this study, the parameter estimation problem for the DGP is considered when the distribution of the first interarrival time is assumed to be a gamma distribution with parameters α and β. First, the maximum likelihood (ML) method is used to estimate the model parameters. The asymptotic joint distribution of the estimators and their asymptotic unbiasedness and consistency properties are obtained. Then the small-sample performances of the estimators are evaluated by a simulation study. Finally, the applicability of the method is illustrated with two real-life data examples, and it is shown that these data sets can be modeled by a DGP. Additionally, the nonparametric estimators known as modified moment (MM) estimators are compared with the ML estimators. The results show that the ML estimators are more efficient than the MM estimators.

1. Introduction

In the statistical literature, a data set consisting of the occurrence times of successive events can generally be modeled by a counting process (CP). To determine a suitable stochastic CP model, it must first be tested whether the data set has a trend. Pekalp and Aydogdu [32] compared monotonic trend tests for some counting processes. If the successive interarrival times are independent and identically distributed (iid), that is, there is no trend, the data set may be modeled by a renewal process (RP). In real-life examples, however, the successive interarrival times often contain a monotone trend because of the aging effect and accumulated wear [8]. In this case, the trend can be modeled by a nonhomogeneous Poisson process (NHPP) [2,3,4,5,6,7,8,9,10]. A data set with a monotone trend can also be analyzed by a geometric process (GP), one of the most widely used and well-known models for monotone trends. Lam [17,18] first introduced the GP, and the process is used as a model in many areas of reliability; for details, see [36]. Lam [22] presented the GP theory and its applications, and real data sets with a monotone trend are modeled by the GP in [39].
Although the GP is the most commonly used of these models, it has some limitations that can cause difficulties. Two of them are as follows: the GP is not suitable for non-monotone interarrival times whose distributions have varying shape parameters, and the GP only allows logarithmic growth or explosive growth [7]. These restrictions cause difficulties in applications, so the GP can be unsuitable in the cases mentioned.
To overcome these difficulties, several stochastic models were developed by Wu and Scarf [36], Wu [37], and Wu and Wang [38]. One of the most important of these models is the DGP. Wu [37] compares the DGP with the GP and exhibits the advantages and preferability of the DGP. The definitions of the CP, RP, GP, and DGP are presented as follows.
Definition 1.1. Let $S_0 = 0$ and let $S_n = \sum_{k=1}^{n} X_k$, $n = 1, 2, \dots$, be the occurrence time of the $n$th event, so that $X_k = S_k - S_{k-1}$ is the interarrival time between the $(k-1)$th and the $k$th events. If $N(t) = \sup\{n \geq 0 : S_n \leq t\}$ is the number of events that occur in the interval $(0, t]$, then $\{N(t), t \geq 0\}$ is called a CP.
Definition 1.2. If $\{X_k, k = 1, 2, \dots\}$ are iid random variables with cumulative distribution function (cdf) $F$, the CP $\{N(t), t \geq 0\}$ corresponds to an RP.
Definition 1.3. Let $\{N(t), t \geq 0\}$ be a CP and let $\{X_k, k = 1, 2, \dots\}$ be the set of its interarrival times. If there exists a positive constant $a$, called the ratio parameter, such that $Y_k = a^{k-1} X_k$, $k = 1, 2, \dots$, are iid and form an RP with common cdf $F$ of the first interarrival time $X_1$, then the CP corresponds to a GP.
The cdf of $X_k$ is $F_k(x) = F(a^{k-1} x)$, $k = 1, 2, \dots$; hence the cdf of $X_1$ uniquely determines the distribution of each $X_k$. Now, let $E(X_1) = \mu$ and $\mathrm{Var}(X_1) = \sigma^2$. Then the expected value and the variance of $X_k$ are $E(X_k) = \mu / a^{k-1}$ and $\mathrm{Var}(X_k) = \sigma^2 / a^{2(k-1)}$, $k = 1, 2, \dots$.
Monotonicity plays an important role in the theory of stochastic processes. If $a < 1$ ($a > 1$), then $\{X_k, k = 1, 2, \dots\}$ is stochastically increasing (decreasing), and if $a = 1$ the GP corresponds to an RP.
Definition 1.4. Let $\{N(t), t \geq 0\}$ be a CP and let $\{X_k, k = 1, 2, \dots\}$ be the set of its interarrival times. If there exists a positive constant $a$, the ratio parameter, such that $Y_k = a^{k-1} X_k^{h(k)}$, $k = 1, 2, \dots$, are iid and form an RP with common cdf $F$ of $X_1$, then $\{N(t), t \geq 0\}$ is called a DGP, where $h(k)$ is a positive function of $k$ with $h(1) = 1$, $E(X_1) = \mu$, and $\mathrm{Var}(X_1) = \sigma^2$.
Wu [37] considered different choices of $h(k)$, such as $h(k) = (1+\log k)^b$, $h(k) = b^{k-1}$, $h(k) = b^{\log k}$, and $h(k) = 1 + b \log k$, where $b$ is a real number and $\log k$ denotes the logarithm of $k$ with base 10. He showed that the DGP with $h(k) = (1+\log k)^b$ fits the ten real data sets of [21] better than the other choices. Therefore, $h(k) = (1+\log k)^b$ is taken in this study.
Now assume that the random variable $X_1$ is continuous with a pdf $f$. Then the pdf of $X_k$ is

$$f_{X_k}(x) = a^{k-1} h(k)\, x^{h(k)-1} f\!\left(a^{k-1} x^{h(k)}\right), \quad x > 0. \qquad (1)$$

Moreover, the expected value and the variance of $X_k$ are

$$E(X_k) = a^{(1-k) h_1(k)} \int_0^{\infty} x^{h_1(k)} f(x)\, dx,$$

$$\mathrm{Var}(X_k) = a^{2(1-k) h_1(k)} \left[ \int_0^{\infty} x^{2 h_1(k)} f(x)\, dx - \left( \int_0^{\infty} x^{h_1(k)} f(x)\, dx \right)^{2} \right],$$

provided the above integrals are finite, where $h_1(k) = 1/h(k)$.
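Once a distribution is specified for $X_1$, these integrals reduce to fractional moments of $X_1$ and can be checked numerically. A minimal sketch (not part of the original derivation), assuming a gamma $X_1$ and purely illustrative values of $a$, $b$, and $k$:

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma

# Illustrative (assumed) values; any X_1 with finite fractional moments works.
a, b, k = 1.05, 2.0, 5
alpha, beta = 2.0, 1.0

h = (1 + math.log10(k)) ** b       # h(k) = (1 + log k)^b, log base 10
h1 = 1.0 / h                       # h_1(k) = 1 / h(k)

# E(X_k) = a^{(1-k) h_1(k)} * integral_0^inf x^{h_1(k)} f(x) dx
integral, _ = quad(lambda x: x**h1 * gamma.pdf(x, alpha, scale=beta), 0, np.inf)
mean_formula = a ** ((1 - k) * h1) * integral

# Monte Carlo check via X_k = (Y_k / a^{k-1})^{1/h(k)}, Y_k ~ gamma(alpha, beta)
rng = np.random.default_rng(0)
y = rng.gamma(alpha, beta, size=500_000)
mean_mc = ((y / a ** (k - 1)) ** h1).mean()
```

The two values agree up to Monte Carlo error, confirming that the integral formula and the defining transformation of the DGP describe the same moment.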
Wu [37] gives the monotonicity properties of the DGP as follows.
(i) If $P(X_1 > 1) = 1$, $b < 0$, and $0 < a < 1$, or if $P(0 < X_1 < 1) = 1$, $0 < b < 4.898226$, and $0 < a < 1$, then $\{X_k, k = 1, 2, \dots\}$ is stochastically increasing.
(ii) If $P(0 < X_1 < 1) = 1$, $b < 0$, and $a > 1$, or if $P(X_1 > 1) = 1$, $0 < b < 4.898226$, and $a > 1$, then $\{X_k, k = 1, 2, \dots\}$ is stochastically decreasing.
The parameter estimation problem arises naturally for the DGP and involves the parameters $a$, $b$, $\mu$, and $\sigma^2$, where $\mu$ and $\sigma^2$ are the mean and variance of the first interarrival time $X_1$. Therefore, the parameter estimation problem is as important for the DGP as it is for the GP. Some of the related studies on the GP are as follows.
Lam and Chan [20], Chan et al. [8], Aydoğdu et al. [3], and Kara et al. [13] used the lognormal, gamma, Weibull, and inverse Gaussian distributions, respectively, for the first interarrival time $X_1$ to estimate the parameters of a GP. Kara et al. [16] considered the parameter estimation problem for the gamma geometric process.
Pekalp and Aydogdu [30] considered the power series expansions for the probability distribution, mean value, and variance function of a geometric process with gamma interarrival times. Pekalp and Aydogdu [33] considered the parameter estimation problem for the mean value and variance functions in GP.
Aydogdu and Altındag [5] computed the mean value and variance functions in a geometric process, and Altındag [1] evaluated multiple process data in geometric processes with exponential failures.
Yılmaz et al. [40] used Bayesian inference with the Lindley distribution for a GP.
Pekalp and Aydogdu [26] studied the integral equation for the second moment function in a GP, Pekalp and Aydogdu [27] obtained an asymptotic solution of this integral equation, and Pekalp et al. [28] discriminated between the lifetime distributions of the ten real data sets used in [21].
Pekalp et al. [29,31,34] estimated the parameters of a DGP by the ML method under the exponential, Weibull, and lognormal distribution assumptions, respectively, for the first interarrival time $X_1$. Eroglu Inan [11] considered the parameter estimation problem for a DGP under the assumption that the first interarrival time has an inverse Gaussian distribution.
Although the gamma distribution is an important and widely used model in reliability theory and has been studied for the GP, to the best of our knowledge no study has yet considered it for the DGP, which removes the limitations of the GP model. Therefore, the parametric statistical inference problem for the DGP under the gamma distribution assumption is studied here.
The nonparametric estimation problem was studied for the GP and the α-series process by some authors [4,15,19,35,37]. Similarly, Jasim and Al-Qazaz [12] proposed nonparametric inference methods for the DGP.
In this study, the parameter estimation problem for a DGP is considered under the assumption that the first interarrival time $X_1$ has a gamma distribution. This paper is organized as follows. In Section 2, the gamma distribution and its probabilistic properties are reviewed. In Section 3, the log-likelihood function, its first partial derivatives, and the asymptotic joint distribution of the estimators are obtained. In Section 4, an extensive simulation study is performed; the simulated means, biases, and mean square error (MSE) values are calculated, and the small-sample performances of the estimators are evaluated for various parameter values. In Section 5, the importance of modeling with the DGP is demonstrated with two illustrative examples. Finally, in Section 6, the results are discussed.

2. Gamma Distribution

The gamma distribution is one of the most commonly used models for asymmetric data in the reliability and life testing areas; for details, see [9,23], as cited in [14].
The probability density function (pdf) of the gamma distribution is given by

$$f_X(x; \alpha, \beta) = \frac{1}{\Gamma(\alpha)\, \beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}, \quad x > 0;\ \alpha > 0,\ \beta > 0, \qquad (2)$$
where
α : shape parameter
β : scale parameter
If $\alpha = 1$, the gamma distribution reduces to an exponential distribution.
Let $X$ have a gamma distribution with parameters $\alpha$ and $\beta$, that is, $X \sim \mathrm{gamma}(\alpha, \beta)$. Some characteristics of the gamma distribution, namely the expected value, variance, skewness, and kurtosis, are $E(X) = \alpha\beta$, $\mathrm{Var}(X) = \alpha\beta^2$, $\gamma_1 = 2/\sqrt{\alpha}$, and $\gamma_2 = 6/\alpha$, respectively.
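These characteristics are easy to verify numerically; a small sketch using scipy.stats (the values $\alpha = 2$, $\beta = 3$ are illustrative only):

```python
from scipy.stats import gamma

alpha, beta = 2.0, 3.0  # illustrative shape and scale values only

# scipy's gamma takes the shape as the first argument and beta via `scale`;
# moments="mvsk" returns mean, variance, skewness, and excess kurtosis
mean, var, skew, kurt = gamma.stats(alpha, scale=beta, moments="mvsk")
```

Here mean $= \alpha\beta = 6$, var $= \alpha\beta^2 = 18$, skew $= 2/\sqrt{\alpha} \approx 1.414$, and kurt $= 6/\alpha = 3$, matching the formulas above.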

3. Statistical Inference for Gamma Distribution

Assume that $\{N(t), t \geq 0\}$ is a DGP with ratio parameter $a$ and $h(k) = (1+\log k)^b$, and that the first interarrival time $X_1$ has a $\mathrm{gamma}(\alpha, \beta)$ distribution. For a realization $\{x_1, x_2, \dots, x_n\}$ of the DGP, the random variables $Y_k = a^{k-1} X_k^{h(k)}$ are iid with the $\mathrm{gamma}(\alpha, \beta)$ distribution. From Equation (1), the pdf of $X_k$ is
$$f_{X_k}(x) = a^{k-1} (1+\log k)^b\, x^{(1+\log k)^b - 1}\, \frac{1}{\Gamma(\alpha)\, \beta^{\alpha}} \left(a^{k-1} x^{(1+\log k)^b}\right)^{\alpha - 1} e^{-a^{k-1} x^{(1+\log k)^b} / \beta}, \quad x > 0. \qquad (3)$$
Further, the $r$th moment of $X_k$ is easily seen to be

$$E(X_k^r) = \beta^{\,r (1+\log k)^{-b}}\, a^{(1-k)\, r (1+\log k)^{-b}}\, \frac{\Gamma\!\left(\alpha + r (1+\log k)^{-b}\right)}{\Gamma(\alpha)}, \quad r = 1, 2, \dots, \qquad (4)$$

and the variance of $X_k$ is

$$\mathrm{Var}(X_k) = \beta^{\,2(1+\log k)^{-b}}\, a^{2(1-k)(1+\log k)^{-b}} \left[ \frac{\Gamma\!\left(\alpha + 2 (1+\log k)^{-b}\right)}{\Gamma(\alpha)} - \left( \frac{\Gamma\!\left(\alpha + (1+\log k)^{-b}\right)}{\Gamma(\alpha)} \right)^{2} \right].$$

The likelihood function $L(a, b, \alpha, \beta)$ based on the realization $x_1, \dots, x_n$ is
$$L(a, b, \alpha, \beta) = \left( \frac{1}{\Gamma(\alpha)\, \beta^{\alpha}} \right)^{\! n} a^{\alpha n (n-1)/2} \prod_{k=1}^{n} (1+\log k)^b \prod_{k=1}^{n} x_k^{\alpha (1+\log k)^b - 1}\, e^{-\sum_{k=1}^{n} a^{k-1} x_k^{(1+\log k)^b} / \beta}. \qquad (5)$$
Then, the log-likelihood function is

$$\ln L(a, b, \alpha, \beta) = -n \ln \Gamma(\alpha) - n \alpha \ln \beta + \frac{\alpha n (n-1)}{2} \ln a + \sum_{k=1}^{n} \ln (1+\log k)^b + \sum_{k=1}^{n} \left( \alpha (1+\log k)^b - 1 \right) \ln x_k - \frac{1}{\beta} \sum_{k=1}^{n} a^{k-1} x_k^{(1+\log k)^b}. \qquad (6)$$
First, the partial derivatives of the log-likelihood function in Equation (6) are taken with respect to the parameters $a$, $b$, $\alpha$, and $\beta$, and the likelihood equations are obtained by setting these derivatives equal to zero:
$$\frac{\partial \ln L}{\partial a} = \frac{\alpha n (n-1)}{2a} - \frac{1}{\beta} \sum_{k=1}^{n} (k-1)\, a^{k-2} x_k^{(1+\log k)^b} = 0,$$

$$\frac{\partial \ln L}{\partial b} = \sum_{k=1}^{n} \ln(1+\log k) + \alpha \sum_{k=1}^{n} \ln x_k\, (1+\log k)^b \ln(1+\log k) - \frac{1}{\beta} \sum_{k=1}^{n} a^{k-1} x_k^{(1+\log k)^b} \ln x_k\, (1+\log k)^b \ln(1+\log k) = 0,$$

$$\frac{\partial \ln L}{\partial \alpha} = \frac{n(n-1)}{2} \ln a - n\, \frac{\Gamma'(\alpha)}{\Gamma(\alpha)} - n \ln \beta + \sum_{k=1}^{n} (1+\log k)^b \ln x_k = 0,$$

$$\frac{\partial \ln L}{\partial \beta} = -\frac{n\alpha}{\beta} + \frac{1}{\beta^{2}} \sum_{k=1}^{n} a^{k-1} x_k^{(1+\log k)^b} = 0,$$
where $\log$ and $\ln$ denote the logarithms with base 10 and base $e$, respectively.
These likelihood equations cannot be solved analytically, so the values of the ML estimators of $a$, $b$, $\alpha$, and $\beta$ must be calculated numerically. For this purpose, the estimates are obtained with the NMaximize routine of the Mathematica package, a constrained nonlinear optimization technique.
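The same numerical maximization can be carried out with any general-purpose optimizer; a sketch using scipy.optimize on simulated data (the Nelder–Mead method, the seed, and all starting values are illustrative assumptions, not the settings used in the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(theta, x):
    # Negative log-likelihood of the DGP with gamma(alpha, beta) first
    # interarrival time and h(k) = (1 + log10 k)^b.
    a, b, alpha, beta = theta
    if a <= 0 or alpha <= 0 or beta <= 0:
        return np.inf
    k = np.arange(1, len(x) + 1)
    h = (1 + np.log10(k)) ** b
    logy = (k - 1) * np.log(a) + h * np.log(x)   # log of Y_k = a^(k-1) x_k^h(k)
    ll = (np.sum(np.log(h)) - np.sum(np.log(x)) + alpha * np.sum(logy)
          - np.sum(np.exp(logy)) / beta
          - len(x) * (gammaln(alpha) + alpha * np.log(beta)))
    return -ll

# simulate a realization (true values a = 1.05, b = 2, alpha = 1, beta = 2)
rng = np.random.default_rng(1)
n = 200
k = np.arange(1, n + 1)
h = (1 + np.log10(k)) ** 2.0
x = (rng.gamma(1.0, 2.0, size=n) / 1.05 ** (k - 1)) ** (1 / h)

res = minimize(neg_loglik, x0=[1.0, 1.0, 1.5, 1.5], args=(x,),
               method="Nelder-Mead", options={"maxiter": 10000, "maxfev": 10000})
a_hat, b_hat, alpha_hat, beta_hat = res.x
```

The log-density summed here is exactly the logarithm of Equation (5); working on the log scale via `logy` avoids under- and overflow for large $k$.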
The Fisher information matrix $I$ and its inverse $I^{-1}$ are derived to obtain the asymptotic joint distribution of the ML estimators. It is well known that the diagonal elements of $I^{-1}$ give the asymptotic variances of the respective estimators.
The elements of the information matrix $I = (I_{ij})$ are

$$I_{11} = -E\!\left( \frac{\partial^2 \ln L}{\partial a^2} \right) = \frac{n(n-1)\alpha}{2a^2} + \frac{\alpha}{a^2} \sum_{k=1}^{n} (k^2 - 3k + 2),$$

$$I_{12} = -E\!\left( \frac{\partial^2 \ln L}{\partial a\, \partial b} \right) = \sum_{k=1}^{n} \ln(1+\log k)\, \frac{k-1}{a\beta} \left[ E(Y_k \ln Y_k) - (k-1)(\ln a)\, \alpha\beta \right],$$

$$I_{13} = -E\!\left( \frac{\partial^2 \ln L}{\partial a\, \partial \alpha} \right) = -\frac{n(n-1)}{2a}, \qquad I_{14} = -E\!\left( \frac{\partial^2 \ln L}{\partial a\, \partial \beta} \right) = -\frac{(n^2 - n)\,\alpha}{2a\beta},$$

$$I_{22} = -E\!\left( \frac{\partial^2 \ln L}{\partial b^2} \right) = \sum_{k=1}^{n} \left[ \ln(1+\log k) \right]^{2} \left\{ \frac{1}{\beta}\, E\!\left[ Y_k L_k (L_k + 1) \right] - \alpha\, E(L_k) \right\},$$

$$I_{23} = -E\!\left( \frac{\partial^2 \ln L}{\partial b\, \partial \alpha} \right) = -\sum_{k=1}^{n} \ln(1+\log k)\, E(L_k), \qquad I_{24} = -E\!\left( \frac{\partial^2 \ln L}{\partial b\, \partial \beta} \right) = -\frac{1}{\beta^{2}} \sum_{k=1}^{n} \ln(1+\log k)\, E(Y_k L_k),$$

$$I_{33} = -E\!\left( \frac{\partial^2 \ln L}{\partial \alpha^2} \right) = n\, \mathrm{polygamma}(1, \alpha), \qquad I_{34} = -E\!\left( \frac{\partial^2 \ln L}{\partial \alpha\, \partial \beta} \right) = \frac{n}{\beta}, \qquad I_{44} = -E\!\left( \frac{\partial^2 \ln L}{\partial \beta^2} \right) = \frac{n\alpha}{\beta^{2}},$$

with $L_k = \ln Y_k - (k-1) \ln a$,
where $\mathrm{polygamma}(n, x)$ denotes the $(n+1)$th derivative of the log-gamma function at $x$.
Then, by [6], the joint distribution of the ML estimators is asymptotically normal with mean vector $(a, b, \alpha, \beta)'$ and covariance matrix $I^{-1}$:

$$\left( \hat{a}, \hat{b}, \hat{\alpha}, \hat{\beta} \right)' \sim AN\!\left( (a, b, \alpha, \beta)',\, I^{-1} \right).$$
Although the elements of $I^{-1}$ can be obtained analytically, the expressions are very long, so they are not given here. Writing $I^{-1} = (v_{ij})_{4 \times 4}$, we have

$$\hat{a} \sim AN(a, v_{11}), \quad \hat{b} \sim AN(b, v_{22}), \quad \hat{\alpha} \sim AN(\alpha, v_{33}), \quad \hat{\beta} \sim AN(\beta, v_{44}).$$

From the theory of ML estimation, it is well known that these estimators are both asymptotically unbiased and consistent.
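When the analytic expressions for $I$ are unwieldy, the asymptotic variances can also be approximated by inverting the observed information, i.e., the Hessian of the negative log-likelihood. A self-contained sketch for the plain $\mathrm{gamma}(\alpha, \beta)$ submodel (where $I$ is known in closed form, so the finite-difference and exact standard errors can be compared; all numerical settings are illustrative):

```python
import numpy as np
from scipy.special import gammaln, polygamma

def negll(theta, x):
    """Negative log-likelihood of an iid gamma(alpha, beta) sample."""
    alpha, beta = theta
    n = len(x)
    return (n * (gammaln(alpha) + alpha * np.log(beta))
            - (alpha - 1) * np.sum(np.log(x)) + np.sum(x) / beta)

def hessian(f, theta, eps=1e-4):
    """Central-difference Hessian of f at theta."""
    theta = np.asarray(theta, dtype=float)
    d = len(theta)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            def shifted(di, dj):
                u = theta.copy()
                u[i] += di
                u[j] += dj
                return f(u)
            H[i, j] = (shifted(eps, eps) - shifted(eps, -eps)
                       - shifted(-eps, eps) + shifted(-eps, -eps)) / (4 * eps**2)
    return H

rng = np.random.default_rng(2)
alpha, beta, n = 2.0, 1.5, 5000
x = rng.gamma(alpha, beta, size=n)

# observed information (numerical Hessian at the true parameters) versus
# the exact Fisher information of the gamma model
H = hessian(lambda t: negll(t, x), [alpha, beta])
I = np.array([[n * polygamma(1, alpha), n / beta],
              [n / beta, n * alpha / beta**2]])
se_obs = np.sqrt(np.diag(np.linalg.inv(H)))
se_fisher = np.sqrt(np.diag(np.linalg.inv(I)))
```

For large $n$ the two sets of standard errors are close, which is the practical content of the asymptotic normality statement above.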

4. Simulation Analysis

In this section, a simulation study is carried out to evaluate the performance of the ML estimators; Mathematica is used for the computations. The simulated means, biases, and MSE values are calculated for the sample sizes $n = 30, 50, 100$, which are commonly used in applications. The trend is usually small for ratio parameter values close to 1 [22], so $a$ is taken as 0.95, 0.99, 1.01, and 1.05 throughout the study. The monotonic behavior of the DGP changes with the sign of $b$, so both a positive and a negative value, $b = 2$ and $b = -2$, are considered. The values of $\alpha$ and $\beta$ are taken to be 1 and 2, respectively. The simulation is carried out with 1000 replications, step by step:
1. Generate a sample $\{Y_1, Y_2, \dots, Y_n\}$ from a gamma distribution with parameters $\alpha$ and $\beta$.
2. Calculate a realization $\{X_1, X_2, \dots, X_n\}$ of the DGP with parameters $a$ and $b$ by using the equation $X_k = \left( Y_k / a^{k-1} \right)^{(1+\log k)^{-b}}$, $k = 1, 2, \dots, n$.
3. Calculate the mean, bias, and MSE values for each estimator based on the data set $\{X_1, X_2, \dots, X_n\}$. Table 1, Table 2, Table 3 and Table 4 include the corresponding simulated means, biases, and MSE values for each estimator.
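Steps 1 and 2 can be sketched as follows (NumPy in place of Mathematica; the parameter values are illustrative):

```python
import numpy as np

def simulate_dgp(n, a, b, alpha, beta, rng):
    """One realization X_1, ..., X_n of a DGP whose first interarrival time
    is gamma(alpha, beta) and h(k) = (1 + log10 k)^b."""
    k = np.arange(1, n + 1)
    h = (1 + np.log10(k)) ** b
    y = rng.gamma(alpha, beta, size=n)    # step 1: iid gamma(alpha, beta) sample
    return (y / a ** (k - 1)) ** (1 / h)  # step 2: X_k = (Y_k / a^(k-1))^(1/h(k))

rng = np.random.default_rng(3)
x = simulate_dgp(100, a=1.05, b=2.0, alpha=1.0, beta=2.0, rng=rng)
```

Note that the transformation is exact: applying $Y_k = a^{k-1} X_k^{h(k)}$ to the simulated realization recovers the underlying iid gamma sample.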
From Table 1, Table 2, Table 3 and Table 4, the MSE values become smaller and the absolute biases approach zero as $n$ increases. This expected behavior indicates that the estimators $\hat{a}$, $\hat{b}$, $\hat{\alpha}$, and $\hat{\beta}$ are asymptotically unbiased and consistent. The diagonal elements of $I^{-1}$ give the minimum variance bounds (MVB) of the estimators of $a$, $b$, $\alpha$, and $\beta$. The simulated variances of the ML estimates and the MVB values are given in Table 5, Table 6, Table 7 and Table 8 for all cases. When the MVB values are compared with the corresponding simulated variances, they are seen to approach each other as $n$ increases, so the estimators can be said to be highly efficient.

5. Data Analysis

In this section, two real data sets are presented to illustrate the data analysis and estimation procedure given in Section 3. Further, the performance of the ML estimators is compared with that of the nonparametric MM estimators given by Jasim and Al-Qazaz [12]. These data sets are the coal mining disasters data and the propulsion diesel engine failure data.
Now consider a data set $\{X_k, k = 1, 2, \dots\}$ that comes from a DGP. Jasim and Al-Qazaz [12] obtained the least squares (LS) estimates of the parameters $a$, $b$, and $\mu$ by minimizing the sum of squared errors (SSE)

$$SSE = \sum_{k=1}^{n} \left( X_k - \left( \frac{\mu}{a^{k-1}} \right)^{(1+\log k)^{-b}} \right)^{2}.$$
Taking $\hat{a}_{LS}$ and $\hat{b}_{LS}$ for the parameters $a$ and $b$ in the equation $Y_k = a^{k-1} X_k^{(1+\log k)^b}$, the prediction of $Y_k$ is

$$\hat{Y}_k = \hat{a}_{MM}^{\,k-1}\, X_k^{(1+\log k)^{\hat{b}_{MM}}}.$$
Then the MM estimators of the parameters $\mu$ and $\sigma^2$ are defined by

$$\hat{\mu}_{MM} = \bar{\hat{Y}} = \frac{1}{n} \sum_{k=1}^{n} \hat{Y}_k$$

and

$$\hat{\sigma}^{2}_{MM} = \begin{cases} \dfrac{1}{n-1} \displaystyle\sum_{k=1}^{n} \left( \hat{Y}_k - \bar{\hat{Y}} \right)^{2}, & a \neq 1,\ h(k) \neq 1,\ b \neq 0, \\[3mm] \dfrac{1}{n-1} \displaystyle\sum_{k=1}^{n} \left( X_k - \bar{X} \right)^{2}, & a = 1,\ h(k) = 1,\ b = 0, \end{cases}$$

where $\bar{X} = \frac{1}{n} \sum_{k=1}^{n} X_k$.
Note that $\hat{a}_{LS}$ and $\hat{b}_{LS}$ are called MM estimators by Jasim and Al-Qazaz [12]; accordingly, these estimators are denoted here by $\hat{a}_{MM}$ and $\hat{b}_{MM}$, respectively.
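The MM procedure can be sketched end to end as follows (a minimal illustration on simulated data; `scipy.optimize.minimize` merely stands in for a least-squares routine and is not necessarily what Jasim and Al-Qazaz [12] used):

```python
import numpy as np
from scipy.optimize import minimize

def sse(theta, x):
    """Sum of squared errors of the LS fit for (a, b, mu)."""
    a, b, mu = theta
    if a <= 0 or mu <= 0:
        return np.inf
    k = np.arange(1, len(x) + 1)
    h = (1 + np.log10(k)) ** b
    pred = (mu / a ** (k - 1)) ** (1 / h)
    return np.sum((x - pred) ** 2)

# simulated DGP data (true a = 1.05, b = 2, first interarrival time gamma(1, 2))
rng = np.random.default_rng(4)
n = 100
k = np.arange(1, n + 1)
h_true = (1 + np.log10(k)) ** 2.0
x = (rng.gamma(1.0, 2.0, size=n) / 1.05 ** (k - 1)) ** (1 / h_true)

# LS step: a_MM and b_MM minimize the SSE
res = minimize(sse, x0=[1.0, 1.0, 1.0], args=(x,), method="Nelder-Mead")
a_mm, b_mm, _ = res.x

# moment step: back-transform and take sample moments of the predicted Y_k
h_mm = (1 + np.log10(k)) ** b_mm
y_hat = a_mm ** (k - 1) * x ** h_mm
mu_mm = y_hat.mean()
sigma2_mm = y_hat.var(ddof=1)
```

The design mirrors the text: the LS step pins down $a$ and $b$, after which $\mu$ and $\sigma^2$ are estimated by the sample moments of the back-transformed $\hat{Y}_k$'s.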
In the DGP model with the gamma distribution, the prediction of $Y_k$ is given by

$$\hat{Y}_k = \hat{a}_{ML}^{\,k-1}\, X_k^{(1+\log k)^{\hat{b}_{ML}}}.$$
If it can be shown that the $\hat{Y}_k$'s are iid with a gamma distribution, the DGP with the gamma distribution can be used to model the data sets. The goodness of fit of the $\hat{Y}_k$'s to the gamma distribution with parameters $\alpha$ and $\beta$ is measured by the Kolmogorov–Smirnov (KS) test. The KS test statistics and the corresponding $p$ values for the data sets are given in Tables 10 and 12.
To compare the ML and MM methods, the prediction of $X_k$ for the data set $\{X_1, \dots, X_n\}$ is calculated as

$$\hat{X}_k = \begin{cases} \hat{a}_{ML}^{\,(1-k)(1+\log k)^{-\hat{b}_{ML}}} \left[ \hat{E}(X_1) \right]^{(1+\log k)^{-\hat{b}_{ML}}}, & \text{for the ML method}, \\[3mm] \left( \dfrac{\hat{\mu}_{MM}}{\hat{a}_{MM}^{\,k-1}} \right)^{\!(1+\log k)^{-\hat{b}_{MM}}}, & \text{for the MM method}, \end{cases}$$

where $\hat{E}(X_1)$ is obtained from Equation (4) with $k = 1$ and $r = 1$ as

$$\hat{E}(X_1) = \hat{\beta}\, \frac{\Gamma(\hat{\alpha} + 1)}{\Gamma(\hat{\alpha})} = \hat{\alpha} \hat{\beta}.$$

The mean squared error ($MSE^*$) and maximum percentage error (MPE) criteria are used to evaluate the performance of the DGPs with the ML and MM estimators. $MSE^*$ and MPE are defined by Lam et al. [21] as
$$MSE^{*} = \frac{1}{n} \sum_{k=1}^{n} \left( X_k - \hat{X}_k \right)^{2}$$

and

$$MPE = \max_{k} \frac{\left| S_k - \hat{S}_k \right|}{S_k}, \quad k = 1, 2, \dots, n,$$

where

$$S_k = X_1 + \dots + X_k \quad \text{and} \quad \hat{S}_k = \sum_{j=1}^{k} \hat{X}_j, \quad k = 1, 2, \dots, n.$$
* The notation $MSE^*$ is different from the MSE defined in Section 4.
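Both criteria are straightforward to compute from a data set and its fitted values; a short sketch (the numerical values below are hypothetical and for illustration only):

```python
import numpy as np

def mse_star(x, x_hat):
    """MSE* = (1/n) * sum_k (X_k - Xhat_k)^2."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return np.mean((x - x_hat) ** 2)

def mpe(x, x_hat):
    """MPE = max_k |S_k - Shat_k| / S_k over the cumulative times."""
    s = np.cumsum(np.asarray(x, float))
    s_hat = np.cumsum(np.asarray(x_hat, float))
    return np.max(np.abs(s - s_hat) / s)

# hypothetical observed and fitted interarrival times
x = [1.2, 0.9, 1.1, 0.8]
x_hat = [1.0, 1.0, 1.0, 1.0]
```

For these toy values, `mse_star(x, x_hat)` = 0.025 and `mpe(x, x_hat)` = 0.2/1.2 ≈ 0.167.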

5.1. First Data Set

The first data set, originally studied by Maguire et al. [25] (Data Set No. 1), consists of 190 observations of the interarrival times between coal mining disasters.
When data set 1 is modeled by a DGP with the gamma distribution, the ML estimates of the parameters $a$, $b$, $\alpha$, $\beta$, the MM estimates of the parameters $a$ and $b$, the corresponding $MSE^*$ and MPE values, and the estimates of $\mu$ and $\sigma^2$ are given in Table 9.
It can be seen from Table 9 that the ML estimators have the smaller $MSE^*$ and MPE values and thus outperform the MM estimators. For this data set, the KS test statistic and the corresponding $p$ value are given in Table 10.
It is seen from Table 10 that the gamma distribution is appropriate for data set 1. In Figure 1, the observed and predicted cumulative disaster times $S_k$ and $\hat{S}_k$ are plotted for both the ML and MM methods.
The performance of the ML and MM methods can also be compared through Figure 1: the DGP with the ML estimators clearly outperforms the DGP with the MM estimators. The results are consistent with Table 9, and the ML method works well, as expected.

5.2. Second Data Set

The second data set is the U.S.S. Grampus No. 4 main propulsion diesel engine failure data (Data Set No. 2). It contains the cumulative operating hours until significant maintenance events occur for one of the thirty engines on nine different submarines and has 57 observations. The data set was studied by Lee [24].
When data set 2 is modeled by a DGP with the gamma distribution, the ML estimates of the parameters $a$, $b$, $\alpha$, $\beta$, the MM estimates of the parameters $a$ and $b$, the corresponding $MSE^*$ and MPE values, and the estimates $\hat{\mu}$ and $\hat{\sigma}^2$ are given in Table 11.
It can be seen from Table 11 that the ML estimators have the smaller $MSE^*$ and MPE values and thus outperform the MM estimators. For this data set, the KS test statistic and the corresponding $p$ value are given in Table 12.
It is seen from Table 12 that the gamma distribution is appropriate for data set 2. In Figure 2, the observed and predicted cumulative failure times $S_k$ and $\hat{S}_k$ are plotted for both the ML and MM methods.
The performance of the ML and MM methods can also be compared through Figure 2: the DGP with the ML estimators clearly outperforms the DGP with the MM estimators. The results are consistent with Table 11, and the ML method works well, as expected.

6. Conclusion

In this study, the parameter estimation problem for a DGP is considered under the assumption that the first interarrival time has a gamma distribution. The estimators of the parameters $a$, $b$, $\alpha$, and $\beta$ are calculated by the ML method. The asymptotic joint distribution of the ML estimators is obtained, and their asymptotic unbiasedness and consistency properties are investigated. A Monte Carlo simulation study is performed to evaluate the small-sample performance of the estimators. It is seen that the bias and MSE values of the estimators decrease as the sample size $n$ increases; this expected behavior supports the fact that the ML estimators are asymptotically unbiased and consistent. Finally, two data sets are considered in the applications, and the ML and MM estimates together with the $MSE^*$, MPE, and KS values are calculated. As a result, it can be said that the gamma distribution is an appropriate model for these data sets, and the DGP with the ML estimators outperforms the DGP with the MM estimators.

References

  1. Altındag, O. Statistical evaluation of multiple process data in geometric processes with exponential failures. Hacettepe Journal of Mathematics and Statistics 2025, 54-2, 738–761. [Google Scholar] [CrossRef]
  2. Ascher, H.; Feingold, H. Repairable Systems Reliability.; Marcel Dekker: New York, USA, 1988. [Google Scholar]
  3. Aydogdu, H.; Senoglu, B.; Kara, M. Parameter estimation in geometric process with Weibull distribution. App. Math. Comput 2010, 217-6, 2657–2665. [Google Scholar] [CrossRef]
  4. Aydogdu, H.; Kara, M. Nonparametric Estimation in α-Series Processes. Comput. Stat. Data Anal 2012, 56-1, 190–201. [Google Scholar] [CrossRef]
  5. Aydogdu, H.; Altındag, O. Computation of the mean value and variance functions in geometric process. Journal of Statistical Computation and Simulation 2016, 86-5, 986–995. [Google Scholar] [CrossRef]
  6. Barndorff-Nielsen, O.E.; Cox, D.R. Inference and Asymptotics; Chapman & Hall: London, England, 1994. [Google Scholar]
  7. Braun, W. J.; Li, W.; Zhao, Y.P. Properties of the geometric and related processes. Nav. Res. Log 2005, 52, 607–616. [Google Scholar] [CrossRef]
  8. Chan, S.K.; Lam, Y.; Leung, Y.P. Statistical inference for geometric process with gamma distribution. Comput. Stat. Data Anal 2004, 47, 565–581. [Google Scholar] [CrossRef]
  9. Cohen, A.C.; Whitten, B.J. Parameter Estimation in Reliability and Life Span Models; Marcel Dekker: New York, USA, 1988. [Google Scholar]
  10. Cox, D.R.; Lewis, P.A.W. The Statistical Analysis of Series of Events; Methuen: London, England, 1966. [Google Scholar]
  11. Eroglu Inan, G. Parameter estimation for doubly geometric process with inverse Gaussian distribution and its applications. Fluctuation and Noise Letters 2025, 24-4, 1–26. [Google Scholar]
  12. Jasim, O.R.; Qazaz, Q.N.N.A. Nonparametric estimation in a doubly geometric stochastic process. Int. J. Agricult.Stat. Sci 2021, 17, 1–10. [Google Scholar]
  13. Kara, M.; Aydogdu, H.; Turksen, O. Statistical inference for geometric process with the inverse Gaussian distribution. J. Stat. Comput. Simul 2015, 85-16, 3206–3215. [Google Scholar] [CrossRef]
  14. Kara, M.; Senoglu, B.; Aydogdu, H. Statistical inference for α-series process with gamma distribution. Communication in Statistics: Theory and Methods 2017, 46-13, 6727–6736. [Google Scholar] [CrossRef]
  15. Kara, M.; Altındag, O.; Pekalp, M.H.; Aydogdu, H. Parameter estimation in α series process with lognormal distribution. Communication in Statistics: Theory and Methods 2019, 48-20, 4976–4998. [Google Scholar] [CrossRef]
  16. Kara, M.; Guven, G.; Senoglu, B.; Aydogdu, H. Estimator of the parameters of the gamma geometric process. J. Stat. Comput. Simul 2022, 92-12, 1–11. [Google Scholar]
  17. Lam, Y. Geometric processes and replacement problem. Acta Math Appl Sin 1988, 4, 366–377. [Google Scholar]
  18. Lam, Y. A note on the optimal replacement problem. Adv. Appl. Probab 1988, 20, 479–482. [Google Scholar]
  19. Lam, Y. Nonparametric inference for geometric process. Commun. Stat. Theory Methods 1992, 21, 2083–2105. [Google Scholar]
  20. Lam, Y.; Chan, S.K. Statistical inference for geometric process with lognormal distribution. Comput. Stat. Data Anal 1998, 27, 99–112. [Google Scholar] [CrossRef]
  21. Lam, Y.; Zhu, L.X.; Chan, J.S.K.; Liu, Q. Analysis of data from a series of events by a geometric process model. Acta Math Appl Sin 2004, 20, 263–282. [Google Scholar] [CrossRef]
  22. Lam, Y. The Geometric Processes and their Application; World Scientific: Singapore, 2007. [Google Scholar]
  23. Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: New Jersey, USA, 1984. [Google Scholar]
  24. Lee, L. Testing adequacy of the Weibull and log linear rate models for a Poisson process. Technometrics 1980, 22, 195–199. [Google Scholar] [CrossRef]
  25. Maguire, B.A.; Pearson, E.S.; Wynn, A.H.A. The time intervals between industrial accidents. Biometrika 1952, 39, 168–180. [Google Scholar] [CrossRef]
  26. Pekalp, M.H.; Aydogdu, H. An integral equation for the second moment function of a geometric process and its numerical solution. Naval Research Logistics 2018, 65 -2, 176–184. [Google Scholar] [CrossRef]
  27. Pekalp, M.H.; Aydogdu, H. An asymptotic solution of the integral equation for the second moment function in geometric processes. Journal of Computational and Applied Mathematics 2019, 353, 179–190. [Google Scholar] [CrossRef]
  28. Pekalp, M.H.; Aydogdu, H.; Turkman, K.F. Discriminating between some lifetime distributions in geometric counting processes. Communications in Statistics – Simulation and Computation 2019, 70, 715–737. [Google Scholar] [CrossRef]
  29. Pekalp, M.H.; Eroglu Inan, G.; Aydogdu, H. Statistical inference for a doubly geometric process with exponential distribution. Hacettepe Journal of Mathematics & Statistics 2021, 50-5, 1560–1571. [Google Scholar]
  30. Pekalp, M.H.; Aydogdu, H. Power series expansions for the probability distribution, mean value, and variance functions of a geometric process with gamma interarrival times. Journal of Computational and Applied Mathematics 2021, 388, 1–10. [Google Scholar] [CrossRef]
  31. Pekalp, M.H.; Eroglu Inan, G.; Aydogdu, H. Statistical inference for a doubly geometric process with Weibull interarrival times. Communications in Statistics-Simulation and Computation 2022, 51-6, 3428–3440. [Google Scholar] [CrossRef]
  32. Pekalp, M.H.; Aydogdu, H. Comparison of monotonic trend tests for some counting processes. Journal of Statistical Computation and Simulation 2023, 93-8, 1282–1296. [Google Scholar] [CrossRef]
  33. Pekalp, M.H.; Aydogdu, H. Parametric estimators of the mean value and variance functions in geometric process. Journal of Computational and Applied Mathematics 2024, 449. [Google Scholar] [CrossRef]
  34. Pekalp, M.H.; Eroglu Inan, G.; Aydogdu, H. Parameter estimation and testing for the doubly geometric process with lognormal distribution: Application to Bladder Cancer Patients’ Data. Asia-Pacific Journal of Operational Research 2025, 42-2, 2450014. [Google Scholar] [CrossRef]
  35. Saada, N.; Abdullah, M.R.; Hamaideh, A.; Romman, A.A. Data in the Middle East. Engineering, Technology & Applied Science Research 2019, 9, 4261–4264. [Google Scholar]
  36. Wu, S.; Scarf, P. Decline and repair, and covariate effects. European Journal of Operational Research 2015, 244, 219–226. [Google Scholar] [CrossRef]
  37. Wu, S. Doubly geometric process and applications. Journal of the Operational Research Society 2018, 69, 66–77. [Google Scholar] [CrossRef]
  38. Wu, S.; Wang, G. The semi-geometric process and some properties. IMA Journal of Management Mathematics 2018, 29, 229–245. [Google Scholar] [CrossRef]
  39. Wu, D.; Peng, R.; Wu, S. A review of the extensions of the geometric process, applications, and challenges. Quality and Reliability Engineering International 2020, 36, 436–446. [Google Scholar] [CrossRef]
  40. Yılmaz, M.; Kara, M.; Kara, H. Bayesian inference for geometric process with Lindley distribution and its applications. Fluctuation and Noise Letters 2022, 21, 1–18. [Google Scholar] [CrossRef]
Figure 1. The plots of S k   and S ^ k disaster times for the data set 1.
Figure 2. The plots of S k   and S ^ k failure times for the data set 2.
Table 1. The means, biases, and MSEs for the ML estimators of the parameters a, b, α and β (b = 2).
a | n | Method | â: Mean, Bias, MSE | b̂: Mean, Bias, MSE | α̂: Mean, Bias, MSE | β̂: Mean, Bias, MSE
0.95 30 ML 0.9453 -0.0047 0.0004 2.0543 0.0543 0.0408 1.0882 0.0882 0.0811 1.9564 -0.0436 0.3295
50 ML 0.9479 -0.0021 0.0001 2.0263 0.0263 0.0172 1.0508 0.0508 0.0390 1.9597 -0.0402 0.1965
100 ML 0.9490 -0.0009 0.00003 2.0119 0.0119 0.0069 1.0269 0.0269 0.0177 1.9806 -0.0194 0.1050
0.99 30 ML 0.9893 -0.0007 0.0002 2.0456 0.0456 0.0358 1.0912 0.0912 0.0910 1.9374 -0.0626 0.3348
50 ML 0.9891 -0.0009 0.00005 2.0254 0.0254 0.0178 1.0435 0.0435 0.0388 1.9677 -0.0328 0.1946
100 ML 0.9897 -0.0003 0.0000 2.0092 0.0092 0.0064 1.0291 0.0291 0.0178 1.9732 -0.0268 0.1045
1.01 30 ML 1.0089 -0.0011 0.0001 2.0524 0.0524 0.0384 1.0904 0.0904 0.0866 1.9523 -0.0477 0.3556
50 ML 1.0098 -0.0002 0.00003 2.0250 0.0250 0.0176 1.0381 0.0381 0.0338 1.9821 -0.0179 0.2039
100 ML 1.0100 0.0000 0.0000 2.0109 0.0109 0.0059 1.0247 0.0247 0.0182 1.9675 -0.0325 0.1067
1.05 30 ML 1.0519 0.0019 0.0001 2.0546 0.0546 0.0427 1.0958 0.0958 0.0829 1.9335 -0.0665 0.3369
50 ML 1.0510 0.0010 0.00003 2.0275 0.0275 0.0161 1.0434 0.0434 0.0356 1.9787 -0.0213 0.1948
100 ML 1.0505 0.0005 0.00001 2.0070 0.0070 0.0059 1.0335 0.0335 0.0199 1.9779 -0.0220 0.1102
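The Mean, Bias and MSE columns in the tables above are the usual Monte Carlo summaries of replicated ML estimates. A minimal sketch of how one such table cell is computed is given below; the array of replicated estimates is hypothetical, generated only for illustration.

```python
import numpy as np

def mc_summary(estimates, true_value):
    """Monte Carlo mean, bias and MSE of replicated parameter estimates."""
    estimates = np.asarray(estimates, dtype=float)
    mean = estimates.mean()
    bias = mean - true_value                       # Bias = E[est] - true value
    mse = np.mean((estimates - true_value) ** 2)   # MSE = Var + Bias^2
    return mean, bias, mse

# Hypothetical replicated ML estimates of a (true a = 0.95)
rng = np.random.default_rng(0)
a_hats = rng.normal(loc=0.95, scale=0.02, size=1000)
mean, bias, mse = mc_summary(a_hats, 0.95)
```

Note that the MSE decomposes exactly as the (population) variance of the replicated estimates plus the squared bias, which is why the small biases in the tables translate into MSEs dominated by variance.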
Table 2. The biases and the MSEs of the ML estimators of the parameters a, b, α and β when α = 1, β = 2, b = −2.
a  n  Method | â: Mean  Bias  MSE | b̂: Mean  Bias  MSE | α̂: Mean  Bias  MSE | β̂: Mean  Bias  MSE
0.95 30 ML 0.9466 -0.0034 0.0004 -1.9493 0.0507 0.0416 1.0897 0.0897 0.0806 1.9494 -0.0506 0.3582
50 ML 0.9477 -0.0023 0.0001 -1.9682 0.0318 0.0187 1.0467 0.0467 0.0377 1.9855 -0.0145 0.1941
100 ML 0.9488 -0.0012 0.00003 -1.9841 0.0159 0.0066 1.0302 0.0302 0.0184 1.9748 -0.0252 0.1005
0.99 30 ML 0.9876 -0.0024 0.0002 -1.9469 0.0531 0.0415 1.0932 0.0932 0.0791 1.9317 -0.0683 0.3503
50 ML 0.9895 -0.0005 0.00005 -1.9755 0.0245 0.0177 1.0566 0.0566 0.0400 1.9628 -0.0372 0.2049
100 ML 0.9896 -0.0004 0.0000 -1.9875 0.0125 0.0064 1.0241 0.0241 0.0167 1.9766 -0.0234 0.1015
1.01 30 ML 1.0092 -0.0008 0.0001 -1.9423 0.0577 0.0399 1.0916 0.0916 0.0778 1.9299 -0.0701 0.3468
50 ML 1.0097 -0.00002 0.00003 -1.9748 0.0252 0.0169 1.0602 0.0602 0.0412 1.9489 -0.0510 0.2024
100 ML 1.0100 0.0000 0.0000 -1.9891 0.0109 0.0061 1.0312 0.0312 0.0199 1.9739 -0.0261 0.1072
1.05 30 ML 1.0520 0.0020 0.0001 -1.9447 0.0553 0.0424 1.0830 0.0830 0.0786 1.9539 -0.0461 0.3423
50 ML 1.0512 0.0012 0.00003 -1.9720 0.0280 0.0158 1.0441 0.0441 0.0343 1.9759 -0.0241 0.1963
100 ML 1.0505 0.0005 0.00001 -1.9918 0.0082 0.0060 1.0234 0.0234 0.0173 1.9855 -0.0145 0.1068
Table 3. The biases and the MSEs of the ML estimators of the parameters a, b, α and β when α = 2, β = 1, b = −2.
a  n  Method | â: Mean  Bias  MSE | b̂: Mean  Bias  MSE | α̂: Mean  Bias  MSE | β̂: Mean  Bias  MSE
0.95 30 ML 0.9464 -0.0036 0.0002 -1.9544 0.0456 0.0340 2.1875 0.1875 0.3753 0.9710 -0.0290 0.0727
50 ML 0.9482 -0.0018 0.00008 -1.9775 0.0225 0.0151 2.1218 0.1218 0.2006 0.9778 -0.0222 0.0470
100 ML 0.9494 -0.0006 0.00002 -1.9934 0.0066 0.0054 2.0498 0.0498 0.0817 0.9929 -0.0071 0.0217
0.99 30 ML 0.9878 -0.0022 0.0001 -1.9502 0.0498 0.0355 2.2301 0.2301 0.4691 0.9625 -0.0375 0.0728
50 ML 0.9890 -0.0009 0.00003 -1.9705 0.0294 0.0165 2.1177 0.1177 0.2005 0.9781 -0.0219 0.0444
100 ML 0.9897 -0.0003 0.0000 -1.9904 0.0096 0.0059 2.0589 0.0589 0.1177 0.9878 -0.0122 0.0217
1.01 30 ML 1.0088 -0.0012 0.0001 -1.9495 0.0505 0.0373 2.2403 0.2403 0.4407 0.9486 -0.0514 0.0681
50 ML 1.0096 -0.0004 0.00001 -1.9720 0.0280 0.0167 2.1172 0.1172 0.1880 0.9807 -0.0193 0.0475
100 ML 1.0100 0.0000 0.0000 -1.9878 0.0122 0.0059 2.0526 0.0526 0.0742 0.9902 -0.0098 0.0212
1.05 30 ML 1.0514 0.0014 0.00007 -1.9497 0.0502 0.0344 2.1828 0.1828 0.3678 0.9771 -0.0229 0.0811
50 ML 1.0511 0.0011 0.00002 -1.9732 0.0268 0.0148 2.1393 0.1393 0.1947 0.9695 -0.0305 0.0411
100 ML 1.0507 0.0007 0.00001 -1.9884 0.0116 0.0055 2.0508 0.0508 0.0818 0.9930 -0.0070 0.0237
Table 4. The biases and the MSEs of the ML estimators of the parameters a, b, α and β when α = 2, β = 1, b = 2.
a  n  Method | â: Mean  Bias  MSE | b̂: Mean  Bias  MSE | α̂: Mean  Bias  MSE | β̂: Mean  Bias  MSE
0.95 30 ML 0.9463 -0.0037 0.0003 2.0438 0.0438 0.0372 2.2241 0.2241 0.4240 0.9625 -0.0375 0.0728
50 ML 0.9478 -0.0021 0.0001 2.0300 0.0300 0.0166 2.0883 0.0883 0.1559 0.9896 -0.0104 0.0420
100 ML 0.9491 -0.0008 0.00002 2.0121 0.0121 0.0062 2.0501 0.0501 0.0763 0.9896 -0.0037 0.0221
0.99 30 ML 0.9880 -0.0019 0.0001 2.0396 0.0396 0.0340 2.1938 0.1938 0.3321 0.9612 -0.0387 0.0701
50 ML 0.9891 -0.0008 0.00003 2.0251 0.0251 0.0160 2.1193 0.1193 0.1822 0.9746 -0.0254 0.0417
100 ML 0.9897 -0.0003 0.0000 2.0075 0.0075 0.0060 2.0436 0.0436 0.0766 0.9992 -0.0008 0.0222
1.01 30 ML 1.0089 -0.0011 0.0001 2.0549 0.0549 0.0363 2.2212 0.2212 0.4298 0.9662 -0.0337 0.0745
50 ML 1.0095 -0.0005 0.00001 2.0299 0.0299 0.0152 2.1384 0.1384 0.1837 0.9711 -0.0289 0.0436
100 ML 1.0100 0.0000 0.0000 2.0077 0.0077 0.0053 2.0680 0.0680 0.0790 0.9864 -0.0136 0.0212
1.05 30 ML 1.0515 0.0015 0.0000 2.0564 0.0564 0.0339 2.1952 0.1952 0.3772 0.9706 -0.0293 0.0736
50 ML 1.0511 0.0011 0.0000 2.0268 0.0268 0.0149 2.0947 0.0947 0.1765 0.9919 -0.0081 0.0464
100 ML 1.0506 0.0006 0.0000 2.0136 0.0136 0.0049 2.0499 0.0499 0.0744 0.9873 -0.0126 0.0215
Table 5. The simulated variances of the ML estimators and the corresponding MVB values when α = 1, β = 2, b = 2.
a  n | Simulated variances: Var(â)  Var(b̂)  Var(α̂)  Var(β̂) | MVB: Var(â)  Var(b̂)  Var(α̂)  Var(β̂)
0.95 30 0.0004 0.0383 0.0746 0.3333 0.0005 0.0640 0.0000 0.1207
50 0.0001 0.0157 0.0734 0.3323 0.0001 0.0304 0.0000 0.1151
100 0.00003 0.0065 0.0180 0.1021 0.00005 0.0118 0.0000 0.0374
0.99 30 0.0002 0.0378 0.0786 0.3548 0.0002 0.0621 0.0000 0.1164
50 0.00005 0.0166 0.0726 0.3410 0.00006 0.0291 0.0000 0.1123
100 0.00000 0.0060 0.0180 0.1064 0.00000 0.0111 0.0000 0.0382
1.01 30 0.0001 0.0361 0.0756 0.3217 0.00017 0.0610 0.0000 0.1132
50 0.00003 0.0170 0.0745 0.3309 0.00003 0.0284 0.0000 0.1125
100 0.00000 0.0060 0.0152 0.0978 0.00000 0.0107 0.0000 0.0381
1.05 30 0.00016 0.0348 0.0832 0.3355 0.00013 0.0587 0.0000 0.1125
50 0.00003 0.0170 0.0752 0.3357 0.00003 0.0284 0.0000 0.1166
100 0.00001 0.0060 0.0172 0.1036 0.00001 0.0097 0.0000 0.0387
Table 6. The simulated variances of the ML estimators and the corresponding MVB values when α = 1, β = 2, b = 2.
a  n | Simulated variances: Var(â)  Var(b̂)  Var(α̂)  Var(β̂) | MVB: Var(â)  Var(b̂)  Var(α̂)  Var(β̂)
0.95 30 0.0020 0.0000 0.0671 0.3346 0.0043 0.0446 0.0000 0.1195
50 0.0008 0.00000 0.0432 0.2184 0.0017 0.0163 0.0000 0.0725
100 0.0002 0.00000 0.0172 0.0972 0.0003 0.0030 0.0000 0.0390
0.99 30 0.0034 0.0000 0.0770 0.3257 0.0040 0.0571 0.0000 0.1144
50 0.0020 0.0000 0.0403 0.2071 0.0018 0.0259 0.0000 0.0743
100 0.0011 0.0000 0.0185 0.0997 0.0005 0.0078 0.0000 0.0386
1.01 30 0.0047 0.0000 0.0639 0.3244 0.0033 0.0616 0.0000 0.1151
50 0.0029 0.0000 0.0398 0.1988 0.0014 0.0300 0.0000 0.0722
100 0.0022 0.0000 0.0174 0.1030 0.0003 0.0109 0.0000 0.0377
1.05 30 0.0083 0.0000 0.0699 0.3282 0.0014 0.0636 0.0000 0.1173
50 0.0065 0.0000 0.0348 0.1866 0.0002 0.0309 0.0000 0.0741
100 0.0055 0.0000 0.0178 0.1046 0.0000 0.0106 0.0000 0.0384
Table 7. The simulated variances of the ML estimators and the corresponding MVB values when α = 2, β = 1, b = 2.
a  n | Simulated variances: Var(â)  Var(b̂)  Var(α̂)  Var(β̂) | MVB: Var(â)  Var(b̂)  Var(α̂)  Var(β̂)
0.95 30 0.0002 0.0359 0.3572 0.0803 0.0002 0.0320 0.0279 0.0197
50 0.00009 0.0161 0.1798 0.0455 0.00008 0.0152 0.0170 0.0127
100 0.00002 0.0057 0.0758 0.0221 0.00002 0.0059 0.0086 0.0067
0.99 30 0.0001 0.0359 0.3151 0.0678 0.0001 0.0310 0.0280 0.0196
50 0.00003 0.0161 0.1805 0.0451 0.00003 0.0145 0.0170 0.0125
100 0.0000 0.0057 0.0786 0.0221 0.0000 0.0055 0.0086 0.0066
1.01 30 0.00009 0.0334 0.3484 0.0745 0.00008 0.0305 0.0281 0.0203
50 0.00001 0.0146 0.1476 0.0435 0.00001 0.0142 0.0172 0.0133
100 0.00000 0.0053 0.0835 0.0236 0.00000 0.0053 0.0086 0.0068
1.05 30 0.00007 0.0309 0.3299 0.0723 0.00006 0.0294 0.0280 0.0199
50 0.00002 0.0141 0.1613 0.0431 0.00002 0.0134 0.0171 0.0131
100 0.00001 0.0051 0.0791 0.0219 0.0000 0.0048 0.0086 0.0067
Table 8. The simulated variances of the ML estimators and the corresponding MVB values when α = 2, β = 1, b = 2.
a  n | Simulated variances: Var(â)  Var(b̂)  Var(α̂)  Var(β̂) | MVB: Var(â)  Var(b̂)  Var(α̂)  Var(β̂)
0.95 30 0.00001 0.0051 0.3157 0.0783 0.0000 0.0048 0.0281 0.0201
50 0.0004 0.0000 0.1481 0.0422 0.0009 0.0092 0.0172 0.0134
100 0.0001 0.0000 0.0786 0.0224 0.0002 0.0017 0.0086 0.0066
0.99 30 0.0018 0.0000 0.3117 0.0686 0.0017 0.0303 0.0280 0.0195
50 0.0010 0.0000 0.1762 0.0456 0.0008 0.0143 0.0171 0.0130
100 0.00051 0.0000 0.0820 0.0233 0.0002 0.0046 0.0086 0.0068
1.01 30 0.0024 0.0000 0.3546 0.0715 0.0012 0.0318 0.0279 0.0191
50 0.0016 0.0000 0.1675 0.0421 0.0005 0.0157 0.0171 0.0129
100 0.0011 0.0000 0.0716 0.0217 0.0001 0.0060 0.0086 0.0068
1.05 30 0.0047 0.0000 0.2926 0.0695 0.0002 0.0307 0.0281 0.0201
50 0.0037 0.0000 0.1730 0.0456 0.00002 0.0144 0.0170 0.0128
100 0.0030 0.0000 0.0724 0.0209 0.00002 0.0044 0.0086 0.0068
Table 9. The results of the investigated methods for data set 1 under DGP.
Method  â  b̂  α̂  β̂  MSE  MPE  μ̂  σ̂²
ML 0.988 0.095 0.737 152.41 56634.3 0.46042 112.403 17131.470
MM 0.977 0.351 - - 86021.6 1.4132 276.888 267361.57
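The MSE and MPE columns in Table 9 measure how closely the fitted event times Ŝ_k track the observed times S_k for each method. A hedged sketch of this comparison is given below; the observed and fitted arrays are hypothetical, and MPE is assumed here to mean mean absolute percentage error (the paper's exact definition may differ).

```python
import numpy as np

def fit_errors(observed, fitted):
    """MSE and mean (absolute) percentage error of fitted vs observed times."""
    observed = np.asarray(observed, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    mse = np.mean((observed - fitted) ** 2)
    mpe = np.mean(np.abs(observed - fitted) / observed)  # assumed definition
    return mse, mpe

# Hypothetical cumulative event times (observed vs model-fitted)
s = np.array([10.0, 25.0, 47.0, 80.0])
s_hat = np.array([12.0, 24.0, 50.0, 78.0])
mse, mpe = fit_errors(s, s_hat)
```

In Table 9 both criteria favour ML over MM (56634.3 vs 86021.6 for MSE, 0.46 vs 1.41 for MPE), which supports the conclusion that the ML estimators are more efficient.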
Table 10. The KS test statistic value and p-value for data set 1.
KS test statistic  p-value
0.0526  0.8931
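The KS results in Tables 10 and 12 assess the goodness of fit of the gamma distribution with the estimated parameters. A minimal sketch using `scipy.stats.kstest` follows; the data array is hypothetical (drawn from the fitted gamma for illustration), whereas in the paper the test would be applied to the suitably DGP-adjusted interarrival times. The shape and scale values are the ML estimates from Table 9, read as SciPy's (shape, loc, scale) parameterization with loc = 0.

```python
import numpy as np
from scipy import stats

alpha_hat, beta_hat = 0.737, 152.41  # ML estimates of (shape, scale), Table 9

# Hypothetical adjusted interarrival times; here simply simulated from the
# fitted gamma so that the test should not reject
rng = np.random.default_rng(2)
x = rng.gamma(shape=alpha_hat, scale=beta_hat, size=40)

# One-sample KS test against Gamma(alpha_hat, loc=0, scale=beta_hat)
ks_stat, p_value = stats.kstest(x, "gamma", args=(alpha_hat, 0, beta_hat))
```

A large p-value, as in Table 10 (0.8931), indicates no evidence against the fitted gamma model.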
Table 11. The results of the investigated methods for data set 2 under DGP.
Method  â  b̂  α̂  β̂  MSE  MPE  μ̂  σ̂²
ML 1.0089 0.0283 0.9187 419.222 66073.03 0.5495 385.1392 161458.84
MM 1.0349 0.0155 - - 92589.38 1.2610 788.466 836326.4
Table 12. The KS test statistic value and p-value for data set 2.
KS test statistic  p-value
0.0737  0.8941
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.