Preprint Article (this version is not peer-reviewed)

An Adaptive Test for Two-Sample Accelerated Life Models

Submitted: 24 September 2025; Posted: 25 September 2025


Abstract
We propose an adaptive testing method that is robust for two-sample scale models with censored observations. Motivated by Uno et al. (2015), we propose simulation-based procedures for checking model validity that exhibit robust performance across a broad range of alternative hypotheses. To evaluate the behavior of the proposed test, we conduct comprehensive simulations for several widely used survival distributions. Simulation results indicate that the test performs well in detecting scale differences between two samples, demonstrating adequate power. The proposed procedures are illustrated using a real-world dataset.

1. Introduction

In clinical trials, comparing the lifetimes of different groups, such as treatment and control groups, is crucial for evaluating therapeutic efficacy. The proportional hazards model ([2]), also referred to as the Cox model, has been widely adopted for such studies, particularly when data are censored. However, the proportional hazards assumption underlying the Cox model is restrictive and often difficult to satisfy in practice. An alternative approach is the accelerated life model ([5]), which models the direct relationship between lifetime and covariates. In this model, covariates act as acceleration or deceleration factors on the lifetime, making parameter interpretation more straightforward. For example, taking the logarithm of survival time yields a linear relationship between log-lifetime and the covariates. Despite this interpretational advantage, the accelerated life model presents obstacles in variance estimation because the asymptotic variance involves unknown densities, especially when censored data are present. The difficulty of estimating the variance has hindered the development of model-checking procedures, necessitating the use of non-parametric methods. To address this issue, many authors have studied this model, including [10,13,15], among others.
To assess differences in variability between two groups through the cumulative hazard function in the accelerated life model, Yang ([15]) employed a weighting scheme that eliminates the density function from the variance expression through the hazard function. [15] further developed a corresponding lack-of-fit test to validate the two-sample accelerated life model, utilizing methodologies developed by [4]. The underlying idea is that under $H_0$ (correct model specification) the scale estimators should agree, while under $H_1$ (model misspecification) these estimates diverge significantly. Thus, the test construction requires deriving multiple parameter estimates through the estimating function, which substantially increases computational complexity.
In addition, Yang’s method presents several practical challenges. First, it requires selecting an appropriate weight function that may vary depending on specific alternatives, a common issue in dealing with survival data. Careful selection of the weight function is crucial to obtain accurate results, and this is not an easy process. Although various forms of weight functions have been proposed in the literature, including those by [11], the a priori selection of the optimal weight function remains a challenge. Second, both the derivation of the statistic and its practical implementation are difficult due to the complexity of its mathematical expression. These difficulties can limit the method’s applicability in practice.
To address the limitations inherent in existing testing methods for accelerated life models, especially those associated with Yang’s method, this paper presents a novel procedure that dynamically determines optimal weight functions without requiring prespecified alternative hypotheses. Following [12], the proposed method adaptively determines optimal weighting schemes based on observed differences between the cumulative hazard functions of the two samples. Unlike conventional testing procedures that depend heavily on assumed alternative hypotheses, this method offers enhanced flexibility and broader applicability. The proposed test statistic overcomes a limitation of existing weighted testing methods by not relying on specific alternatives, making it well-suited for developing robust testing procedures for two-sample accelerated life models.
The remainder of this paper proceeds as follows. Section 2 reviews Yang’s estimating function for accelerated life models. Sections 3 and 4 develop the proposed testing procedures. Section 5 presents numerical studies, including simulation results and a real-data example. Finally, Section 6 provides concluding remarks and directions for future research.

2. Estimating Function

Consider two censored samples with sample sizes $n_1$ and $n_2$. Let $X_{ij}$, $i = 1, 2$, $j = 1, \dots, n_i$, be independent, positive lifetimes with absolutely continuous distributions $F_i$ that are also independent of the corresponding censoring variables $C_{ij}$. The observed data are $T_{ij} = \min(X_{ij}, C_{ij})$ and $\Delta_{ij} = I(X_{ij} \le C_{ij})$, where $I(\cdot)$ is the indicator function. Let $f_i(t)$ be the density of $F_i(t)$. Then the hazard function $\lambda_i(t)$ and the cumulative hazard function $\Lambda_i(t)$ are defined as $\lambda_i(t) = f_i(t)/\{1 - F_i(t)\}$ and $\Lambda_i(t) = \int_0^t \lambda_i(u)\,du = -\log\{1 - F_i(t)\}$, respectively.
We are interested in testing the null hypothesis $H_0$ that the two samples are related by a scale factor. To this end, consider the accelerated life model, which assumes that $\Lambda_1$ and $\Lambda_2$ are related by
$$\Lambda_2(t) = \Lambda_1(t/\theta)$$
for some $\theta > 0$, but is otherwise unspecified. Equivalently, the random variables $X_{1j}$, $j = 1, \dots, n_1$, have the same distribution as the random variables $X_{2j}/\theta$, $j = 1, \dots, n_2$. Therefore, $\theta$ serves as the scale factor for acceleration or deceleration, depending on whether $\theta > 1$ or $\theta < 1$.
We assess the validity of the two-sample accelerated life model using the available data $T_{ij}$ and $\Delta_{ij}$, assuming the censoring distributions are unknown. That is, we test the hypothesis $H_0$ that the accelerated life model holds. Let $N_{ij}(t) = I(T_{ij} \le t, \Delta_{ij} = 1)$, $Y_{ij}(t) = I(T_{ij} \ge t)$, $N_i(t) = \sum_j N_{ij}(t)$, and $Y_i(t) = \sum_j Y_{ij}(t)$. The Nelson-Aalen estimator ([1]) of $\Lambda_i(\cdot)$ is
$$\hat{\Lambda}_i(t) = \int_0^{\min(t,\,\max_j T_{ij})} \frac{dN_i(s)}{Y_i(s)}.$$
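In discrete form, the estimator accumulates jumps $d/Y$ at the observed event times. A minimal sketch in Python (NumPy assumed; the function name is illustrative, not from the paper):

```python
import numpy as np

def nelson_aalen(times, events, t):
    """Nelson-Aalen estimate of the cumulative hazard at time t.

    times  : observed times T_ij = min(X_ij, C_ij)
    events : indicators Delta_ij (1 = event observed, 0 = censored)
    t      : evaluation time, truncated at the largest observed time
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    t_eff = min(t, times.max())
    total = 0.0
    # sum dN(s)/Y(s) over distinct observed event times s <= t_eff
    for s in np.unique(times[(events == 1) & (times <= t_eff)]):
        d = np.sum((times == s) & (events == 1))  # events at s
        y = np.sum(times >= s)                    # at-risk count at s
        total += d / y
    return total
```

For example, with observed times (1, 2, 3, 4) and the third observation censored, the estimate at $t = 2.5$ is $1/4 + 1/3 \approx 0.583$.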
Let $n = n_1 + n_2$. [15] defined a scale estimator $\hat{\theta}$ for the accelerated life model above as the solution to $S_n(a) = 0$, for
$$S_n(a) = n^{1/2} \int_u^v W_n(t; a)\{\hat{\Lambda}_1(t) - \hat{\Lambda}_2(at)\}\,dt, \qquad (1)$$
where $W_n(t; a)$ is a data-dependent weight function. [15] showed that choosing the weight function $W_n(t; a) = \eta\{\hat{F}(t)\}/t$ for some function $\eta$, where $\hat{F}$ is the product-limit estimator ([6]), eliminates the unknown density from the variance expression, thus simplifying the variance estimation procedures. Furthermore, [15] showed that $\hat{\theta}$ is a consistent estimator of $\theta$ and is asymptotically normal under $H_0$. It is important to note that a finite integration range $[u, v]$ containing sufficient data is used instead of $(0, \infty)$. This adjustment addresses the potential for explosive behavior near $0$ or $\infty$, a challenge mitigated by truncating the integration range. We use this estimating function to develop testing procedures for the accelerated life model.
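The scale estimation step can be sketched as follows, assuming the simple weight $W_n(t; a) = 1/t$ (i.e., $\eta \equiv 1$), a trapezoid rule on a grid, and bisection for the root; the truncation limits and the bisection bracket are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def na_curve(times, events, grid):
    """Nelson-Aalen cumulative hazard evaluated at each point of `grid`."""
    out = np.zeros(len(grid))
    for s in np.unique(times[events == 1]):
        d = np.sum((times == s) & (events == 1))  # events at s
        y = np.sum(times >= s)                    # at-risk count at s
        out[grid >= s] += d / y
    return out

def S_n(a, t1, d1, t2, d2, u, v, m=300):
    """Estimating function: integrated weighted hazard difference on [u, v]."""
    grid = np.linspace(u, v, m)
    diff = na_curve(t1, d1, grid) - na_curve(t2, d2, a * grid)
    f = diff / grid                               # weight W = 1/t
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))  # trapezoid rule

def theta_hat(t1, d1, t2, d2, u, v, lo=0.1, hi=10.0, iters=50):
    """Solve S_n(a) = 0 by bisection, assuming a sign change on [lo, hi]."""
    f_lo = S_n(lo, t1, d1, t2, d2, u, v)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        f_mid = S_n(mid, t1, d1, t2, d2, u, v)
        if f_mid * f_lo > 0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With uncensored exponential samples in which the second group's lifetimes are doubled, the root recovers a value near $\theta = 2$.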

3. Test Statistic

Let $D(t) = \Lambda_1(t) - \Lambda_2(\theta t)$. Our primary aim is to evaluate the null hypothesis that the two samples are related by a scale factor $\theta$, consistent with the accelerated life model. Testing the null hypothesis $H_0$ is equivalent to verifying whether $D(t) = 0$ for all $t$ in $[u, v]$. To assess this, let $\hat{D}(t) = \hat{\Lambda}_1(t) - \hat{\Lambda}_2(\theta t)$ be the empirical counterpart, where $\hat{\Lambda}_i(\cdot)$ is the Nelson-Aalen estimator of the cumulative hazard function $\Lambda_i(\cdot)$. To standardize the difference at each time point $t$, define
$$Z(t) = \frac{\hat{D}(t)}{\hat{\sigma}_{NA}(t)},$$
where $\hat{\sigma}_{NA}^2(t)$ is the estimated variance which, under independence of the two samples, is given by
$$\hat{\sigma}_{NA}^2(t) = \widehat{\mathrm{Var}}\{\hat{\Lambda}_1(t)\} + \widehat{\mathrm{Var}}\{\hat{\Lambda}_2(\theta t)\}.$$
Specifically,
$$\widehat{\mathrm{Var}}\{\hat{\Lambda}_i(\theta^{i-1} t)\} = \int_0^t \frac{dN_i(u; \theta^{i-1})}{Y_i^2(u; \theta^{i-1})}, \qquad i = 1, 2.$$
Following [12], and in association with the estimating function in (1), consider the following statistic, an integrated standardized cumulative hazard difference over the integration range:
$$S_Z = \int_u^v W_n(t) Z(t)\,dt,$$
where $W_n(t)$ is a data-dependent weight function. The test statistic $S_Z$ is expected to perform well when the weight function $W_n(t)$ is proportional to the expected value of $Z(t)$ under $H_1$, as mentioned in [12], because a larger $Z(t)$ then receives a larger weight $W_n(t)$. Thus, one potential class of test statistics can be expressed as
$$S_Z = \int_u^v Z^2(t)\,dt,$$
which has a right-tailed distribution under $H_0$. However, this chi-squared-type statistic might underperform for certain alternatives due to the extended right tail of its null distribution. This issue can be resolved by choosing a weight function $W_n(t)$ that produces a statistic with a short right tail under $H_0$ and a pronounced right tail under $H_1$; refer to [12] for more details. Based on [12] and [14], a convenient choice is $W_n(t; c) = \max\{Z(t), c\}$, where $c$ is a specified threshold value, such as 1.65. Let
$$S_Z(c) = \int_u^v W_n(t; c) Z(t)\,dt. \qquad (2)$$
Note that $S_Z(c)$ would not show a long right tail under $H_0$ for a fixed value of $c$, such as 1.65; in contrast, under $H_1$, $S_Z(c)$ would yield a large value. Selecting an appropriate value of $c$ therefore depends on the circumstances and can be challenging. To test the two-sample scale model, we adopt the adaptive approach of [12] to select the value of $c$, which is based on developing a process with respect to $c$. To construct a robust testing procedure for the two-sample accelerated life model, we incorporate this process into the approximation procedures presented in Section 4.
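The standardized process $Z(t)$ and the thresholded statistic $S_Z(c)$ can be sketched as follows; the `na_and_var` helper (an illustrative name) computes the Nelson-Aalen estimate and its variance on a grid, and the small constant guards the division before the first event:

```python
import numpy as np

def na_and_var(times, events, grid):
    """Nelson-Aalen estimate and its variance estimate (sum of d/Y^2) on a grid."""
    lam, var = np.zeros(len(grid)), np.zeros(len(grid))
    for s in np.unique(times[events == 1]):
        d = np.sum((times == s) & (events == 1))  # events at s
        y = np.sum(times >= s)                    # at-risk count at s
        lam[grid >= s] += d / y
        var[grid >= s] += d / y**2
    return lam, var

def S_Z(c, t1, d1, t2, d2, theta, u, v, m=300):
    """S_Z(c): integral over [u, v] of max{Z(t), c} Z(t) dt."""
    grid = np.linspace(u, v, m)
    l1, v1 = na_and_var(t1, d1, grid)
    l2, v2 = na_and_var(t2, d2, theta * grid)     # second sample on rescaled time
    z = (l1 - l2) / np.sqrt(v1 + v2 + 1e-12)      # standardized difference Z(t)
    f = np.maximum(z, c) * z                      # thresholded weight max{Z, c}
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))  # trapezoid rule
```

When the two samples are identical and $\theta = 1$, $Z(t)$ vanishes and $S_Z(c) = 0$, as expected under $H_0$.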

4. Approximation Procedures

Let $N_{2j}(t; \theta) = I(T_{2j}/\theta \le t, \Delta_{2j} = 1)$ and $Y_{2j}(t; \theta) = I(T_{2j}/\theta \ge t)$ for $j = 1, \dots, n_2$. By standard martingale theory, the processes
$$M_{1j}(t) = N_{1j}(t) - \int_0^t Y_{1j}(s)\,d\Lambda_1(s), \qquad M_{2j}(t; \theta) = N_{2j}(t; \theta) - \int_0^t Y_{2j}(s; \theta)\,d\Lambda_2(\theta s)$$
are mean-zero martingales ([3]). From these, we have
$$\hat{\Lambda}_1(t) - \Lambda_1(t) = \int_0^t \frac{dM_1(s)}{Y_1(s)}, \qquad \hat{\Lambda}_2(\theta t) - \Lambda_2(\theta t) = \int_0^t \frac{dM_2(s; \theta)}{Y_2(s; \theta)},$$
where $M_1(s) = \sum_{j=1}^{n_1} M_{1j}(s)$, $M_2(s; \theta) = \sum_{j=1}^{n_2} M_{2j}(s; \theta)$, $Y_1(s) = \sum_{j=1}^{n_1} Y_{1j}(s)$, and $Y_2(s; \theta) = \sum_{j=1}^{n_2} Y_{2j}(s; \theta)$. Let
$$S_Z(c, \theta) = \int_u^v W_n(t; c) Z_M(t)\,dt,$$
where $Z_M(t) = \{\hat{\Lambda}_1(t) - \hat{\Lambda}_2(\theta t)\}/\hat{\sigma}_{NA}(t)$. Note that the distribution of $S_Z(c, \theta)$ is determined by the $M_{1j}$ and $M_{2j}$, since it is a sum of integrated martingales. However, this distribution remains analytically intractable. One approach to overcoming this challenge is to substitute the $M_{ij}$ with suitable approximations. Motivated by this, we derive an approximation process with respect to $c$ that is formulated in terms of the martingales $M_{1j}$ and $M_{2j}$. This serves as a fundamental component for developing robust testing procedures. A key property of the martingales $M_{ij}(t)$ is that they have zero mean, $E\{M_{ij}(t)\} = 0$, with variance $\mathrm{Var}\{M_{ij}(t)\} = E\{N_{ij}(t)\}$; see [9] for details. This approximation framework allows us to establish asymptotic properties and optimize the power of the proposed tests under various configurations.
We now derive an approximation process $S_Z^*(c, \hat{\theta})$ from $S_Z(c, \theta)$ evaluated at $\hat{\theta}$. As mentioned in [8] and [9], the distribution of $S_Z(c, \theta)$ can be approximated by the normal multiplier method. Let $\{G_{ij} : i = 1, 2,\ j = 1, \dots, n_i\}$ be independent random samples from the standard normal distribution. By the martingale properties of $M_{ij}(t)$ specified above, $M_{ij}(t)$ can be replaced by $N_{ij}(t) G_{ij}$. Applying this substitution to $S_Z(c, \theta)$ yields the approximation process $S_Z^*(c, \hat{\theta})$, where $\hat{\theta}$ is the scale estimator defined in Section 2. Specifically,
$$S_Z^*(c, \hat{\theta}) = \int_u^v W_n^*(t; c) Z_M^*(t; \hat{\theta})\,dt, \qquad (3)$$
where
$$W_n^*(t; c) = \max\{Z_M^*(t; \hat{\theta}), c\}, \qquad Z_M^*(t; \hat{\theta}) = \{\hat{\Lambda}_1^*(t) - \hat{\Lambda}_2^*(\hat{\theta} t)\}/\hat{\sigma}_{NA}(t),$$
with
$$\hat{\Lambda}_1^*(t) - \hat{\Lambda}_2^*(\hat{\theta} t) = \sum_{i=1}^{2} \sum_{j=1}^{n_i} \frac{(-1)^{i-1} G_{ij}\,\Delta_{ij}\,I(u \le T_{ij}/\hat{\theta}^{\,i-1} \le t)}{Y_1^{\,2-i}(T_{1j})\,Y_2^{\,i-1}(T_{2j}; \hat{\theta})}.$$
Note that $S_Z^*(c, \hat{\theta})$ in (3) can be expressed as
$$S_Z^*(c, \hat{\theta}) = \sum_{i=1}^{2} \sum_{j=1}^{n_i} \frac{(-1)^{i-1} G_{ij}\,\Delta_{ij}\,I(u \le T_{ij}/\hat{\theta}^{\,i-1} \le v)\left\{\int_{T_{ij}/\hat{\theta}^{\,i-1}}^{v} W_n^*(s; c)\,ds\right\}}{Y_1^{\,2-i}(T_{1j})\,Y_2^{\,i-1}(T_{2j}; \hat{\theta})}.$$
Following [9], the conditional distribution of the process $S_Z^*(c, \hat{\theta})$ given the data is asymptotically equivalent to the unconditional distribution of the process $S_Z(c, \hat{\theta})$. The only random components in $S_Z^*(c, \hat{\theta})$ are the $G_{ij}$, so this formulation facilitates computation in numerical studies.
To validate the two-sample accelerated life model, we obtain a large number of realizations of $S_Z^*(c, \hat{\theta})$ by generating the $G_{ij}$, creating a reference distribution against which the test statistic can be compared. Note that the $G_{ij}$ are generated independently of the observed data. This approach enables robust assessment of the model’s validity under various conditions.
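A single realization of the multiplier process can be sketched as follows: each observed event contributes a jump $G_j \Delta_j / Y(T_j)$, mirroring the substitution of $M_{ij}$ by $N_{ij} G_{ij}$. Here `sigma` is the precomputed $\hat{\sigma}_{NA}(t)$ evaluated on the grid, and the function names are illustrative:

```python
import numpy as np

def na_star(times, events, grid, G):
    """Multiplier analogue of Lhat - L: sum of G_j * Delta_j * I(T_j <= t) / Y(T_j)."""
    out = np.zeros(len(grid))
    for j in np.where(events == 1)[0]:
        y = np.sum(times >= times[j])             # at-risk count at T_j
        out[grid >= times[j]] += G[j] / y
    return out

def S_Z_star(c, t1, d1, t2, d2, theta, sigma, grid, rng):
    """One realization of S_Z*(c, theta_hat) with fresh N(0,1) multipliers G_ij."""
    G1 = rng.standard_normal(len(t1))
    G2 = rng.standard_normal(len(t2))
    z = (na_star(t1, d1, grid, G1) - na_star(t2, d2, theta * grid, G2)) / sigma
    f = np.maximum(z, c) * z
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))  # trapezoid rule
```

Calling this repeatedly (e.g. a few thousand times) with independent multiplier draws yields the reference distribution described above.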
The procedure consists of two stages. First, we choose an optimal value of $c$ from the approximation to the null distribution of the process $S_Z(c, \hat{\theta})$ with respect to $c$. Although an ideal test would consider the entire range of $c$ over $(0, \infty)$, we deliberately restrict the range to $[u, v]$ to circumvent computational challenges and theoretical complexities. In practice, we select $[u, v]$ to include all observed data. Second, model validation proceeds through analysis of the p-value corresponding to the adaptively selected $c$, providing a statistical foundation for assessing the model’s empirical fit.
Let $s_Z(c, \hat{\theta})$ be the observed value of $S_Z(c, \hat{\theta})$. We calculate the p-value $p(c)$ corresponding to $s_Z(c, \hat{\theta})$ from the approximate distribution of $S_Z(c, \hat{\theta})$ under $H_0$. The procedure then generates the null distribution of $P_c = \min\{P(c) : c \in [u, v]\}$, where $P(c)$ is the random counterpart of $p(c)$. The distribution of $P_c$ is based on the minimum value of $P(c)$, as this represents the greatest statistical significance, thereby offering the most reliable criterion for evaluation. We then adaptively identify the value of $c$ that yields the minimum p-value, i.e., $p_c = \min\{p(c) : c \in [u, v]\}$. Among the evaluated p-values, smaller values provide stronger evidence against the null hypothesis in favor of the alternative.
By arguments in [12], $Z_M(\cdot)$ converges asymptotically to a zero-mean process, which we denote by $Z_G(\cdot)$. Thus, as demonstrated in [12], the process in $c$ given by $\int_u^v \max\{Z_M(t), c\} Z_M(t)\,dt$ converges to $\int_u^v \max\{Z_G(t), c\} Z_G(t)\,dt$, and $P(c)$ converges to the survival function of this limiting integral evaluated at the observed value. For further details, refer to [12]; similar arguments were previously presented in [8].
We utilize simulation-based approaches to estimate the asymptotic distribution of the process. To approximate the distribution of $S_Z(c)$ under $H_0$, we generate multiple realizations of $S_Z^*(c, \hat{\theta})$ by simulating the normal random samples $G_{ij}$, the only source of randomness in $S_Z^*(c, \hat{\theta})$. From a collection of realizations of $S_Z^*(c, \hat{\theta})$, we obtain the corresponding p-value, denoted by $P^*(c)$. Subsequently, the null distribution of $P_c$ is approximated from multiple sets of realizations, from which $P_c^* = \min\{P^*(c) : c \in [u, v]\}$ is derived. This enables empirical estimation of the null distribution of $P_c$ using $P_c^*$. Notably, the asymptotic equivalence between the conditional distribution of $S_Z^*(c, \hat{\theta})$ given the data and the unconditional distribution of $S_Z(c, \hat{\theta})$ justifies the simulation approach as a robust method for testing. Finally, as in [9] and [12], the p-value of the test, given by $P(P_c < p_c)$, can be empirically approximated by $P(P_c^* < p_c)$.
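Given a matrix of null realizations over a grid of $c$ values, the two-stage calibration reduces to rank computations. A sketch under the assumption that larger values of the statistic are more extreme:

```python
import numpy as np

def adaptive_pvalue(S_star, s_obs):
    """Two-stage adaptive p-value.

    S_star : (B, K) array of B null realizations of S_Z*(c) over K values of c
    s_obs  : length-K array of observed S_Z(c) over the same c values
    Returns (p_c, overall): the minimum p-value over c and the calibrated
    p-value, an estimate of P(P_c* < p_c).
    """
    # stage 1: pointwise p-values p(c), then their minimum p_c
    p_obs = (S_star >= s_obs).mean(axis=0)
    p_c = p_obs.min()
    # stage 2: null distribution of P_c, treating each realization as observed
    ranks = (S_star[None, :, :] >= S_star[:, None, :]).mean(axis=1)  # (B, K)
    P_c_star = ranks.min(axis=1)
    return p_c, (P_c_star < p_c).mean()
```

For instance, with four null draws (1, 2, 3, 4) at a single $c$ and an observed value of 2.5, stage 1 gives $p_c = 0.5$ and the calibrated p-value is 0.25.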

5. Numerical Study

To assess the statistical properties and performance of the proposed test, we conducted comprehensive simulations using several commonly used survival distributions. These simulations evaluate the statistical power and reliability of the test across varying sample sizes and censoring proportions. Similar to [8], the simulations incorporate several configurations: the log-logistic family and Weibull distributions with scale parameter $\theta = 2$ (scenarios where the accelerated life model holds), as well as the lognormal survival distribution with location (shift) parameter $\theta = 2$ (a scenario where the accelerated life model is violated). Including both compatible and incompatible models enables examination of the test’s validity under correct model specification and its robustness to model misspecification. The specific simulation cases are outlined below.
  • Case (a). We examine the case where $2^{1-i} X_{ij}$ follows the log-logistic density $2t/(1+t^2)^2$, $t > 0$, while the censoring variable $C_{ij}$ has density $2h^2 t/(1+h^2 t^2)^2$, $t > 0$, for some constant $h$.
  • Case (b). We also investigate the case where $2^{1-i} X_{ij}$ has Weibull density $2t e^{-t^2}$, $t > 0$, with $\log(C_{ij})$ following a normal distribution with mean $h$ and unit standard deviation.
  • Case (c). Lastly, we consider the case where $\log\{X_{ij} - 2(i-1)\}$ follows the standard normal distribution, while $\log(C_{ij})$ follows a uniform distribution on the interval $[h, 1+h]$. In contrast to cases (a) and (b), where the accelerated life model holds, this configuration represents the location-shift model defined by the relationship $\Lambda_2(t) = \Lambda_1(t - \theta)$. This setup enables evaluation of the test’s performance under conditions that violate the accelerated life model assumption.
  • The constant $h$ was chosen in each scenario to achieve the desired censoring proportion, allowing evaluation of the test’s performance under varying censoring levels. In addition, the integration limits $u$ and $v$ in (1) define the temporal overlap of observations between the two groups. In survival analysis, explosive behavior is often observed near the endpoints of the estimating function, so this setup is necessary to appropriately capture and account for such boundary effects. Specifically, $u$ is the maximum of the groups’ minimum observation times and $v$ is the minimum of their maximum observation times, so the interval $[u, v]$ represents the time period in which observations from both groups coexist.
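As an illustration of how such data can be generated, case (a) admits inverse-CDF sampling: the baseline log-logistic CDF is $F(t) = t^2/(1+t^2)$, so $F^{-1}(u) = \sqrt{u/(1-u)}$, and the censoring variable belongs to the same family with scale $1/h$. This sketch reflects our reading of case (a) and is not code from the paper:

```python
import numpy as np

def case_a_sample(n, i, theta=2.0, h=1.0, rng=None):
    """Generate (T_ij, Delta_ij) for sample i in case (a).

    theta^{1-i} X_ij is log-logistic with CDF t^2/(1+t^2); the censoring
    variable C_ij has density 2 h^2 t / (1 + h^2 t^2)^2, so h tunes the
    censoring proportion.
    """
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(size=n)
    x = theta ** (i - 1) * np.sqrt(u / (1.0 - u))  # lifetime via inverse CDF
    w = rng.uniform(size=n)
    c = np.sqrt(w / (1.0 - w)) / h                 # censoring time
    return np.minimum(x, c), (x <= c).astype(int)
```

With $h = 1$ and $i = 1$, the lifetime and censoring variables are identically distributed, so about half of the observations are censored.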
Table 1 and Table 2 present the power and Type I error rates of the proposed test for cases (a), (b), and (c), based on 500 repetitions. For each case, the simulations were conducted with censoring proportions of 25% and 50% and sample sizes of 25 and 50. The simulations used the weight $W_n(t) = \{1 - \hat{F}(t)\}^2 \hat{F}^2(t)/t$ for the estimating function in (1), which is of the form $\eta\{\hat{F}(t)\}/t$. The results show that the test performs well overall, with adequate power under both modest (25%) and heavy (50%) censoring across all sample sizes considered, while maintaining Type I error rates close to the nominal level $\alpha = 0.05$ under $H_0$. These results demonstrate the effectiveness of the proposed method for testing two-sample accelerated life models across various scenarios. To further demonstrate its practical utility, we applied the procedure to a real-world dataset.
The dataset comprises clinical data from 35 patients diagnosed with limited stage II or IIIA ovarian carcinoma at the Mayo Clinic and has become a benchmark for methodological validation in survival analysis. It stratifies patients into two cohorts by tumor differentiation (low-grade versus well-differentiated). Many researchers have examined this dataset, including [7,15] and [4], and demonstrated the inadequacy of the accelerated life model. In line with these findings, this study further demonstrates that the relationship between disease grade and progression cannot be adequately captured by the two-sample scale model.
Specifically, the sample has a censoring proportion of $13/35 \approx 37\%$, and the estimated scale parameter is $\hat{\theta} = 0.56$ using the same weight function as in the simulations. The observed p-value is 0.0075, attained at $c = 0$. The reference set for the test was constructed from 5000 realizations of the $G_{ij}$ drawn from $N(0, 1)$. This reference set generates the null distribution of $P_c$, from which the approximate value $P(P_c^* < p_c) = 0.0077$ is calculated. These results provide strong evidence against the adequacy of the two-sample accelerated life model. Figure 1, which displays the estimated cumulative hazard functions for both groups, supports this conclusion graphically. These results underscore the need for more flexible or nonparametric approaches to accurately characterize outcomes.

6. Concluding Remarks

Motivated by [12], we have proposed an adaptive and robust model-checking method for two-sample accelerated life models. The procedure uses an estimating function that incorporates the integrated cumulative hazard difference developed by [15]. Following the approach in [12], our test employs a simple simulation-based approach to determine optimal critical values, yielding a versatile procedure that performs well across diverse scenarios without assuming specific alternative hypotheses. Simulation results demonstrate that the proposed test maintains adequate power and reliable Type I error control under various conditions, and a real-data analysis further illustrates the method. Although developed specifically for accelerated life models, the methodology extends to other survival models, such as the accelerated hazards model, providing a flexible framework that maintains statistical power across diverse lifetime distributions.

Acknowledgments

This research was conducted during a sabbatical leave granted by Illinois Wesleyan University. The author is grateful for this support.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Aalen, O.O. Nonparametric inference for a family of counting processes. Annals of Statistics 1978, 6, 701–726.
  2. Cox, D.R. Regression models and life-tables (with discussion). Journal of the Royal Statistical Society, Series B 1972, 34, 187–220.
  3. Gill, R.D. Censoring and Stochastic Integrals; MC Tract 124; Mathematical Centre: Amsterdam, 1980.
  4. Gill, R.D.; Schumacher, M. A simple test for the proportional hazards assumption. Biometrika 1987, 74, 289–300.
  5. Kalbfleisch, J.D.; Prentice, R.L. The Statistical Analysis of Failure Time Data; Wiley: New York, 1980.
  6. Kaplan, E.L.; Meier, P. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association 1958, 53, 457–481.
  7. Lee, S.H. On the versatility of the combination of the weighted log-rank statistics. Computational Statistics & Data Analysis 2007, 51, 6557–6564.
  8. Lee, S.H.; Yang, S. Checking the censored two-sample accelerated life model using integrated cumulative hazard difference. Lifetime Data Analysis 2007, 13, 371–380.
  9. Lin, D.Y.; Wei, L.J.; Ying, Z. Checking the Cox model with cumulative sums of martingale-based residuals. Biometrika 1993, 80, 557–572.
  10. Lin, D.Y.; Wei, L.J.; Ying, Z. Accelerated failure time models for counting processes. Biometrika 1998, 85, 605–618.
  11. Pepe, M.S.; Fleming, T.R. Weighted Kaplan-Meier statistics: Large sample and optimality considerations. Journal of the Royal Statistical Society, Series B 1991, 53, 341–352.
  12. Uno, H.; Tian, L.; Claggett, B.; Wei, L.J. A versatile test for equality of two survival functions based on weighted differences of Kaplan-Meier curves. Statistics in Medicine 2015, 34, 3680–3695.
  13. Wei, L.J.; Ying, Z.; Lin, D.Y. Linear regression analysis of censored survival data based on rank tests. Biometrika 1990, 77, 845–851.
  14. Xu, X.; Tian, L.; Wei, L.J. Combining dependent tests for linkage or association across multiple phenotypic traits. Biostatistics 2003, 4, 223–229.
  15. Yang, S. Some scale estimators and lack-of-fit tests for the censored two sample accelerated life model. Biometrics 1998, 54, 1040–1052.
Figure 1. Estimated cumulative hazard functions $\hat{\Lambda}_1$ (solid) and $\hat{\Lambda}_2$ (dotted) for the two groups, ovarian cancer data.
Table 1. Results (rejection rate) at $\alpha = 0.05$.

Case    n1 = n2 = 25           n1 = n2 = 50
        25% cens.  50% cens.   25% cens.  50% cens.
(a)     0.93       0.81        0.99       0.97
(b)     0.51       0.50        0.65       0.69
(c)     0.68       0.63        0.71       0.63
Table 2. Size simulation results at $\alpha = 0.05$.

Case    n1 = n2 = 25           n1 = n2 = 50
        25% cens.  50% cens.   25% cens.  50% cens.
(a)     0.0550     0.0640      0.0470     0.0560
(b)     0.0580     0.0670      0.0350     0.0640
(c)     0.0480     0.0620      0.0430     0.0450
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.