Preprint Article (this version is not peer-reviewed)

Artificial Intelligence and Control Charts: A Big Problem

Submitted: 24 September 2025
Posted: 26 September 2025


Abstract
We use the data of some published papers to compare those authors’ findings with ours. The analysis gives different results: the cause is that those authors use the Probability Limits of the PI (Probability Interval) as if they were the Confidence Limits (the Control Limits of the Control Charts, CCs). The Control Limits of the Shewhart CCs are based on the Normal distribution (via the Central Limit Theorem, CLT) and are not valid for non-normally distributed data; consequently, the decisions about the “In Control” (IC) and “Out Of Control” (OOC) states of the process are wrong. The Control Limits of the CCs are wrongly computed, due to unsound knowledge of the fundamental concept of Confidence Interval. Minitab and other software (e.g. JMP, SAS) use the “T Charts”, claimed to be a good method for dealing with “rare events”, but the Control Limits they compute are wrong. The same happens for the Confidence Limits of the parameters of the distributions involved in those papers (Weibull, Inverse Weibull, Gamma, Binomial, Maxwell). We will show that the Reliability Integral Theory (RIT) is able to solve these problems, together with the Sequential way of dealing with data.

1. Introduction

In the old days of Statistics almost all methods were based on the Normal distribution and on the distributions connected with it (t, F of Snedecor, …); when the data were not “normally distributed”, suitable transformations to Normality were used.
Quite recently, simulations have come into fashion; several articles on ResearchGate mention simulations [1,2,3,4,5,6,7,8,9,10,11,12,13,14]; Journals do the same [15]. See the various statements [from 1-14] in Excerpt 1.
Unfortunately, sound Theory is often disregarded [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. In fact, the authors use wrong Control Limits (LCL, UCL) and therefore err in defining the state of control of the process (IC, In Control; OOC, Out Of Control) and the related ARLs (Average Run Lengths).
Since 1989, the author (FG) has tried to inform the Scientific Community about the flaws in the use of (“wrong”) quality methods for making Quality [24], and in 1999 about the GIQA (Golden Integral Quality Approach), showing how to manage Quality during all the activities of Product and Process Development in a Company [25], including Process Management and Control Charts (CC) for Process Control. Control Charts (CC) use sequentially the collected data to assess whether the output of a production or service process is to be considered In Control (IC) or Out Of Control (OOC); the decision is very important for taking Corrective Actions (CA), if needed.
To show our Theory we will use some of the data found in the papers [16,17,18]; we will show the drawbacks of the (wrong) “theory” presented in those papers.
Before that, we mention the very interesting statements in Excerpt 2:
We agree with the authors in Excerpt 2; nevertheless, they did not realise the problem that we are showing here: wrong Control Limits in CCs for Rare Events, with data that are exponentially, Weibull, or Maxwell distributed, or distribution-free. Several papers compute “a-scientific” control limits… See the References…
Tests of Hypotheses and Confidence Intervals (CI) are intimately related and therefore equivalent for decision making. Using the data in [16,17,18] with good statistical methods [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74] we give our “reflections on Artificial Intelligence and Control Charts (CCs)”.
We will try to show that several papers (that are not cited here, but you can find them in the “Garden of flowers” [19] and some in the Appendix A) compute in an a-scientific way the Control Limits of CCs for “Individual Measures or Exponential, Weibull, Maxwell, Gamma and free-distributed data”, indicated as I-CC (Individual Control Charts); we dare to show, to the Scientific Community, how to compute the True Control Limits (True Confidence Limits). If the author is right, then all the decisions taken up to today have been very costly to the Companies using those Control Limits; therefore, “Corrective Actions” are needed, according to the Quality Principles, because NO “Preventive Actions” were taken [19,20,21,22,23,24,25]: this is shown through the suggested published papers. Humbly, given our strong commitment to Quality [19,20,21,22,23,24,25,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74], we would dare to provide the “truth”: Truth makes you free [hen (“hic et nunc” = here and now)].
On 22nd of February 2024, we found the paper “Publishing an applied statistics paper: Guidance and advice from editors” published in Quality and Reliability Engineering International (QREI-2024, 1-17) [by C. M. Anderson-Cook, Lu, R. B. Gramacy, L. A. Jones-Farmer, D. C. Montgomery, W. H. Woodall; the authors have important qualifications and Awards]; since I-CC is a part of “applied statistics” we think that their hints will help: the authors’ sentence “Like all decisions made in the face of uncertainty, Type I (good papers rejected) and Type II (flawed papers accepted) errors happen since the peer review process is not infallible.” is very important for this paper: the interested readers can see [19,20,21,22,23,24,25,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
To let the reader follow our way of approaching the problem of estimation we will use various figures and some data: this is needed because there are wrong ideas in the literature.
By reading [19] and other papers, the readers are confronted with this type of practical problem: we have a warehouse with two departments.
a) In the 1st department, we have a sample (the “The Garden of flowers…” in [19]) of “products (papers)” produced by various production lines (authors);
b) in the other, we have a few products produced by the same production line (same author);
c) several inspectors (Peer Reviewers, PRs) analyse the “quality of the products” in the two departments; the PRs can be the same (but we do not know) for both departments;
d) the final result, according to the judgment of the inspectors (PRs), is the following: the products stored in the 1st dept. are good, while the products in the 2nd dept. are defective. It is a very clear situation, as one can guess from the following statement of a PR: “Our limits [in the 1st dept.] are calculated using standard mathematical statistical results/methods as is typical in the vast literature of similar papers [19].” See the standard mathematical statistical results/methods in the Appendix A and meditate (see the formulae there and in the AI queries)!
Hence, the problem becomes “…the standard … methods as is typical …”: are those standard, typical methods (in the “The Garden …” in [19] and in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]) scientific?
To understand, the readers need to know “Some ideas on Hypothesis Test and The Statistical Hypotheses with the related risks” [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42], or, alternatively, they can read [43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
Due to length limitations of the paper, we must ask the reader to know the following ideas: the definition of a statistical hypothesis as a statement about a population parameter θ (e.g. the ″true″ mean, the ″true″ shape, the ″true″ variance, the ″true″ reliability, the ″true″ failure rate, …, which we assume to exist and to have a value even though it is unknown to us), related to the statistical model F(x|θ) associated with a random variable (RV) X. The set of all the possible values of the parameter is called the parameter space Θ. The goal of a hypothesis test is to decide, based on a sample drawn from the population, which value hypothesised for the population parameter in the parameter space Θ can be accepted as true. Remember: nobody knows the truth…
Generally, two competitive hypotheses are defined, the null hypothesis H0 and the alternative hypothesis H1.
A hypothesis testing procedure (or simply a hypothesis test) is a rule (decision criterion) that specifies
  • for which sample values the decision is made to «accept» H0 as true,
  • for which sample values H0 is rejected and H1 is accepted as true,
based on managerial/statistical considerations which define
  • the test statistic (a formula to analyse the data),
  • the critical region C (rejection region),
to be used for decisions, with the stated risks: the decision criterion.
The subset of the sample space for which H0 will be rejected is called rejection region (or critical region). The complement of the rejection region is called the acceptance region.
If θ denotes the population parameter, the general form of the null hypothesis is H0: {θ∈Θ0} versus the alternative hypothesis H1: {θ∈Θ1}, where Θ0 is a subset of the parameter space Θ and Θ1 a subset disjoint from Θ0; Θ0∪Θ1 = Θ and Θ0∩Θ1 = ∅. Before collecting any data, with H0 we accept a probability α of wrong decision, while with H1 we accept a probability β of wrong decision. A hypothesis test of H0: {θ∈Θ0} versus the alternative hypothesis H1: {θ∈Θ1} might make one of two types of errors, traditionally named Type I Error and Type II Error; their probabilities are indicated as α and β.
If «actually (but we do not know)» H0: {θ∈Θ0} is true and the hypothesis test (the rule, the computed quantity S, in Figure 1), due to the collected data, incorrectly decides to reject H0, then the test (and the Experimenter, the Manager, the Researcher, the Scholar who follows the rule) makes a Type I Error, whose probability is α. If, on the other hand, «actually (but we do not know)» θ∈Θ1 but the test (the rule), due to the collected data, incorrectly decides to accept H0, then the test (and the Experimenter, the Manager, the Researcher, the Scholar who follows the rule) makes a Type II Error, whose probability is β.
These two different situations are depicted in the Table 1 (for simple parametric hypotheses).
The framework of a test of hypothesis is depicted in the Figure 1.
Notice that when we decide to “accept the null hypothesis” in reality we use a short-hand statement saying that “we do not have enough evidence to state the contrary”. It is evident that
α = P(reject H0 | H0 true)        and        β = P(accept H0 | H0 false)
A likelihood ratio test is any test that has a rejection region of the following form {s(D): q(D)≥c}, where c is any number satisfying 0≤c≤1 and s(D) is the “statistic” by which we elaborate the data of the empirical sample D. This test is a measure of how much the evidence, provided by the data D, supports H0.
This has great importance for Control Charts, as you can see in the Figure 3.
Suppose C is the “critical” (or rejection) region for a test, based on a «statistic s(D)» (the formula to elaborate the sampled data D, providing the value s(D)).
Then, for testing H0: {θ∈Θ0}, the test makes a mistake if «s(D)∈C», so that the probability of a Type I Error is α = P(S(D)∈C) [S(D) is the random variable giving the result s(D)]. Important, too, is the power of the test, 1−β, which is the probability of rejecting H0 when in reality H0 is false:
1 − β = P(reject H0 | H0 false)
Therefore, the power function of a hypothesis test with rejection region C is the function of θ defined by 1−β(θ) = P(S(D)∈C); the function β(θ) (the probability of acceptance), evaluated at each value θ, is often named the Operating Characteristic curve [OC curve].
To find the RV S(D) and the region C, we use the likelihood function L(θ|D = {x1, x2, …, xn}):
L(θ|D) = ∏ᵢ₌₁ⁿ f(xᵢ|θ)
Let L0 be the Likelihood function L(θ0|D) and L1 be the Likelihood function L(θ1|D): the most powerful test is the one that has the most powerful critical region C={s(D): q(n)=L1/L0≥kα}, where q(n) is the Likelihood Ratio L1/L0 and the quantity kα is chosen in such a way that the Type I Error has a risk (probability) α as in the formula (4), with fixed n (the sample size),
∫⋯∫_{q(n) ≥ kα} L0 dx1 dx2 ⋯ dxn = α        (4)
The most powerful critical region C has the highest power 1-β(θ).
Let CRn be the “Critical Region” found by (4) and βn be the probability (5), function of n,
βn = ∫⋯∫_{D ∉ CRn} L1 dx1 dx2 ⋯ dxn        (5)
By (4) and (5), increasing n, we select a final sample size n such that βn = β, the desired risk.
Usually, when an efficient estimator exists, it provides a powerful statistic, giving the most powerful test.
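As a numerical illustration of how (4) and (5) drive the choice of the sample size n, here is a minimal sketch (our addition, not from the cited papers; it assumes exponentially distributed data, H0: θ=θ0 versus H1: θ=θ1<θ0, and uses the standard fact that 2·ΣXᵢ/θ follows a χ² distribution with 2n degrees of freedom; the numbers are arbitrary):

```python
# Sketch: choose n so that both risks alpha and beta are met, for
# H0: theta = theta0 vs H1: theta = theta1 < theta0, exponential data.
# Standard fact: 2*sum(X_i)/theta ~ chi-square with 2n degrees of freedom.
from scipy.stats import chi2

def sample_size(theta0, theta1, alpha=0.05, beta=0.05, n_max=1000):
    for n in range(2, n_max):
        # Critical region, as in (4): reject H0 when T = sum(x_i) <= t_crit
        t_crit = theta0 * chi2.ppf(alpha, 2 * n) / 2.0
        # Risk, as in (5): probability of accepting H0 when theta = theta1
        beta_n = chi2.sf(2.0 * t_crit / theta1, 2 * n)
        if beta_n <= beta:
            return n, t_crit, beta_n
    return None

print(sample_size(theta0=100.0, theta1=50.0))  # selects n (about 23 here)
```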
We will use α = β in the following discussion. After the data analysis, we can decide whether the data suggest to “accept (= not reject)” H0: {θ∈Θ0} or to “accept” H1: {θ∈Θ1}; after that we can compute the Confidence Interval, CI = θL-------θU, of the parameter θ, with Confidence Level CL = 1 − α = 1 − (α/2 + β/2) = 1 − (α/2 + α/2).
When we consider the Control Charts we want to test the two hypotheses H0: {the process is “IC (In Control)”} against H1: {the process is “OOC (Out Of Control)”}, and after the data analysis we can compute the Control Interval (which is actually a Confidence Interval), LCL-------UCL.
If we use the Table 4 data (Time Between Failures data, from “Improved Phase… for Monitoring TBE”), it is easy to see (as said with the above warehouse example) that the practical problem becomes a theoretical one [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74] (all references and Figure 21). Since those data are well “exponentially distributed”, we anticipate here, immediately, the wrong formulae (using either the parameter θ = θ0 or its estimate t̄0, with α = 0.0027) in formula (6) below (as you can find in [19])
LCL = −θ0·ln(1 − α/2) ≅ 0.00135·t̄0        UCL = −θ0·ln(α/2) ≅ 6.6077·t̄0        (6)
The readers should understand clearly the theoretical and practical difference between L------U (the Probability Interval) and LCL------UCL (the Confidence Interval), pictorially shown in Figure 2: the two lines L and U depend on the parameter θ (to be estimated) and on the two probabilities α and β, while the two points L and U depend on the assumed value θ0 of the parameter and on the two chosen probabilities α and β; after the data analysis, we compute the estimate t̄0 of the parameter θ and from that the Confidence Interval LCL------UCL, with Confidence Level CL = 1 − α. What is wrong in the formulae (6) should now be clear.
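The difference can be made concrete with a few lines of code (our addition; the data are hypothetical, and the CI uses the standard fact that, for g exponential observations with total T, 2T/θ follows a χ² distribution with 2g degrees of freedom):

```python
# Sketch: Probability Interval (PI) vs Confidence Interval (CI), exponential data.
# The PI endpoints of formula (6) depend only on the assumed theta0 and alpha;
# the CI depends on the observed data (via 2*T/theta ~ chi2 with 2g d.o.f.).
import numpy as np
from scipy.stats import chi2

alpha = 0.0027
data = np.array([12.3, 5.1, 30.2, 8.7, 19.4, 2.9])   # hypothetical TBE data
g, T = len(data), data.sum()
theta_hat = T / g                                     # estimate of the mean theta

# Probability Interval: limits for ONE observation, given theta0 = theta_hat
L = -theta_hat * np.log(1 - alpha / 2)                # ~ 0.00135 * theta_hat
U = -theta_hat * np.log(alpha / 2)                    # ~ 6.6077  * theta_hat

# Confidence Interval for the unknown parameter theta (depends on the data)
theta_L = 2 * T / chi2.ppf(1 - alpha / 2, 2 * g)
theta_U = 2 * T / chi2.ppf(alpha / 2, 2 * g)

print(f"PI for one observation: {L:.4f} ---- {U:.2f}")
print(f"CI for theta:           {theta_L:.2f} ---- {theta_U:.2f}")
```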
Figure 2. Theoretical and Practical Difference between L------U and LCL------UCL.
In the formulae (6), for the interval LCL------UCL (named Control Interval by those authors [24]), the LCL actually must be L and the UCL actually must be U, the vertical interval L------U (Figure 2); the actual interval LCL------UCL is the horizontal one in Figure 2, which is not that of the formulae (6). Artificial Intelligence provides various wrong formulae for the interval LCL------UCL, as done by many authors [19]. Since the errors have been continuing for at least 25 years, we dare to say that this paper is an Education Advance for all the Scholars, for the software sellers and for the users: they should study the books and papers in [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
The readers could think that the I-CCs are well known and well dealt with in the scientific literature on Quality. We have some doubts about that: we will show that, at least in one field, the usage of the I-CC_TBE (with TBE, Time Between Events data), it is not so: there are several published papers, in “scientific magazines and Journals (well appreciated by the Scholars)”, with wrong Control Limits; a sample of the involved papers (from 1994 to January 2024) can be found in [20,21,22,23,24,25]. Therefore, those authors do not extract the maximum information from the data in Process Control. “The Garden…” [19] and Excerpt 1, with the Deming statements, constitute the Literature Review.
Excerpt 3. Some statements of Deming about Knowledge and Theory (Deming 1986, 1997).
We hope that the Deming statements about knowledge will interest the Readers (Excerpt 2).
The good Managers, Researchers, Scholars do not forget that the two risks are always present, and therefore they must take care of the power of the test, 1−β, that they use for the decision (as per the principles F1 and F2) [43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
Figure 3. LCL and UCL of Control Charts with their risks.
Such Managers, Researchers, Scholars use the Scientific Method.
It is important to state immediately and in an explicit way that
the risks must be stated,
together with the goals (the hypotheses),
BEFORE any statistical (reliability) test is carried out and data are analysed.
For the demonstration of reliability characteristics, with reliability tests, Managers, Students, Researchers and Scholars must take into account, according to the F1 principle, the very great importance of W. E. Deming’s statements (Excerpt 2): from these follows, unfortunately for Quality, for the Customers, for the Users and for the Society, this devastating result
The result is that hundreds of people are learning what is wrong. I make this statement on the basis of experience, seeing every day the devastating effects of incompetent teaching and faulty applications.
On many occasions and at several Conferences on Total Quality Management for Higher Education Institutions [Toulon (1998), Verona (1999), Derby (2000), Mons (2001), Lisbon (2002), Oviedo (2003), Palermo (2005), Paisley (2006), Florence (2008), Verona (2009)], the author (FG) showed many real cases, found in books and magazines specialised in Quality, of wrong concepts, methods and applications linked to Quality [43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74]. All the very many documents published (more than 250) by F. Galetto show the profound truth that
facts and figures are useless, if not dangerous, without a sound theory (F. Galetto, 2000),
Brain is the most important asset: let's not forget it. (F. Galetto, 2003),
All that is particularly important for the analysis of any type of data (quality or reliability).

2. Materials and Methods

2.1. A Reduced Background of Statistical Concepts

After the ideas given in the Introduction, we provide the following ones, essential to understand the “problems related to I-CC” as we found them in the literature. We suggest this section for the formulae given and for the difference between the concepts of PI (Probability Interval) and CI (Confidence Interval): this difference is overlooked in “The Garden … [19]”.
Engineering Analysis is related to the investigation of phenomena underlying products and processes; the analyst can communicate with the phenomena only through the observed data, collected with sound experiments (designed for the purpose): any phenomenon, in an experiment, can be considered as a measurement-generating process [MGP, a black box that we do not know] that provides us with information about its behaviour through a measurement process [MP, known and managed by the experimenter], giving us the observed data (the “message”).
It is a law of nature that the data are variable, even in conditions considered fixed, due to many unknown causes.
MGP and MP form the Communication Channel from the phenomenon to the experimenter.
The information, necessarily incomplete, contained in the data, has to be extracted using sound statistical methods (the best possible, if we can). To do that, we consider a statistical model F(x|θ) associated with a random variable (RV) X giving rise to the measurements, the “determinations” D={x1, x2, …, xn} of the RV, constituting the “observed sample” D; n is the sample size. Notice the function F(x|θ) [a function of real numbers, whose form we assume we know] with the symbol θ accounting for an unknown quantity (or some unknown quantities) that we want to estimate (assess) by suitably analysing the sample D.
We indicate by f(x|θ) = dF(x|θ)/dx the pdf (probability density function) and by F(x|θ) the Cumulative Distribution Function, where θ is the set of the parameters of the functions.
We list in Table 2 a sample of models where θ is a set of parameters:
Two important models are the Normal and the Exponential, but we consider also the others for comparison. When θ = {μ, σ²} we have the Normal model, written as N(x|μ, σ²), with (parameters) mean E[X] = μ and variance Var[X] = σ², with pdf
f(x|μ, σ²) = n(x|μ, σ²) = [1/(√(2π)·σ)] e^{−(x−μ)²/(2σ²)}
When θ = {θ} we have the Exponential model, E(x|θ), with (the single parameter) mean E[X] = θ = 1/λ (and variance Var[X] = θ² = 1/λ²), whose pdf is written in two equivalent ways: f(x|θ) = (1/θ)·e^{−x/θ} = λ·e^{−λx} = f(x|λ).
When we have the observed sample D={x1, x2, …, xn}, our general problem is to estimate the value of the parameters of the model (representing the parent population) from the information given by the sample. We define some criteria which we require a "good" estimate to satisfy and see whether there exist any "best" estimates. We assume that the parent population is distributed in a form, the model, which is completely determinate but for the value θ0 of some parameter: e.g. unidimensional θ, or bidimensional θ = {μ, σ²}, or θ = {β, η, ω} as in the GIW(x|β,η,ω), or θ = {β, η, ω, θ} as in the MPGW(x|β,η,ω,θ).
We seek some function of θ, say τ(θ), named inference function, and we see if we can find a RV T which can have the following properties: unbiasedness, sufficiency, efficiency. Statistical Theory allows us the analysis of these properties of the estimators (RVs).
We use the symbols X ¯ and S 2 for the unbiased estimators T1 and T2 of the mean and the variance.
Luckily, we have that T1, in the Exponential model f(x|θ), is efficient [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54], and it extracts the total available information from any random sample, while the couple T1 and T2, in the Normal model, are jointly sufficient statistics for the inference function τ(θ) = (μ, σ²), so extracting the maximum possible of the total available information from any random sample. The estimators (which are RVs) have their own “distribution”, depending on the parent model F(x|θ) and on the sample D: we use the symbol φ(t, θ, n) for that “distribution”. It is used to assess their properties. For a given (collected) sample D, the estimator provides a value t (real number) named the estimate of τ(θ), unidimensional.
A way of finding [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54] the estimate is to compute the Likelihood Function L(θ|D) [LF] and to maximise it: the solution of the equation ∂L(θ|D)/∂θ = 0 is termed the Maximum Likelihood Estimate [MLE]. Both are used also for sequential tests.
The LF is important because it allows us finding the MVB (Minimum Variance Bound, Cramer-Rao theorem) [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54] of an unbiased RV T [related to the inference function τ(θ)], such that
Var[T] ≥ [τ′(θ)]² / E{[∂ ln L(θ|D)/∂θ]²} = MVB(T)
The inverse of the MVB(T) provides a measure of the total available amount of information in D, relevant to the inference function τ(θ) and to the statistical model F(x|θ).
Naming IT(T) the information extracted by the RV T we have that [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]
IT(T) = 1/MVB(T)  ⇔  T is an Efficient Estimator.
If T is an Efficient Estimator there is no better estimator able to extract more information from D.
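A small simulation sketch (our addition, with arbitrary numbers) of this point for the Exponential model, where the MVB for the mean θ is θ²/n and the sample mean attains it (i.e. X̄ is efficient):

```python
# Sketch: for Exponential(theta), the MVB for estimating the mean theta is
# theta**2 / n, and the sample mean attains it: it is an efficient estimator.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 10.0, 25, 200_000

means = rng.exponential(theta, size=(reps, n)).mean(axis=1)
print("empirical Var[X_bar]:", means.var())      # close to 4.0
print("MVB = theta^2 / n:   ", theta**2 / n)     # exactly 4.0
```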
The estimates considered before were “point estimates” with their properties, looking for the “best” single value of the inference function τ(θ).
We recap the very important concept of Confidence Interval (CI) and Confidence Level (CL) [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54].
The “interval estimates” comprise all the values between τL (Lower confidence limit) and τU (Upper confidence limit); the CI is defined by the numerical interval CI={τL-----τU}, where τL and τU are two quantities computed from the observed sample D: when we make the statement that τ(θ)∈CI, we accept, before any computation, that, doing so, we can be right, in a long run of applications, in (1−α)% = CL of the applications, BUT we cannot know IF we are right in the single application (CL = Confidence Level).
We know, before any computation, that we can be wrong α% of the times but we do not know when it happens.
The reader must be very careful to distinguish between the Probability Interval PI={L-----U}, where the endpoints L and U depend on the distribution φ(t, θ, n) of the estimator T (that we decide to use; it does not depend on the “observed sample” D) and on the probability π = 1 − α (that we fix before any computation), as follows from the probabilistic statement (9) [see Figure 2 for the exponential density, when n = 1]
P[L ≤ T ≤ U] = ∫_L^U φ(t, θ, n) dt = 1 − α        (9)
and the Confidence Interval CI={τL-----τU} which depends on the “observed sample” D.
Notice that the Probability Interval PI={L-----U}, given in the formula (9), does not depend on the data D, as you can pictorially see in Figure 2: L and U are the Probability Limits. Notice that, on the contrary, the Confidence Interval CI={τL-----τU} does depend on the data D, pictorially seen in Figure 2. This point is essential for all the papers in the References.
Shewhart identified this approach, L and U, on page 275 of [40] where he states:
  “For the most part, however, we never know f_Θ̄(Θ̄, n) [this is Shewhart’s symbol for our φ(t, θ, n)] in sufficient detail to set up such limits… We usually choose a symmetrical range characterised by limits Θ̄ ± t·σ_Θ̄ symmetrically spaced in reference to Θ̄. Tchebycheff’s Theorem tells us that the probability P that an observed value of Θ̄ will lie within these symmetric limits, so long as the quality standard is maintained, satisfies the inequality P > 1 − 1/t². We are still faced with the choice of t. Experience indicates that t = 3 seems to be an acceptable economic value”. See the Excerpts 3, …
The Tchebycheff Inequality: IF the RV X is arbitrary, with density f(x) and finite variance σ², THEN P(|X − μ| ≥ kσ) ≤ 1/k², where μ = E[X]. This is a “Probabilistic Theorem”.
It can be transferred into Statistics. Let’s suppose that we want to determine experimentally the unknown mean μ within a “stated error ε”. From the above (probabilistic) inequality we have P(μ − ε < X < μ + ε) ≥ 1 − σ²/ε²; IF σ ≪ ε THEN the event |X − μ| < ε is “very probable” in an experiment: this means that the observed value x of the RV X can be written as μ − ε < x < μ + ε and hence x − ε < μ < x + ε. In other words, using x as an estimate of μ we commit an error that “most likely” does not exceed ε. IF, on the contrary, σ ≥ ε, we need n data in order to write P(μ − ε < X̄ < μ + ε) ≥ 1 − σ²/(nε²), where X̄ is the RV “mean”; hence we can derive x̄ − ε < μ < x̄ + ε, where x̄ is the “empirical mean” computed from the data. In other words, using x̄ as an estimate of μ we commit an error that “most likely” does not exceed ε. See the Excerpts 3, 3a, 3b.
Notice that, when we write x̄ − ε < μ < x̄ + ε, we consider the Confidence Interval CI [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54], and no longer the Probability Interval PI [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54].
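A small numerical sketch (our addition, with arbitrary values of μ, σ, ε) of the second case:

```python
# Sketch of the Tchebycheff argument: with n data, the bound
# P(|X_bar - mu| < eps) >= 1 - sigma**2 / (n * eps**2) becomes useful.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps = 50.0, 8.0, 2.0                  # here sigma >= eps
n = int(np.ceil(sigma**2 / (0.05 * eps**2)))     # makes the bound >= 0.95

xbar = rng.normal(mu, sigma, size=(20_000, n)).mean(axis=1)
print("n =", n)
print("empirical P(|X_bar - mu| < eps):", np.mean(np.abs(xbar - mu) < eps))
# The empirical probability far exceeds 0.95: Tchebycheff is conservative.
```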
These statistical concepts are very important for our purpose when we consider the Sequential tests and the Control Charts, especially with Individual data.
Notice that the error made by several authors [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19] is generated by lack of knowledge of the difference between PI and CI [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]: they wrongly think that CI=PI, a widespread disease [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]! They should study some of the books/papers [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54] and remember the Deming statements (Excerpt 2).
The Deming statements are important for Quality. Managers, scholars and professors must learn Logic, Design of Experiments and Statistical Thinking to make good decisions. The authors must do so as well. Quality must be their number one objective: they must learn Quality methods as well, using Intellectual Honesty [1,2,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,25,26,27,28,29,30,31,32,33]. Using (9), those authors do not extract the maximum information from the data in Process Control. To extract the maximum information from the data one needs statistically valid Methods [1,2,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,25,26,27,28,29,30,31,32,33].
As you can find in any good book or paper [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54], there is a strict relationship between CI and Test of Hypothesis, known also as the Null Hypothesis Significance Testing Procedure (NHSTP). In Hypothesis Testing (see the Introduction), the experimenter wants to assess if a “thought” value of a parameter of a distribution is confirmed (or rejected) by the collected data: for example, for the mean μ (parameter) of the Normal n(x|μ, σ²) density, he sets the “null hypothesis” H0={μ=μ0} and the probability P=α of being wrong if he decides that the “null hypothesis” H0 is true when actually the opposite holds: H0 is wrong. When we analyse, at once, the observed sample D={x1, x2, …, xn} and we compute the empirical (observed) mean x̄ and the empirical (observed) standard deviation s, we define the Acceptance Interval, which is the CI
LCL = x̄ − t_{1−α/2}·s/√n  <  μ  <  x̄ + t_{1−α/2}·s/√n = UCL        (10)
Notice that the interval (for the Normal model, with μ and σ assumed)
μ − t_{1−α/2}·σ/√n ------- μ + t_{1−α/2}·σ/√n        (11)
is the Probability Interval such that P[μ − t_{1−α/2}·σ/√n < X̄ < μ + t_{1−α/2}·σ/√n] = 1 − α.
A fundamental reflection is in order: the formulae (10) and (11) tempt the unwise guy to think that he can get the Acceptance Interval, which is the CI [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54], by substituting the assumed values μ0, σ0 of the parameters with the empirical (observed) mean x̄ and standard deviation s. This trick is valid only for the Normal distribution.
The formulae (10) can be used sequentially to test H0={μ=μ0} versus H1={μ=μ1≠μ0}: for any value 2<k≤n we obtain n−2 CIs, decreasing in length; we can continue until either μ1<LCL or UCL<μ0, or both (verify) μ1<LCL and UCL<μ0.
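A minimal sketch of this sequential use of (10) (our addition; the data and the values μ0, μ1 are hypothetical, with μ1 < μ0 so that the stated stopping rule applies as written):

```python
# Sketch: sequential use of formula (10). A t-based CI is computed on each
# growing prefix of the data; we stop when the CI excludes mu1 (mu1 < LCL)
# or excludes mu0 (UCL < mu0), following the rule stated in the text.
import numpy as np
from scipy.stats import t

def sequential_ci(x, mu0, mu1, alpha=0.0027):
    for k in range(3, len(x) + 1):                    # 2 < k <= n
        xk = np.asarray(x[:k], dtype=float)
        half = t.ppf(1 - alpha / 2, k - 1) * xk.std(ddof=1) / np.sqrt(k)
        lcl, ucl = xk.mean() - half, xk.mean() + half
        if mu1 < lcl or ucl < mu0:                    # decision rule
            return k, lcl, ucl
    return None

rng = np.random.default_rng(2)
data = rng.normal(12.0, 1.0, 50)                      # process actually at mu0
print(sequential_ci(data, mu0=12.0, mu1=10.0))        # stops once LCL > mu1
```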
More ideas about these points can be found in [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54].
In the field of Control Charts, following Shewhart, instead of the formula (10) we use (12)
x̄ − z_{1−α/2}·s/(c4√n)  <  μ  <  x̄ + z_{1−α/2}·s/(c4√n)        (12)
where the t-distribution value t_{1−α/2} is replaced by the value z_{1−α/2} of the Normal distribution (actually z_{1−α/2} = 3), and a coefficient c4 is used to make “unbiased” the estimate of the standard deviation, computed from the information given by the sample.
Actually, Shewhart does not use the coefficient c4, as you can see from page 294 of Shewhart’s book (1931), where X̄ is the “Grand Mean” computed from D [named here the empirical (observed) mean x̄] and σ is the “estimated standard deviation of each sample” (named here s, with sample size n=20, in Excerpt 3).
Excerpt 3. From Shewhart book (1931), on page 294.

2.2. Control Limits by AI Versus Sound Theory

In the first part of this section we provide the ideas of the Statistical Theory, while in the second one we see what AI tells us.
Statistical Process Management (SPM) entails Statistical Theory and tools used for monitoring any type of process, industrial or not. The Control Charts (CCs) are the tool used for monitoring a process, to assess its two states: the first, when the process, named IC (In Control), operates under the common causes of variation (variation is always naturally present in any phenomenon), and the second, named OOC (Out Of Control), when the process operates under some assignable causes of variation. The CCs, using the observed data, allow us to decide if the process is IC or OOC. CCs are a statistical test of hypothesis for the process null hypothesis H0={IC} versus the alternative hypothesis H1={OOC}. Control Charts were highly regarded by Deming [29,30] and Juran [32], after Shewhart’s invention [40,41].
We start with Shewhart ideas (see the Excerpts 3, 3a and 3b).
In the excerpts, X̄ is the (experimental) “Grand Mean” computed from D (we, on the contrary, use the symbol x̄), σ is the (experimental) “estimated standard deviation of each sample” (we, on the contrary, use the symbol s, with sample size n=20, in Excerpts 3a, 3b), and σ̄ is the “estimated mean standard deviation of all the samples” (we, on the contrary, use the symbol s̄).
Excerpt 3a. From Shewhart book (1931), on page 89.
On page 95, he also states that
Excerpt 3b. From Shewhart book (1931), on page 294.
So, we clearly see that Shewhart, the inventor of the CCs, used the data to compute the Control Limits, LCL (Lower Control Limit, which is the Lower Confidence Limit) and UCL (Upper Control Limit, the Upper Confidence Limit), both for the mean μ_X (1st parameter of the Normal pdf) and for σ_X (2nd parameter of the Normal pdf). They are considered the limits comprising 99.73% of the observed data. Similar ideas can be found in [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54] (with Rozanov, 1975, we see the idea that CCs can be viewed as a Stochastic Process).
We invite the readers to consider that if one assumes that the process is In Control (IC) and knows the parameters of the distribution, so that he can test whether the assumed known values of the parameters are confirmed or disproved by the data, then he does not need the Shewhart Control Charts; it is sufficient to use NHSTP or the Sequential Test Theory!
Remember the ideas in the previous section and compare Excerpts 3, 3a, 3b (where LCL, UCL depend on the data) with the following Excerpt 4 (where LCL, UCL depend on the Random Variables), and appreciate the profound “logic” difference: this is the cause of the many errors in the CCs for TBE [Time Between Events] (see [19,42,43,44,45,46,47,48,49,50,51,52,53,54]).
The formulae LCL1 and UCL1 in Excerpt 4 are actually the Probability Limits (L and U) of the Probability Interval PI of formula (9), when φ(t, θ, n) is the pdf of the estimator T related to the Normal model F(x|μ, σ²). Using (9), those authors do not extract the maximum information from the data in Process Control. From the Theory [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36] we derive that the interval L = μ_Y − 3σ_Y ------ μ_Y + 3σ_Y = U is the PI for the RV Y = X̄, such that
P[μ_Y − 3σ_Y ≤ Y = X̄ ≤ μ_Y + 3σ_Y] = 0.9973
and it is not the CI of the mean μ=μY [as wrongly said in the Excerpt 4, where actually (LCL1-----UCL1)=PI].
The same error is in other books and papers (not shown here but the reader can see in [19,20,21,22,23]).
Figure 3. Control Limits LCLX----UCLX=L----U (Probability Interval), for Normal data (Individuals xᵢⱼ, sample size k), “sample means” x̄ᵢ and “grand mean” x̿.
Figure 4. Individual Control Chart (sample size k=1). Control Limits LCL----UCL=L----U (Probability Interval), for Normal data (Individuals xᵢ) and “grand mean” x̄.
The data plotted in the CCs [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54] (see Figures 3-4) are the means x̄(tᵢ), determinations of the RVs X̄(tᵢ), i=1, 2, ..., n (n = number of the samples), computed from the sequentially collected data of the i-th sample Dᵢ={xᵢⱼ, j=1, 2, ..., k} (k = sample size), determinations of the RVs X(tᵢⱼ) at very close instants tᵢⱼ, j=1, 2, ..., k. In other applications, I-CC (see Figure 4), the data plotted are the Individual Data x(tᵢ), determinations of the Individual Random Variables X(tᵢ), i=1, 2, ..., n (n = number of the collected data), modelling the measurement process (MP) of the “Quality Characteristic” of the product: this model is very general because it is able to consider every distribution of the Random Process X(t), as we can see in the next section. From the Excerpts 3, 3a, 3b and formula (10) it is clear that Shewhart was using the Normal distribution, as a consequence of the Central Limit Theorem (CLT) [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]. In fact, he wrote on page 289 of his book (1931): “… we saw that, no matter what the nature of the distribution function of the quality is, the distribution of the arithmetic mean approaches normality rapidly with increase in n (his n is our k), and in all cases the expected value of means of samples of n (our k) is the same as the expected value of the universe” (CLT in Excerpts 3, 3a, 3b).
Let k be the sample size; the RVs X̄(tᵢ) are assumed to be normally distributed and uncorrelated; X̄(tᵢ) [i-th rational subgroup] is the mean of the IID RVs X(tᵢⱼ), j=1, 2, ..., k (k data sampled at very close times tᵢⱼ).
To show our way of dealing with CCs we consider the process as a “stand-by system whose transition times from a state to the subsequent one” are the collected data. The lifetime of the “stand-by system” is the sum of the lifetimes of its units. The process (modelled by a “stand-by system”) behaves as a Stochastic Process X(t) [25,26,27,28,29,30,31,32,33], that we can manage by the Reliability Integral Theory (RIT): see the next section; this method is very general because it is able to consider every distribution of X(t).
If we assume that X(t) is distributed as f(x) [the probability density function (pdf) of the “transitions from a state to the subsequent state” of a stand-by subsystem], the pdf of the (RV) mean X̄(tᵢ) is, due to the CLT (page 289 of the 1931 Shewhart book), X̄(tᵢ) ~ N(μ_X̄(tᵢ), σ²_X̄(tᵢ)) [experimental mean x̄(tᵢ)], with mean μ_X̄(tᵢ) and variance σ²_X̄(tᵢ). X̿ is the “grand” mean and σ²_X̿ is the “grand” variance: the pdf of the (RV) grand mean is X̿ ~ N(μ_X̿, σ²_X̿) [experimental “grand” mean x̿]. In Figure 3 we show the determinations of the RVs X̄(tᵢ) and of X̿.
When the process is Out Of Control (OOC, assignable causes of variation), some of the means μ_X̄(tᵢ), estimated by the experimental means x̄ᵢ = x̄(tᵢ), are “statistically different” from the others [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]. We can assess the OOC state of the process via the Confidence Intervals (through the Control Limits) with CL=0.9973. Remember the trick valid only for the Normal distribution: consider the PI, L = μ_Y − 3σ_Y ------ μ_Y + 3σ_Y = U; putting x̿ in place of μ_Y and s̄/√k in place of σ_Y, we get the CI of μ_X̿ when the sample size k is considered for each X̄(tᵢ), with CL=0.9973. The quantity s̄ is the mean of the standard deviations of the samples. This allows us to compare each (subsystem) mean μ_X̄(t_q), q=1, 2, …, n, to any other (subsystem) mean μ_X̄(t_r), r=1, 2, …, n, and to the (stand-by system) grand mean μ_X̿ = μ. If two of them are different, the process is classified as OOC. The quantities LCL_X = x̿ − 3s̄/√k and UCL_X = x̿ + 3s̄/√k are the Control Limits of the CC, which are the Confidence Limits. When the Ranges Rᵢ = max(xᵢⱼ) − min(xᵢⱼ) are considered for each sample, we have LCL_X = x̿ − A2·R̄, UCL_X = x̿ + A2·R̄ and LCL_R = D3·R̄, UCL_R = D4·R̄, where R̄ is the “mean range” and the coefficients A2, D3, D4 are tabulated and depend on the sample size k [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54].
We stress that the interval LCLX-------UCLX is the “Confidence Interval” with “Confidence Level” CL=1-α=0.9973 for the unknown mean μ X t of the Stochastic Process X(t) [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]. The interval LCLR----------UCLR is the “Confidence Interval” with “Confidence Level” CL=1-α=0.9973 for the unknown Range of the Stochastic Process X(t) [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54].
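A minimal computational sketch of these Control Limits (our addition; the subgroup data are hypothetical, and A2, D3, D4 are the standard tabulated constants for sample size k = 5):

```python
# Sketch: X-bar / R control limits (Confidence Limits) from subgroup data.
import numpy as np

# Standard tabulated constants for subgroup size k = 5
A2, D3, D4 = 0.577, 0.0, 2.114

data = np.array([[11.2, 10.8, 11.5, 10.9, 11.1],    # hypothetical subgroups,
                 [10.7, 11.3, 11.0, 11.4, 10.6],    # one row per sample (k = 5)
                 [11.1, 10.9, 11.2, 11.0, 10.8]])

xbar_i = data.mean(axis=1)                  # sample means
R_i = data.max(axis=1) - data.min(axis=1)   # sample ranges
x_grand, R_bar = xbar_i.mean(), R_i.mean()  # grand mean and mean range

print("X-bar chart:", x_grand - A2 * R_bar, "----", x_grand + A2 * R_bar)
print("R chart:    ", D3 * R_bar, "----", D4 * R_bar)
```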
Notice that, ONLY for normally distributed data, the length of the Control Interval (UCLX-LCLX, which is the Confidence Interval) equals the length of the Probability Interval, PI (U-L): UCLX-LCLX=U-L.
The error highlighted, i.e. the confusion between the Probability Interval and the Control Limits (Confidence Interval!) has no consequences for decisions when the data are Normally distributed, as considered by Shewhart. On the contrary, it has BIG consequences for decisions WHEN the data are Non-Normally distributed [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
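The size of those consequences is easy to quantify with a simulation (our addition): for exponential data the Normal-style limit μ + 3σ = 4μ is exceeded with probability e⁻⁴ ≈ 0.0183, more than 13 times the nominal 0.00135 of the Normal case.

```python
# Sketch: false-alarm rate of Normal-style 3-sigma limits on exponential data.
# For Exp(mean=mu): sigma = mu, so UCL = 4*mu and P(X > 4*mu) = exp(-4),
# far above the nominal 0.00135 of the Normal case.
import numpy as np

rng = np.random.default_rng(3)
mu = 1.0
x = rng.exponential(mu, size=1_000_000)
print("P(X > mu + 3*sigma) empirical:", np.mean(x > 4 * mu))
print("theoretical exp(-4):          ", np.exp(-4.0))   # ~ 0.0183 >> 0.00135
```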
We think that the paper “Quality of Methods for Quality is important”, [24] appreciated and mentioned by J. Juran at the plenary session of the EOQC (European Organization for Quality Control) Conference (1989), should be considered and meditated.
====== Now we see what AI answered to our queries.======
First Query: [NOTICE that the answer is like from the Theory!]
How can I compute LCL and UCL in Control Charts for individuals Normally distributed?
ChatGPT says:
To compute the Lower Control Limit (LCL) and Upper Control Limit (UCL) in Control Charts for Individuals (also called Individuals or X Charts), especially when the data is normally distributed, follow this step-by-step process:
1. Understanding Control Charts for Individuals
When you're dealing with individual measurements (i.e. sample size = 1), you usually use:
  • An Individuals (X) chart to monitor the process mean.
  • A Moving Range (MR) chart to estimate the process variability.
You don't have subgroups like in X-bar and R charts — you're monitoring single data points.
2. Basic Definitions
  • X ¯ : Mean of individual measurements
  • MR: Moving range (difference between consecutive individual values)
  • M R ¯ : Average of the moving ranges
  • d2: A constant depending on subgroup size (for n=2, d2≈1.128)
3. Formulae for Control Limits
Individuals (X) Chart:
To estimate the process standard deviation: σ ^ = M R ¯ / d 2
Then: Center Line (CL): X̄;  UCL: X̄ + 3σ̂;  LCL: X̄ − 3σ̂
Substituting for σ̂:  UCL = X̄ + 3·MR̄/d2;  LCL = X̄ − 3·MR̄/d2
Moving Range (MR) Chart:  CL: MR̄;  UCL: D4·MR̄;  LCL: D3·MR̄
For moving range of 2 (i.e., MR = |Xᵢ − Xᵢ₋₁|): D4=3.267, D3=0
4. Step-by-Step Example
Suppose you have this data: X=[10,12,11,13,12]
Step 1: Compute X̄: X̄ = (10+12+11+13+12)/5 = 11.6
Step 2: Compute Moving Ranges: MR = {|12−10|, |11−12|, |13−11|, |12−13|} = [2, 1, 2, 1]; MR̄ = (2+1+2+1)/4 = 1.5
Step 3: Estimate standard deviation σ ^ = M R ¯ / d 2 =1.5/1.128≈1.33
Step 4: Compute Control Limits
  • UCL= X ¯ +3 σ ^ =11.6+3(1.33)=11.6+3.99=15.59
  • LCL= X ¯ -3 σ ^ =11.6−3.99=7.61
So your X Chart limits are: UCL = 15.59, CL = 11.6, LCL = 7.61
Summary of Constants for n = 2:
Constant Value
d2 1.128
D3 0
D4 3.267
Notes
  • This method assumes normality of the data and independence of observations.
  • If there is autocorrelation, the control chart may not work well.
  • Avoid using standard deviation directly unless the population SD is known.
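The arithmetic of the example above is easy to check with a few lines of code (our addition; it is a direct transcription of the AI’s steps, not an endorsement of the method):

```python
# Sketch: reproduce the I-MR computation from the example above.
import numpy as np

x = np.array([10, 12, 11, 13, 12], dtype=float)
d2 = 1.128                                  # constant for moving range of 2

mr = np.abs(np.diff(x))                     # moving ranges: [2, 1, 2, 1]
sigma_hat = mr.mean() / d2                  # 1.5 / 1.128 ~ 1.33
ucl = x.mean() + 3 * sigma_hat              # ~ 15.59
lcl = x.mean() - 3 * sigma_hat              # ~ 7.61
print(f"CL = {x.mean():.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```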
Second Query: [NOTICE that the answers are all wrong!]
How can I compute LCL and UCL in Control Charts for individuals Exponentially distributed?
ChatGPT says:
To compute the Lower Control Limit (LCL) and Upper Control Limit (UCL) for individual control charts (I-Charts) when the data follow an exponential distribution, you need to take into account that the usual assumptions for I-Charts (normality, constant variance, etc.) do not hold. However, there are appropriate methods adapted for exponential data.
Overview of the Problem
  • You have individual observations (i.e., no subgroups).
  • The data follow an Exponential distribution (e.g., lifetimes, waiting times).
  • Traditional I-chart (X chart) limits assuming normality may not be appropriate.
Approach to Control Charts for Exponentially Distributed Data
The exponential distribution is often used to model the time between events in a Poisson process. It has a probability density function (PDF) given by: f(x|λ) = λe^{−λx}, x ≥ 0, where λ is the rate parameter, which is the inverse of the mean (i.e., λ = 1/μ).
The exponential distribution is not symmetric (it's positively skewed), and has:
  • Mean: μ=1/λ, Variance: σ2=1/λ2
In a SECOND TIME query BEGINNING=====================
Control Charts for Individual Measurements:
  • In an individual control chart for exponentially distributed data, each data point represents the individual measurement, and we are concerned with monitoring the location (mean) of the process.
For individual measurements from an exponential distribution, you can use Shewhart control charts adapted for exponential data, typically using a transformation or using a non-normal chart like the exponential control chart.
UCL and LCL Formula:
For an exponentially distributed process, the formulas for the UCL and LCL are typically derived from the properties of the exponential distribution.
  • Center Line (CL): The center line is the expected value (mean) of the exponentially distributed data, i.e., μ.
  • UCL and LCL are typically calculated as a function of the standard deviation (or sometimes the range) of the individual data points and a constant multiplier (commonly denoted as -3 +3 for a 3-sigma control chart).
Steps to Calculate:
  • Determine the Average (Mean) μ of the individual measurements.
    If you have a sample of data points x1, x2, …, xn, the mean is: μ = Σᵢ₌₁ⁿ xᵢ / n
  • Determine the Control Limits:
For an exponentially distributed process, the standard deviation of the individual data points is also μ, which is the same as the mean (since for an exponential distribution, the mean and standard deviation are equal); UCL (Upper Control Limit): UCL=μ+3μ=4μ and LCL (Lower Control Limit): LCL=μ−3μ=−2μ
However, since you can't have negative values for exponentially distributed data (because the data cannot go below zero), if the LCL becomes negative, it should be set to 0.
Therefore, the LCL is: LCL=max[0, −2μ]
3. Plotting the Control Chart:
Once you have the UCL, LCL, and mean, you can plot the individual data points and compare them against the control limits. The control chart will signal out-of-control points when they fall outside the UCL or LCL.
Example: Suppose you have the following data points from an exponentially distributed process: 2.1, 3.0, 1.8, 2.5, 3.3, 2.9
  • Calculate the mean: μ=(2.1+3.0+1.8+2.5+3.3+2.9)/6=2.67
  • Calculate the UCL and LCL:
    UCL: UCL=4x2.67=10.68, LCL:LCL=max (0,−4.01)=0
  • So, for this dataset, the UCL is 10.68 and the LCL is 0.
This is the basic process for computing control limits for individual exponentially distributed data in a control chart. Would you like to go over an example in more detail or dive into other aspects of control charts?
SECOND TIME query END=====================
◇ Method 1: Control Chart Using Transformation (Log Transformation)
You can transform exponential data to approximate normality:
  • Let Xi∼Exponential(λ)
  • Then Yi=ln(Xi) is approximately Gumbel distributed, which may still not be normal, but may behave better.
  • Construct the chart based on Yi, then back-transform control limits if needed.
However, this is approximate and not always reliable.
◇ Method 2: Use of Quantiles – Exact Method for Exponential
Since exponential data is skewed, control limits can be set using quantiles of the exponential distribution. Given:
  • You assume the exponential distribution is the correct model.
  • You want the control limits to capture a certain percentage of the in-control process.
Then the LCL and UCL are set using percentiles from the exponential distribution:
Control Limits for Individuals (based on quantiles):
If you want to set the control limits at, say, 3-sigma equivalent (which covers ~99.73% of normal data), then instead of standard deviation, use percentiles:
Let’s say your exponential distribution has rate λ, i.e. X∼Exp(λ), with mean μ=1/λ.
Then: LCL = F⁻¹(α/2) = −(1/λ)·ln(1−α/2)  and  UCL = F⁻¹(1−α/2) = −(1/λ)·ln(α/2)
For example, if you want the control limits to cover 99.73% (like ±3σ in normal distribution), set α=1−0.9973=0.0027
So: LCL = −(1/λ)ln(1−0.00135) and UCL = −(1/λ)ln(0.00135)
This gives you asymmetric control limits appropriate for exponential data.
◇ Method 3: Use of Moving Range (I-MR Chart Alternative)
If you still want to use the Individuals-Moving Range (I-MR) chart:
  • Use mean of individual observations as the center line.
  • Estimate the average moving range M R ¯
  • Estimate the process standard deviation: σ̂ = MR̄/d2, where d2 ≈ 1.128 for a moving range of 2.
  • Then control limits: UCL= X ¯ +3 σ ^ , and LCL= X ¯ -3 σ ^
Preprints 178099 i013 Caution: This assumes approximate normality, which does not hold for exponential data. Thus, this is not strictly valid unless the exponential is close to symmetric (i.e., large mean).
Recommended: Use Exponential Distribution Quantiles (Method 2)
If you’re confident the process follows an exponential distribution (e.g., lifetimes, waiting times, times between failures), the quantile-based control limits (Method 2) are the most appropriate.
Example: Assume λ = 0.5, so μ = 2
Using α=0.0027, LCL≈−2ln(0.99865)≈0.0027, and UCL=−2ln(0.00135)≈13.21
So, your control limits would be: CL = 2, LCL ≈ 0.0027, UCL ≈ 13.21
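These numbers are easy to reproduce (our addition; note that, as argued in Section 2.1, these quantile limits are the Probability Limits of the PI, computed from an assumed λ, and NOT Confidence Limits computed from the data):

```python
# Sketch: the quantile-based "control limits" suggested by the AI for
# exponential data. They are Probability Limits of the PI, computed from an
# ASSUMED lambda -- not Confidence Limits computed from the data.
import numpy as np

lam, alpha = 0.5, 0.0027
lcl = -(1 / lam) * np.log(1 - alpha / 2)    # ~ 0.0027
ucl = -(1 / lam) * np.log(alpha / 2)        # ~ 13.21
print(f"CL = {1 / lam}, LCL = {lcl:.4f}, UCL = {ucl:.2f}")
```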
Summary
Method | Use when | Notes
Transformation | Data not too skewed | Approximate, may not always help
Quantiles (recommended) | Exponential model is appropriate | Exact, statistically valid
I-MR Chart | Simpler method needed | Assumes normality; use with caution
If you'd like, I can help generate control limits for your actual data or simulate an example. Just upload your dataset or specify the λ (or sample of data).
Third Query: [NOTICE that the answers are all wrong!]
How can I compute LCL and UCL in Control Charts for individuals Exponentially distributed?
ChatGPT says:
To compute the Lower Control Limit (LCL) and Upper Control Limit (UCL) for Control Charts for individuals when the data is exponentially distributed, we need to acknowledge that:
  • The exponential distribution is not symmetric, unlike the normal distribution assumed in traditional Shewhart Individuals (I) charts.
  • Standard I-MR (Individuals and Moving Range) charts assume normality, so applying them directly to exponential data can lead to incorrect control limits.
  • However, with appropriate transformation or modified control limits, you can still monitor such processes.
Option 1: Transform the data to normality
If your data X∼Exponential(λ), then:
  • The natural log transformation: Y=ln(X) results in Y having a distribution that is closer to Extreme Value Type I (Gumbel), but still more manageable than raw exponential.
However, since control charts often require normality, it's common to apply log transformation and then use regular Individuals charts on the transformed data. Here's how:
Steps for Log-Transformed I-chart:
1. Take the natural log of all your data points: Yᵢ = ln(Xᵢ)
2. Compute:
  • Ȳ: mean of the Yᵢ
  • MRᵢ = |Yᵢ − Yᵢ₋₁|: moving ranges of the logged data
  • MR̄: average of the moving ranges
3. Estimate the standard deviation using: σ_Y = MR̄/1.128
4. Compute UCL and LCL for the I-chart of the logged data:
UCL_Y = Ȳ + 3σ_Y        LCL_Y = Ȳ − 3σ_Y
5. To get control limits in the original scale, exponentiate the limits:
UCL_X = e^{UCL_Y}        LCL_X = e^{LCL_Y}
This approach makes the data approximately normal, so standard control chart techniques become valid.
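A direct transcription of these steps (our addition; the data reuse the hypothetical values of the earlier example, and Section 2.3 explains why this recipe, too, does not give valid Control Limits):

```python
# Sketch: the log-transformed I-chart proposed by the AI, transcribed as-is.
import numpy as np

x = np.array([2.1, 3.0, 1.8, 2.5, 3.3, 2.9])   # data from the earlier example
y = np.log(x)                                   # step 1
mr_bar = np.abs(np.diff(y)).mean()              # step 2
sigma_y = mr_bar / 1.128                        # step 3
ucl_y = y.mean() + 3 * sigma_y                  # step 4
lcl_y = y.mean() - 3 * sigma_y
print("limits in original scale:", np.exp(lcl_y), "----", np.exp(ucl_y))  # step 5
```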
Option 2: Use control charts designed for exponential data
Some practitioners and researchers propose specialized control charts for exponential distributions, like:
1. Exponential EWMA (Exponentially Weighted Moving Average)
  • EWMA is robust to non-normality and can be tailored for exponential data.
2. Cumulative Sum (CUSUM) control charts
  • Particularly useful when dealing with non-normal distributions.
3. Exact UCL/LCL for exponential data
If you insist on not transforming the data, here's a rough approximation based on percentiles: Let X∼Exponential(λ), then: Mean: μ=1/λ, Standard deviation: σ=1/λ
If you approximate 3-sigma limits like a normal distribution (though not ideal), then:
UCL = μ + 3σ = 1/λ + 3/λ = 4/λ        LCL = μ − 3σ = 1/λ − 3/λ = −2/λ
But LCL would be negative, which is not possible for exponential data (since it's strictly positive). So we either:
  • Set LCL = 0 (common in some practices), or
  • Use percentile-based control limits, like:
LCL=F−1(p1),   UCL=F−1(p2)
where F−1 is the inverse CDF of the exponential distribution, and p1, p2 are lower and upper percentiles (e.g., 0.0015 and 0.9985 for 3-sigma equivalent).
F−1(p)=−(1/λ)ln(1−p)
Recommended Approach
Use the log-transformed individuals chart, which works well in practice and maintains simplicity. Here's a summary:
Step | Action
1 | Take Yᵢ = ln(Xᵢ)
2 | Compute Ȳ, MRᵢ, and MR̄
3 | Estimate σ_Y = MR̄/1.128
4 | Get UCL and LCL in log scale: UCL_Y = Ȳ + 3σ_Y, LCL_Y = Ȳ − 3σ_Y
5 | Exponentiate the limits: UCL_X = e^{UCL_Y}, LCL_X = e^{LCL_Y}
Let me know if you'd like a worked example or Excel template for this.
====== End of answers of AI to our queries.======
NOTICE that the AI answers are wrong for Exponentially distributed data: AI is actually AIG (Artificial IGnorance).
The following section explains why.

2.3. Statistics and Reliability Integral Theory (RIT)

We are going to present the fundamental concepts of RIT (Reliability Integral Theory) that we use for computing the Control Limits (Confidence Limits) of CCs. RIT is the natural way for Sequential Tests, because the transitions happen sequentially; to provide the ideas, we use a “4 units stand-by system”, depicted by 5 states (Figure 5): 0 is the state with all units not failed; 1 is the state with the first unit failed; 2 is the state with the second unit failed; and so on, until the system enters the state 5 where all the 4 units are failed (down state, in yellow): any transition provides a datum to be used for the computations. RIT can be found in the author’s books…
RIT can be used for parameter estimation and Confidence Intervals (CI) (Galetto 1981, 1982, 1995, 2010, 2015, 2016), in particular for Control Charts (Deming 1986, 1997; Shewhart 1931, 1936; Galetto 2004, 2006, 2015). In fact, any Statistical or Reliability Test can be depicted by an “Associated Stand-by System” [25,26,27,28,29,30,31,32,33,34,35,36] whose transitions are ruled by the kernels b_{k,j}(s); we write the fundamental system of integral equations for the reliability tests, whose duration t is related to the interval 0-----t; the collected data tⱼ can be viewed as the times of the various failures (of the units comprising the system) [t0 = 0 is the start of the test, t is the end of the test and g is the number of the data (4 in Figure 5)].
Firstly, we assume that the kernel b_{j,j+1}(s−tⱼ) is the pdf of the exponential distribution f(s−tⱼ|λ) = λ e^{−λ(s−tⱼ)}, where λ is the failure rate of each unit and λ = 1/θ: θ is the MTTF of each unit. We state that R_j(t|tⱼ) is the probability that the stand-by system does not enter the down state g (5 in Figure 5) at time t, when it starts in the state j (0, 1, …, 4) at time tⱼ; W̄_j(t|tⱼ) is the probability that the system does not leave the state j; b_{j,j+1}(s−tⱼ)ds is the probability that the system makes the transition j→j+1 in the interval s-----s+ds.
The system reliability $R_0(t)$ is the solution of the mathematical system of the Integral Equations (13)
$$R_j(t|t_j)=\bar{W}_j(t|t_j)+\int_{t_j}^{t} b_{j,j+1}(s-t_j)\,R_{j+1}(t|s)\,ds,\quad j=0,1,\dots,g-1;\qquad R_g(t|t_g)=\bar{W}_g(t|t_g) \quad (13)$$
With $\lambda e^{-\lambda(s-t_j)}$ we obtain the solution (see Figure 5, putting the Mean Time To Failure of each unit MTTF $=\theta$, $\lambda=1/\theta$; see also the Figure 6)
$$R_0(t)=e^{-\lambda t}\left[1+\lambda t+\frac{(\lambda t)^2}{2!}+\frac{(\lambda t)^3}{3!}+\frac{(\lambda t)^4}{4!}\right]$$
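As a numerical check of this closed form, the series can be compared with the equivalent Poisson cumulative probability (a standard identity for a sum of exponential transitions). A minimal sketch follows; θ = 123 days is taken from the Figure 6 example, while the time grid is an arbitrary assumption of ours.

```python
import math
import numpy as np
from scipy.stats import poisson

theta = 123.0                       # MTTF of each unit (days), as in Figure 6
lam = 1.0 / theta
t = np.linspace(0.0, 1500.0, 7)     # arbitrary time grid (days), assumed

# Closed-form solution of (13) with exponential kernels: series up to (lambda*t)^4/4!
r0_series = np.exp(-lam * t) * sum((lam * t) ** k / math.factorial(k) for k in range(5))

# The same quantity is the Poisson probability of at most 4 transitions by time t
r0_poisson = poisson.cdf(4, lam * t)

assert np.allclose(r0_series, r0_poisson)
```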
The reliability system (13) can be written in matrix form
$$R(t|r)=\bar{W}(t|r)+\int_{r}^{t} B(s|r)\,R(s)\,ds \quad (14)$$
At the end of the reliability test, at time $t$, we know the data (the times of the transitions $t_j$) and the “observed” empirical sample $D=\{x_1,x_2,\dots,x_g\}$, where $x_j=t_j-t_{j-1}$ is the length between the transitions; the transition instants are $t_j=t_{j-1}+x_j$, giving the “observed” transition sample $D^*=\{t_1,t_2,\dots,t_{g-1},t_g,\ t=\text{end of the test}\}$ (times of the transitions $t_j$).
We consider now that we want to estimate the unknown MTTF $\theta=1/\lambda$ of each item comprising the “associated” stand-by system [19,20,21,22,23,24,25,26,27,28,29,30]: each datum is a measurement from the exponential pdf; we compute the determinant $\det[B(s|r);\theta,D^*]=\theta^{-g}\,e^{-T_t/\theta}$ of the integral system (14), where $T_t$ is the “Total Time on Test”, $T_t=\sum_{i=1}^{g}x_i$ [$t_0$ in the Figure 5]: the “Associated Stand-by System” [25,26,27,28,29,30,31,32,33] in the Statistics books provides the pdf of the sum of the RVs $X_i$ of the “observed” empirical sample $D=\{x_1,x_2,\dots,x_g\}$. At the end time $t$ of the test, the integral equations, constrained by the data $D^*$, provide the equation
$$\frac{\partial\ln\det[B(s|r);\theta,D^*]}{\partial\theta}=-\frac{g}{\theta}+\frac{T_t}{\theta^{2}}=0\quad\Longrightarrow\quad \hat{\theta}=\frac{T_t}{g}$$
It is important to notice that, in the case of the exponential distribution [11,12,13,14,15,16,25,26,27,28,29,30,31,32,33,34,35,36], this is exactly the same result as the one provided by the MLM (Maximum Likelihood Method).
If the kernel $b_{j,j+1}(s-t_j)$ is the Normal pdf $f(s-t_j|\mu,\sigma^2)=\frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(s-t_j-\mu)^2/(2\sigma^2)}$, the data are normally distributed, $X\sim N(\mu_X,\sigma_X^2)$, with pdf $\frac{1}{\sqrt{2\pi}\,\sigma_X}\,e^{-(x-\mu_X)^2/(2\sigma_X^2)}$; with sample size $n$ we then get the usual estimator $\bar{X}=\sum X_i/n$, such that $E[\bar{X}]=\mu_X$.
The same happens with any other distribution (e.g. see the Table 2), provided that we write the kernel $b_{i,i+1}(s)$.
The reliability function $R_0(t|\theta)$ [formula (13)], with the parameter $\theta$, of the “Associated Stand-by System” provides the Operating Characteristic Curve (OC Curve, reliability of the system) [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36] and allows us to find the Confidence Limits ($\theta_L$ Lower and $\theta_U$ Upper) of the “unknown” mean $\theta$, to be estimated, for any type of distribution (Exponential, Weibull, Rayleigh, Normal, Gamma, Inverted Weibull, General Inverted Weibull, …): solving, with (a general) unknown $\theta$, the two equations $R_0(t_0|\theta)=1-\alpha/2$ and $R_0(t_0|\theta)=\alpha/2$, we get the two values ($\theta_L$, $\theta_U$) such that
$$R_0(t_0|\theta_L)=\alpha/2\qquad\text{and}\qquad R_0(t_0|\theta_U)=1-\alpha/2 \quad (16)$$
where $t_0$ is the (computed) “total of the lengths of the transitions $x_i=t_j-t_{j-1}$, the data of the empirical sample $D$”, and CL $=1-\alpha$ is the Confidence Level. CI $=\theta_L$--------$\theta_U$ is the Confidence Interval: $\theta_L=1/\lambda_U$ and $\theta_U=1/\lambda_L$.
For example, with Figure 6, we can derive $\theta_L=62.5$ days $=1/\lambda_U$ and $\theta_U=200$ days $=1/\lambda_L$, with CL=0.8. It is quite interesting that the book [34] (Meeker et al., “Statistical Intervals: A Guide for Practitioners and Researchers”, John Wiley & Sons, 2017) uses the same ideas as FG (shown in the formula (16)) for computing the CI; the only difference is that the author FG defined the procedure in 1982 [44], 35 years before Meeker et al.
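Formula (16) can be solved numerically with standard tools. In the minimal sketch below, the Total Time on Test $t_0$ is not given numerically in the text, so $t_0=492$ days is our assumption (it gives $\hat\theta=t_0/g=123$ days, the MTTF shown in Figure 6); with it the output reproduces the magnitudes read off Figure 6 (θ_L ≈ 62.5 and θ_U ≈ 200 days at CL = 0.8).

```python
import math
from scipy.optimize import brentq

def r0(t, theta, g=4):
    """OC curve: reliability of the 'Associated Stand-by System' (series up to g)."""
    u = t / theta
    return math.exp(-u) * sum(u ** k / math.factorial(k) for k in range(g + 1))

t0 = 492.0     # assumed Total Time on Test (days): not given numerically in the text
alpha = 0.2    # CL = 0.8, as in the Figure 6 example

# Formula (16): R0(t0 | theta_L) = alpha/2 and R0(t0 | theta_U) = 1 - alpha/2
theta_L = brentq(lambda th: r0(t0, th) - alpha / 2.0, 1.0, 1e4)
theta_U = brentq(lambda th: r0(t0, th) - (1.0 - alpha / 2.0), 1.0, 1e6)
print(f"theta_L ~ {theta_L:.1f} days, theta_U ~ {theta_U:.1f} days")
# ~61.5 and ~202 days, cf. the 62.5 and 200 days read off Figure 6
```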
As said before, we can use RIT for Sequential Tests; we only have to consider the various transitions and the Total Time on Test up to the last transition we want to consider.

2.4. Control Charts for TBE Data: Some Ideas for Phase I Analysis

Let’s consider now TBE (Time Between Events, time between transitions) data, exponentially or Weibull distributed. Quite a lot of authors (in the “Garden … [19]”) wrongly compute the Control Limits (which are the Confidence Limits) of these CCs.
The formulae shown in the section “Control Charts for Process Management” are based on the Normal distribution (thanks to the CLT; see the Excerpts 3, 3a and 3b); unfortunately, they are used also for non-Normal data [e.g. see formulae (6)]: for that reason, sometimes, the non-Normal data are transformed “with suitable transformations” in order to “produce Normal data” and to apply those formulae [e.g. Montgomery in his book].
Sometimes we have few data and then we use the so-called “Individual Control Charts” (I-CC). The I-CCs are widely used for exponentially (or Weibull) distributed data: they are also named “rare events Control Charts for TBE (Time Between Events) data”, I-CC_TBE.
In the previous section, we computed the CI $=\theta_L$--------$\theta_U$ of the parameter $\theta$, using the (subsample) “transition durations”: $t_O=$ “total of the transition durations (the lengths $x_i=t_j-t_{j-1}$) in the empirical sample (a subsample with $n=4$ only, as an example)”, with Confidence Level CL $=1-\alpha$.
When we deal with an I-CC_TBE we compute the LCL and UCL of the mean $\theta$ through the empirical mean $\bar{t}_O=t_O/n$ of each transition, for the… ; we solve the two following equations (17) for the two unknown values LCL and UCL, for $R(\bar{t}_O|\theta)$ of each item in the sample, similarly to (16)
$$R(\bar{t}_O|LCL)=\alpha/2,\qquad R(\bar{t}_O|UCL)=1-\alpha/2 \quad (17)$$
where now $\bar{t}_O=t_O/n$ is the “mean, to be attributed to the single lengths of the single transitions $x_i=t_j-t_{j-1}$ in the empirical sample $D$”, with the Confidence Level CL $=1-\alpha$: $LCL=1/\lambda_U$ and $UCL=1/\lambda_L$.
For exponentially distributed data (17) becomes (18) [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33], k=1, with CL $=1-\alpha$:
$$e^{-\bar{t}_O/LCL}=\alpha/2\qquad\text{and}\qquad e^{-\bar{t}_O/UCL}=1-\alpha/2 \quad (18)$$
that is, $LCL=-\bar{t}_O/\ln(\alpha/2)$ and $UCL=-\bar{t}_O/\ln(1-\alpha/2)$; for $\alpha=0.0027$, $LCL\approx\bar{t}_O/6.608$ and $UCL\approx 740\,\bar{t}_O$.
The endpoints of the CI $=LCL$--------$UCL$ are the Control Limits of the I-CC_TBE.
This is the right method to extract the “true” complete information contained in the sample (see the Figure 8, Figure 9 and Figure 10). The figures are justified by the Theory [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33] and are related to the formulae [(17), (18) for k=1] for the I-CC_TBE charts.
Remember the book by Meeker et al. [34], “Statistical Intervals: A Guide for Practitioners and Researchers”, John Wiley & Sons (2017): the authors use the same ideas as FG; the only difference is that FG introduced them at least 30 years earlier.
Compare the formulae [(18), for k=1], theoretically derived with a sound Theory [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33], with the ones in the Excerpt [in the Appendix (a small sample from the “Garden … [19]”)] and notice that the two Minitab authors (Santiago & Smith) use the “empirical mean $\bar{t}_O$” in place of the $\theta_0$ in the Figure 1: it is the same trick of replacing the mean $\mu$ with $\bar{\bar{x}}$, which is valid for Normally distributed data only; e.g., see the formulae (1)!
In the next sections we can see the Scientific Results found by a Scientific Theory.

3. Results

In this section we first provide the scientific analysis of the “Piston Rings” data [16] and compare our results with those of the authors: the findings are completely different and, consequently, the decisions would be different, with different costs of wrong decisions.

3.1. Control Charts for Piston Rings data. Phase I Analysis

The Inside Diameter Measurements for Piston Rings data are in the Table 3.
D. C. Montgomery uses the $\bar{x}$ and $s$ Charts to assess the “state of control” of the production process, with Control Limits [see (12)]
$$LCL_{\bar{X}}=\bar{\bar{x}}-\frac{3\,\bar{s}}{c_4\sqrt{5}}, \qquad UCL_{\bar{X}}=\bar{\bar{x}}+\frac{3\,\bar{s}}{c_4\sqrt{5}} \qquad\text{and}\qquad LCL_S=0, \qquad UCL_S=1.964\,\bar{s}$$
He concludes: “There is no indication that the process is out of control, so those limits could be adopted for phase II monitoring of the process.”
We agree with his conclusion.
On the contrary, the authors of [16] decide the opposite, with their “Distribution-free Control Chart for Joint Monitoring of Unknown Location and Scale Parameters”!
To understand, we analysed the type of distribution and found a Normal distribution with mean μ=74.001 and σ=0.0114; the CIs are 73.998<μ<74.004 and 0.0098<σ<0.0137, both with CL=99%.
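These intervals can be checked with standard Normal-theory formulas. In the minimal sketch below, n = 125 individual measurements (25 subgroups of 5, as in Montgomery's Table 6.3) is our assumption; with it the reported intervals are essentially reproduced.

```python
import numpy as np
from scipy.stats import chi2, norm

xbar, s, n = 74.001, 0.0114, 125   # summary stats; n = 25 subgroups x 5 is assumed
cl = 0.99
z = norm.ppf(1.0 - (1.0 - cl) / 2.0)

# CI for the mean mu (Normal model)
mu_lo = xbar - z * s / np.sqrt(n)
mu_hi = xbar + z * s / np.sqrt(n)

# CI for sigma (chi-square on n-1 degrees of freedom)
df = n - 1
sig_lo = s * np.sqrt(df / chi2.ppf(1.0 - (1.0 - cl) / 2.0, df))
sig_hi = s * np.sqrt(df / chi2.ppf((1.0 - cl) / 2.0, df))

print(f"{mu_lo:.3f} < mu < {mu_hi:.3f}")        # ~ 73.998 < mu < 74.004
print(f"{sig_lo:.4f} < sigma < {sig_hi:.4f}")   # ~ 0.0098 < sigma < 0.0136
```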
In this case the AI formulae (for normal data) provide the right answer.
On the contrary, the authors of [16] decide that the process is OOC, with their “Distribution-free Control Chart for Joint Monitoring of Unknown Location and Scale Parameters”, and write:
Excerpt 5. From “Distribution-free Control Chart for Joint Monitoring of Unknown Location and Scale Parameters”.

3.2. Control Charts for “Time Intervals Between Consecutive Earthquakes … in the Mount St. Helens Region of the Washington State, After January 1, 1978”. Phase I Analysis

We continue our route for Quality by considering the papers “ARL-Unbiased Exponential Control Charts With Estimated Parameters: Statistical Design and Implementation” (Quality and Reliability Engineering International 2025), “Statistical design of phase II exponential chart with estimated parameters under the unconditional and conditional perspectives using exact distribution of median run length” (Quality Technology & Quantitative Management 2021), and “Improved Phase I Control Charts for Monitoring Times Between Events” (Quality and Reliability Engineering International 2014).
There are, as well, several drawbacks in many papers dealing with Control Charts (CCs) related to “Time Between Events exponentially distributed data”. As far back as 2000 one can find many papers with wrong methods for computing the Control Limits of the CCs; several are listed in the Garden of Flowers [19], on Academia.edu (where the two above authors often appear).
The problem is, as well, wrongly solved in Minitab 19&20&21 and in JMP, in spite of the fact that the companies’ Management was informed about it.
Both papers use the same data (taken from a paper of Santiago & Smith) about the recorded time intervals between consecutive earthquakes of magnitude 1.0 or higher in the Mount St. Helens region of the Washington State, after January 1, 1978.
The “theory” given there about the CCs with exponentially distributed data is almost the same; we show it through some excerpts copied from the papers.
Assuming a known value of the “constant event rate” [notice the two different symbols $\theta$ and $\lambda$], we read the Excerpt 6, for the years 2025 and 2021:
Excerpt 6. Statements from the two papers published by Quality and Reliability Engineering International and Quality Technology & Quantitative Management.
Notice immediately that the formulae in the Excerpt 6 provide the Probability Limits L and U (vertical segment) of the Random Variable X (exponentially distributed), NOT the Control Limits (LCL and UCL, horizontal segment) of the Control Chart [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25], as the reader can see in the Figure 1 [$\theta_0$ is the assumed known value of the parameter $\theta$, which is the mean $\theta=1/\lambda$ of the exponential pdf $f(x)=\lambda\exp(-\lambda x)$, as written in the 2021 paper (Excerpt 6)].
Figure 7. LCL and UCL for the Control Chart (exponentially distributed data).
The two authors analyse the recorded time intervals between consecutive earthquakes in the Mount St. Helens region and provide the figures in the Excerpt 7.
Notice that the data are shown as the natural logarithm of the original time intervals between consecutive earthquakes.
We present immediately, in the Figure 8, the scientific analysis of the time intervals between consecutive earthquakes, where the Control Limits are derived from the first 20 data, as done in the 2025 QREI paper.
Excerpt 7. Control Charts from the two papers published by QT & QM and QREI (time intervals between consecutive earthquakes, ln vertical scale).
Figure 8. Scientific LCL and UCL, computed with the first 20 data, of the Control Chart (exponentially distributed data [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]) of the time intervals between consecutive earthquakes (ln vertical scale), to be compared with the Excerpt 7.
It is clear that the results are completely different: the 2021 and 2025 papers have “questionable theory”.
Figure 9 shows the result provided by JMP (left side) while the right side provides the result by Santiago & Smith.
Figure 9. Control Charts of the time intervals between consecutive earthquakes to be compared with the Excerpt 7 and the Figure 7, for finding the IC and OOC situations.
The same authors wrote a paper “Improved Phase I Control Charts for Monitoring Times Between Events” [17], using a completely different “theory” from that of the 2021/2025 papers. In the Abstract, they say:
In many situations, the times between certain events are observed and monitored instead of the number of events, particularly when the events occur rarely. In this case, it is common to assume that the times between events follow an exponential distribution. Control charts are one of the main tools of statistical process control and monitoring. Control charts are used in phase I to assist operating personnel in bringing the process into a state of statistical control. In this paper, phase I control charts are considered for the observations from an exponential distribution with an […] and out-of-control performance of the proposed chart. It is seen that the proposed charts are considerably more in-control robust than two competing charts and have comparable out of control properties. Copyright © 2014 John Wiley & Sons, Ltd.
Notice the wrong statement: “It is seen that the proposed charts are considerably more in-control robust than two competing …”. That is the problem: the K&C (Kumar & Chakraborti) method shows In Control (IC) situations when actually the process is Out Of Control (OOC); the method is wrong.
The authors use the following symbols:
  • $\bar{X}=\sum_{i=1}^{n}X_i/n$ denoting the sample average of the $X_i$, $i=1,2,\dots,n$,
  • $\alpha_0$ denoting the overall false alarm rate,
  • $X_{(l)}$, $X_{(m)}$, $X_{(u)}$ denoting the first, the second, and the third quartile, respectively, of the ordered data.
They go on by writing
… we investigate the IC robustness of the one-sided control charts to the assumption of the underlying exponential distribution via simulation…. then carried out for the two-sided charts. The IC robustness is an important attribute of a control chart and should be investigated thoroughly, because in practice, the underlying distribution may not be exactly exponential. The more IC robust the control chart, the more confidence the user will have on the advertised false alarm rate.
The K&C method shows In Control (IC) situations when actually the process is Out Of Control (OOC); the method is wrong!
The data, given in the paper, are in the Table 4.
Table 4. Time between failures data (“Improved Phase… for Monitoring TBE” [17]).
1.24 6.69 9.77 1.23 14.03 18.07 3.90 13.61 18.47 12.85
52.32 14.75 4.69 0.18 13.61 4.57 0.28 7.08 12.00 5.15
6.09 20.41 5.93 19.03 13.65 6.37 2.06 3.30 6.91 12.08
The K&C analysis, with its OOC (Out Of Control) signal, is in the Figure 10:
Figure 10. CC from “Improved Phase… for Monitoring TBE”. Excerpt 6 from [17].
The authors say:
Table … shows a set of 30 failure time data generated from a Poisson distribution with a mean of 0.1. For these data, n = 30, l = 8, m = 15, and u = 23. We monitor these data with the proposed phase I chart. The center line for the proposed two-sided control chart is CL = X(15) = 6.91, and the lower and upper control limits are given by LCL = −53.9213 and UCL = 47.2320. Because LCL < 0, we set the LCL as LCL = 0. It can be seen from Figure (our Figure 10) that the eleventh observation 52.32 plots outside the UCL, which indicates an OOC situation that needs further investigation. Note that for these data, neither the Dovoedo and Chakraborti, nor the Jones and Champ control chart indicates any OOC situation.
On the contrary, the scientific solution is in Figure 11 (vertical axis logarithmic): the UCL is >100.
Using RIT (devised by F. Galetto) we find that the Process is OOC (Out Of Control) for the opposite reason to the one stated by the two authors! See Figure 11.
Comparing the Figure 10 and the Figure 11, it becomes clear that the Control Chart from “Improved Phase… for Monitoring TBE” [17] presents 5 errors about OOC: there is NO OOC for the eleventh observation (52.32) and there are 4 OOC values below the LCL.
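As a check, here is a minimal sketch applying formula (18) to the Table 4 data; α = 0.0027 (the risk stated later in this paper) is assumed. It reproduces the counts just mentioned: four points below the LCL and none above the UCL.

```python
import math
import numpy as np

# Time-between-failures data of Table 4
x = np.array([
    1.24, 6.69, 9.77, 1.23, 14.03, 18.07, 3.90, 13.61, 18.47, 12.85,
    52.32, 14.75, 4.69, 0.18, 13.61, 4.57, 0.28, 7.08, 12.00, 5.15,
    6.09, 20.41, 5.93, 19.03, 13.65, 6.37, 2.06, 3.30, 6.91, 12.08,
])

alpha = 0.0027              # Shewhart-equivalent risk, as stated in this paper
t_bar = x.mean()

# Formula (18): exp(-t_bar/LCL) = alpha/2 and exp(-t_bar/UCL) = 1 - alpha/2
lcl = -t_bar / math.log(alpha / 2.0)         # = t_bar / 6.608
ucl = -t_bar / math.log(1.0 - alpha / 2.0)   # very large; in particular UCL > 100

print(f"t_bar = {t_bar:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.0f}")
print("below LCL:", x[x < lcl])   # 1.24, 1.23, 0.18, 0.28 -> the 4 OOC points
print("above UCL:", x[x > ucl])   # empty -> 52.32 is NOT OOC
```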
The Garden of Flowers [19] can be considered as our literature review…

3.3. Control Charts for TBE Data: Phase II Analysis

We saw in the previous section what is usually done during the Phase I of the application of CCs: estimation of the mean and standard deviation; later, their values are assumed as “true known” parameters of the data distribution, in view of the Phase II.
Now we apply RIT to the data (the “time intervals between consecutive earthquakes …” plotted in the Excerpt 7), to which the authors applied the “theory” in the Excerpt 6.
We consider the paper QREI 2025, where the model (as written by the authors) is “the quality characteristic (e.g., time to an event, TBE) X follows an exponential distribution with probability density function $f(x)=\theta\exp(-\theta x)$, for x>0, where θ>0 is the event rate.” It is a very strange notation because, in the literature, the event rate is λ (as in the QT & QM 2021 paper, related to the Poisson distribution), and θ=1/λ is the mean of the exponential pdf.
The two authors K&C (with 304 and 8,466 citations, respectively) state that the analysis should be based on the Excerpt 8:
Note all the words in the Excerpt 9 and in particular “plugging-in” for the “case U” when the “rate” θ0 is unknown and the “estimator θ ^ 0 ” is “plugged-in”…
This “plugging-in” provides wrong Control Limits LCL and UCL for any non-Normal distribution, while it has no consequences [see the formulae (7)] for the Normal Distribution (in spite of being theoretically wrong). The THEORY in the section 2 shows that.
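One symptom of the “plugging-in” practice can be quantified with a small simulation of ours: for exponential data, plug-in probability limits do not even deliver the nominal false-alarm rate. The Phase I size m = 20, the nominal α and the seed below are assumptions, and this sketch illustrates only the distortion, not the full PI-versus-CI argument.

```python
import numpy as np

rng = np.random.default_rng(3)
theta0, m, alpha = 1.0, 20, 0.0027    # true mean, Phase I size (assumed), nominal risk
reps = 20_000

theta_hat = rng.exponential(theta0, size=(reps, m)).mean(axis=1)   # "plugged-in" estimate

# Plug-in probability limits of X, used as if they were control limits
L = -theta_hat * np.log(1.0 - alpha / 2.0)
U = -theta_hat * np.log(alpha / 2.0)

# Exact conditional false-alarm rate for a future in-control exponential observation
far = (1.0 - np.exp(-L / theta0)) + np.exp(-U / theta0)
print(f"nominal alpha = {alpha}")
print(f"mean FAR = {far.mean():.4f}, 5th-95th percentiles = {np.percentile(far, [5, 95])}")
```

Under these assumptions the average false-alarm rate comes out around 0.0047, well above the nominal 0.0027, and it varies strongly from one Phase I sample to another.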
The first thing to do, when we analyse data “supposed exponentially distributed”, is to assess whether the exponential pdf (1) fits the data “well”.
The two authors K&C did not care about that: they “assumed data exponentially distributed”!
A suitable alternative is the Weibull pdf of the TTF (Time To Failure) Random Variable, with parameters β (shape) and η (scale); the mean MTTF depends on both:
$$f(x|\eta,\beta)=\frac{\beta}{\eta}\left(\frac{x}{\eta}\right)^{\beta-1}\exp\left[-\left(\frac{x}{\eta}\right)^{\beta}\right]$$
The solution of the two equations $\partial\ln\det[B(s|r);\eta,\beta,D]/\partial\beta=0$ and $\partial\ln\det[B(s|r);\eta,\beta,D]/\partial\eta=0$, for the 77 data, is $\hat{\beta}=0.727$, with Confidence Interval CI$_\beta$ = ($\hat{\beta}_L=0.589$, $\hat{\beta}_U=0.922$) at Confidence Level 95%, and $\hat{\eta}=204.23$, with Confidence Interval CI$_\eta$ = ($\hat{\eta}_L=152.72$, $\hat{\eta}_U=282.80$) at Confidence Level 95%.
Since $1\notin$ CI$_\beta$, we cannot assume, with Confidence Level 95%, that the data are exponentially distributed (β=1 reduces the Weibull to the Exponential). Our analysis is in the Figure 12.
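Since the 77 earthquake intervals are not listed here, the sketch below only illustrates the kind of check just described (is β = 1 inside the CI of the Weibull shape?) on simulated stand-in data, using maximum-likelihood fitting and a parametric bootstrap instead of RIT; all numbers (seed, shape, scale, sample size) are our assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
data = rng.weibull(0.8, 77) * 200.0      # stand-in for the 77 intervals (assumed)

# ML fit of the two-parameter Weibull (location fixed at 0)
beta_hat, _, eta_hat = weibull_min.fit(data, floc=0)

# Parametric bootstrap CI for the shape parameter beta
boot = []
for _ in range(500):
    sample = weibull_min.rvs(beta_hat, loc=0, scale=eta_hat,
                             size=data.size, random_state=rng)
    b, _, _ = weibull_min.fit(sample, floc=0)
    boot.append(b)
beta_lo, beta_hi = np.percentile(boot, [2.5, 97.5])

print(f"beta_hat = {beta_hat:.3f}, 95% CI ~ ({beta_lo:.3f}, {beta_hi:.3f})")
print("Exponential plausible (beta = 1 inside CI):", beta_lo <= 1.0 <= beta_hi)
```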
The Figure 13 shows the (graphical) difference between the distribution of the data and the Exponential: it is clear that … as shown by the confidence intervals.
Transforming the Weibull data into Exponential data we find the CC in the Figure 14.
Since the authors K&C (in the 2025 paper) used the first 20 data, as Phase I, to find the Control Limits, we did the same: we searched for the estimates of the Weibull distribution and found the solution of the same two equations for the first 20 data (Phase I); we got $\hat{\beta}_{\text{Phase I}}=0.823$, with Confidence Interval CI$_\beta$ = ($\hat{\beta}_{L,\text{Phase I}}=0.555$, $\hat{\beta}_{U,\text{Phase I}}=1.347$) at CL=95%, and $\hat{\eta}_{\text{Phase I}}=287.75$, with Confidence Interval CI$_\eta$ = ($\hat{\eta}_{L,\text{Phase I}}=178.18$, $\hat{\eta}_{U,\text{Phase I}}=523.78$) at CL=95%.
Figure 12. F. Galetto Control Chart of the “time intervals between consecutive earthquakes …”, using the Weibull distribution; vertical axis logarithmic; UCL is >10000 (RIT used). Process OOC.
Figure 13. Distribution (red) of the “time intervals between consecutive earthquakes …”, compared to the Exponential (blue).
Figure 14. F. Galetto Control Chart of the “transformed (into Exponential) time intervals between consecutive earthquakes …”; vertical axis logarithmic; UCL is >1000 (RIT used). Process OOC.
Figure 15. F. Galetto Control Chart of the “time intervals between consecutive earthquakes …”, with Control Limits found by the Phase I 20 data; vertical axis logarithmic; UCL is >100000 (RIT used). Process OOC.
Since $1\in$ CI$_\beta$ (Phase I), we can assume, with Confidence Level 95%, that the data are exponentially distributed and the Control Limits can be derived with the exponential distribution. Our analysis is in the Figure 15.
Compare the Figure 15 with the figures in the Excerpt 7: Control Chart (on the right) from the paper published by QREI 2025 (time intervals between consecutive earthquakes, ln vertical scale).
You see that the K&C analysis (based on the wrong formulae, in the Excerpts 8 and 9) is not coherent with the Theory [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
All the results are found via RIT (Reliability Integral Theory).

4. Discussion

We decided to use the data from the papers [16,17,18] and the analysis by the authors.
We got different results from those authors: the cause is that they use the Probability Limits of the PI (Probability Interval) as they were the Confidence Limits (Control Limits of the Control Charts).
The proof of the confusion between the intervals L-------U (Probability Interval) and LCL-------UCL (Confidence Interval), in the domain of Control Charts (for Process Management), highlights the importance and novelty of these ideas in the Statistical Theory and in the applications.
For the “location” parameter in the CCs, from the Theory we know that two means $\mu_{\bar{X}}(t_q)$ (parameter), q=1,2,…,n, and any other mean $\mu_{\bar{X}}(t_r)$ (parameter), r=1,2,…,n, are different, with risk α, if their estimates are not both included in their common Confidence Interval, as the CI of the grand mean $\mu_{\bar{\bar{X}}}=\mu$ (parameter) is.
Let’s consider the formula (4) and apply it to a “Normal model” (due to the CLT, and assuming known variance); sequentially we can write the “real” fixed interval L----U comprising the RV $\bar{\bar{X}}$ (vertical interval) and the Random Interval comprising the unknown mean μ (horizontal interval) (Figure 16)
$$P\!\left[L=\mu-\frac{\sigma z_{1-\alpha/2}}{\sqrt{k}}\le \bar{\bar{X}}\le \mu+\frac{\sigma z_{1-\alpha/2}}{\sqrt{k}}=U\right]=P\!\left[\bar{\bar{X}}-\frac{\sigma z_{1-\alpha/2}}{\sqrt{k}}\le \mu\le \bar{\bar{X}}+\frac{\sigma z_{1-\alpha/2}}{\sqrt{k}}\right]$$
When the RV $\bar{\bar{X}}$ assumes its determination (numerical value) $\bar{\bar{x}}$ (grand mean), the Random Interval becomes the Confidence Interval for the parameter μ, with CL=1−α: risk α that the horizontal line does not comprise the “mean” μ.
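The identity above can be illustrated by simulation: the fixed (vertical) Probability Interval contains the RV $\bar{\bar{X}}$ with probability 1−α, and the random (horizontal) interval covers the parameter μ equally often, yet the two objects are conceptually different. In the sketch below μ, σ, k and α are assumed for illustration.

```python
import numpy as np

mu, sigma, k, alpha = 50.0, 5.0, 4, 0.01   # all assumed for illustration
z = 2.5758                                  # z_{1-alpha/2} for alpha = 0.01
half = z * sigma / np.sqrt(k)

rng = np.random.default_rng(0)
xbarbar = rng.normal(mu, sigma / np.sqrt(k), size=100_000)   # the RV X-double-bar

# Fixed (vertical) Probability Interval L-----U contains the RV with prob 1 - alpha ...
p_vertical = np.mean((mu - half <= xbarbar) & (xbarbar <= mu + half))
# ... and the random (horizontal) interval covers the parameter mu equally often
p_horizontal = np.mean((xbarbar - half <= mu) & (mu <= xbarbar + half))

print(p_vertical, p_horizontal)   # both ~ 0.99, but the two intervals differ in meaning
```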
This is particularly important for the Individual Control Charts for Exponential, Weibull, Inverted Weibull, General Inverted Weibull, Maxwell and Gamma distributed data: this is what Deming calls “Profound Knowledge (understanding variation)” [29,30]. In this case, you see that the Confidence Interval is the realisation of the horizontal Random Interval. The same happens for any distribution.
The case we considered shows clearly that the analyses carried out so far in Process Management have been wrong, and the decisions misleading, when the collected data follow a non-Normal distribution [19].
Since a lot of papers (related to the Exponential, Weibull, Inverted Weibull, General Inverted Weibull, Maxwell and Gamma distributions), with the same problem as those in “The garden of flowers” [19], are published in reputed Journals, we think that an “alternative” title, “History is written by the winners. Reflections on Control Charts for Process Control”, would be suitable for this paper: the authors of the wrong papers [19] are the winners.
Figure 16. Probability Interval L---U (vertical line) versus Random Intervals comprising the “mean” μ (horizontal random variable lines), for Normally distributed RVs $\bar{X}\sim N(\mu,\sigma^2)$.
Further studies should consider other distributions which cannot be transformed into the distributions considered above.

5. Conclusions

With our figures (and the Appendix, that is a short extract from the “Garden … [19]”) we humbly ask the readers to look at the references [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74] and find how much the author has been fond of Quality and Scientificness in the Quality (Statistics, Mathematics, Thermodynamics, …) Fields.
The errors in the “Garden … [19]” are caused by the lack of knowledge of sound statistical concepts about the properties of the parameters of the parent distribution generating the data, and about the related Confidence Intervals. For the I-CC_TBE, the Control Limits computed in the literature (which are actually Confidence Limits) are wrong, due to lack of knowledge of the difference between Probability Intervals (PI) and Confidence Intervals (CI). Therefore, the consequent decisions about Process IC and OOC are wrong.
We saw that RIT is able to solve various problems in the estimation (and Confidence Interval evaluation) of the parameters of distributions. The basics of RIT have been given.
We could have shown many other cases (from papers not mentioned here, that you can find in [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75]) where errors were present due to the lack of knowledge of RIT and sound statistical ideas.
Following the scientific ideas of Galileo Galilei, the author many times tried to compel several scholars to be scientific (Galetto 1981-2025). Only Juran appreciated the author’s ideas, when he mentioned the paper “Quality of methods for quality is important” at the plenary session of the EOQC Conference, Vienna [24].
For the control charts, it came out that RIT proved that the T Charts, for rare events and TBE (Time Between Events), used in the software Minitab, SixPack, JMP or SAS, are wrong [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74]. By doing so, the author increased the h-index of the mentioned authors who published wrong papers.
RIT allows the scholars (managers, students, professors) to find sound methods also for the ideas shown by Wheeler in Quality Digest documents.
We informed the authors and the Journals who published wrong papers by writing various letters to the Editors [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]…: no “Corrective Action”, a basic activity for Quality, has been carried out by them so far. The same happened with the Minitab Management. We attended a forum in the JMP User Community and informed them that their “Control Charts for Rare Events” were wrong: they preferred to stop the discussion instead of acknowledging the JMP faults [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
So, dis-quality continues to be diffused among people, and people continue taking wrong decisions…
Deficiencies in products and methods generate huge cost of Dis-quality (poor quality) as highlighted by Deming and Juran. Any book and paper are products (providing methods): their wrong ideas and methods generate huge cost for the Companies using them. The methods given here provide the way to avoid such costs, especially when RIT gives the right way to deal with Preventive Maintenance (risks and costs), Spare Parts Management (cost of unavailability of systems and production losses), Inventory Management, cost of wrong analyses and decisions.
Figure 17. Probability Intervals L-----U versus Confidence Intervals LCL-----UCL in Control Charts.
We think that we provided the readers with the belief that Quality of Methods for Quality is important.
The reader should remember Deming’s statements and the ideas in [29,30].
Unfortunately, many authors do not know scientifically the role (concept) of Confidence Intervals for Hypothesis Testing. The same happens for AIg…
Figure 18. Knowledge versus Ignorance, in Tools and Methods. The same happens for AIg….
Therefore, they do not extract the maximum information from the data in the Process Control.
Control Charts are a means to test the hypothesis about the process states, H0={Process In Control} versus H1={Process Out Of Control}, with stated risk α=0.0027.
We have a big problem about Knowledge: sound Education is needed.
We think that the Figure 18 conveys the fundamental ideas about the need of Theory for devising sound Methods, to be used in real applications in order to avoid the Dis-quality Vicious Circle. The same happens for AIg…
Humbly, given our commitment to Quality and our long-life love for it [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74], we would venture to quote Voltaire:
It is dangerous to be right in matters on which the established men are wrong.” because “Many are destined to reason wrongly; others, not to reason at all; and others, to persecute those who do reason.” So, “The more often a stupidity is repeated, the more it gets the appearance of wisdom.” and “It is difficult to free fools from the chains they revere.”
Let’s hope that Logic and Truth prevail and allow our message to be understood.
The objective of collecting and analysing data is to take the right action. The computations are merely a means to characterise the process behaviour. However, it is important to use the right Control Limits to take the right action about the process states, i.e., In Control versus Out Of Control.
In July–December 2024 we again verified (through several newly downloaded papers, not shown here) that the Pandemic Disease about the (wrong) Control Limits, which are actually the Probability Limits of the PI, is still present…
Will there be any chance that the Pandemic Disease ends? See the Excerpt 10: notice the (ignorant) words “plugging into …”. The only way out is Knowledge… (Figure 18): Deming’s [29,30] Profound Knowledge, Metanoia, Theory. AIg is the opposite of Knowledge.
Excerpt 10. From “Conditional analysis of Phase II exponential chart… an event”, Q. Tech. & Quantitative Mgt, ’19.
We think that we provided the readers with several ideas and methods to be meditated in view of the applications, generating wealth for the companies using them.
The documents [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74] are very important: ASSURE …
There is no “free lunch”: metanoia and study are needed and necessary.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A (Related to [19])

Excerpt A1. Typical statements in the “Garden … [19]” where the authors name LCL and UCL what actually are the Probability Limits L and U. See the Figure 9 and the Excerpt 10.
Many other cases, with the same errors, can be found in the “Garden … [19]” and in its Conclusions, where the authors name LCL and UCL what actually are the Probability Limits L and U.
There is no “free lunch”: metanoia and study are needed and necessary.

References

  1. ResearchGate: M. Ananda Articles.
  2. ResearchGate: S. Chakraborti Articles.
  3. ResearchGate: S. Chowdhury Articles.
  4. ResearchGate: A. McCracken Articles.
  5. ResearchGate: O. Dag Articles.
  6. ResearchGate: W. Kossler Articles.
  7. ResearchGate: N. Kumar Articles.
  8. ResearchGate: S. Lian Articles.
  9. ResearchGate: B. Liu Articles.
  10. ResearchGate: H. Murakami Articles.
  11. ResearchGate: A. Mukherjee Articles.
  12. ResearchGate: A. Rakitzis Articles.
  13. ResearchGate: S. Weerahandi Articles.
  14. ResearchGate: J. Zhang Articles.
  15. Journals: Quality and Reliability Engineering International, Quality Engineering, Reliability Engineering and System Safety, IIE Transactions, Quality Technology and Quantitative Management, Quality & Quantity, J. of Applied Statistics, J. of Quality Technology, J. of Statistical Planning and Inference, J. of Statistical Computation and Simulation, International J. of Production Research, J. of the American Statistical Association, International J. for Quality Research, J. of Statistical Theory and Practice, J. of Statistical Theory and Applications, J. of Applied Probability and Statistics, International J. of Advanced Manufacturing Technology, J. of Systems Science and Information, European J. of Operational Research, J. of Nonparametric Statistics, J. of the Japanese Society of Computational Statistics, Communications in Statistics - Simulation and Computation, Statistics and Probability Letters, Communications in Statistics - Theory and Methods, Computational Statistics, Computational Statistics & Data Analysis.
  16. Chakraborti S. et al., Distribution-free Control Chart for Joint Monitoring of Unknown Location and Scale Parameters, Quality and Reliability Engineering International (QREI), 2013; and ARL-Unbiased Exponential Control Charts With Estimated Parameters: Statistical Design and Implementation, Quality and Reliability Engineering International, May 2025.
  17. N. Kumar et al., Improved Shewhart-type Charts for Monitoring Times Between Events, Journal of Quality Technology, 2016; and Improved Phase I Control Charts for Monitoring Times Between Events, QREI, 2014.
  18. N. Kumar, Statistical design of phase II exponential chart with estimated parameters under the unconditional and conditional perspectives using exact distribution of median run length, Quality Technology & Quantitative Management, 2021. [CrossRef]
  19. Galetto, F., (2024), The garden of flowers (including various of the above authors and Journals), Academia.edu.
  20. Galetto F. (2020-25) Various Letters to the Editors of QREI (Quality and Reliability Engineering International), Academia.edu.
  21. Galetto, F., (2024), JMP and Minitab betray Quality, Academia.edu.
  22. Galetto, F., (2014), Papers, and Documents of FG, Research Gate.
  23. Galetto, F., (2015-2025), Papers, and Documents of FG, Academia.edu.
  24. Galetto, F., Quality of methods for quality is important. European Organisation for Quality Control Conference, Vienna. 1989.
  25. Galetto, F., GIQA, the Golden Integral Quality Approach: from Management of Quality to Quality of Management. Total Quality Management (TQM), Vol. 10, No. 1; 1999. [CrossRef]
  26. Belz, M. Statistical Methods in the Process Industry: McMillan; 1973.
  27. Casella, Berger, Statistical Inference, 2nd edition: Duxbury Advanced Series; 2002.
  28. Cramer, H. Mathematical Methods of Statistics: Princeton University Press; 1961.
  29. Deming W. E., Out of the Crisis, Cambridge University Press; 1986.
  30. Deming W. E., The new economics for industry, government, education: Cambridge University Press; 1997.
  31. Dore, P., Introduzione al Calcolo delle Probabilità e alle sue applicazioni ingegneristiche, Casa Editrice Pàtron, Bologna; 1962.
  32. Juran, J., Quality Control Handbook, 4th, 5th ed.: McGraw-Hill, New York: 1988-98.
  33. Kendall, Stuart, The Advanced Theory of Statistics, Vol. 2, Inference and Relationship: Hafner Publishing Company; 1961.
  34. Meeker, W., Hahn, G., Escobar, L. Statistical Intervals: A Guide for Practitioners and Researchers. John Wiley & Sons. 2017.
  35. Mood, Graybill, Introduction to the Theory of Statistics, 2nd ed.: McGraw Hill; 1963.
  36. Rao, C. R., Linear Statistical Inference and its Applications: Wiley & Sons; 1965.
  37. Rozanov, Y., Processus Aleatoire, Editions MIR: Moscow (translated from Russian); 1975.
  38. Ryan, T. P., Statistical Methods for Quality Improvement: Wiley & Sons; 1989.
  39. Galetto, F., (2025). Sequential Tests and Reliability Integral Theory. Eng OA, 3 (7), 01-37.
  40. Shewhart W. A., Economic Control of Quality of Manufactured Products: D. Van Nostrand Company; 1931.
  41. Shewhart W.A., Statistical Method from the Viewpoint of Quality Control: Graduate School, Washington; 1936.
  42. D. J. Wheeler, Various posts, Online available from Quality Digest.
  43. Galetto, F., Affidabilità Teoria e Metodi di calcolo: CLEUP editore, Padova (Italy); 1981-94.
  44. Galetto, F., Affidabilità Prove di affidabilità: distribuzione incognita, distribuzione esponenziale: CLEUP editore, Padova (Italy); 1982, 85, 94.
  45. Galetto, F., Qualità. Alcuni metodi statistici da Manager: CUSL, Torino (Italy); 1995-99.
  46. Galetto, F., Gestione Manageriale della Affidabilità: CLUT, Torino (Italy); 2010.
  47. Galetto, F., Manutenzione e Affidabilità: CLUT, Torino (Italy); 2015.
  48. Galetto, F., Reliability and Maintenance, Scientific Methods, Practical Approach, Vol-1: www.morebooks.de.; 2016.
  49. Galetto, F., Reliability and Maintenance, Scientific Methods, Practical Approach, Vol-2: www.morebooks.de.; 2016.
  50. Galetto, F., Statistical Process Management, ELIVA press ISBN 9781636482897; 2019.
  51. Galetto F., Affidabilità per la manutenzione, Manutenzione per la disponibilità: tab edizioni, Roma (Italy), ISBN 978-88-92-95-435-9, www.tabedizioni.it; 2022.
  52. Galetto, F., (2015) Hope for the Future: Overcoming the DEEP Ignorance on the CI (Confidence Intervals) and on the DOE (Design of Experiments), Science J. Applied Mathematics and Statistics, Vol. 3, No. 3, pp. 99-123. [CrossRef]
  53. Galetto, F., (2015) Management Versus Science: Peer-Reviewers do not Know the Subject They Have to Analyse, Journal of Investment and Management. Vol. 4, No. 6, pp. 319-329.
  54. Galetto, F., (2015) The first step to Science Innovation: Down to the Basics, Journal of Investment and Management. Vol. 4, No. 6, pp. 319-329.
  55. Galetto F., (2021) Minitab T charts and quality decisions, Journal of Statistics and Management Systems. [CrossRef]
  56. Galetto, F., (2012) Six Sigma: help or hoax for Quality?, 11th Conference on TQM for HEI, Israel.
  57. Galetto, F., (2020) Six Sigma_Hoax against Quality_Professionals Ignorance and MINITAB WRONG T Charts, HAL Archives Ouvert, 2020.
  58. Galetto, F., (2021) Control Charts for TBE and Quality Decisions, Academia.edu.
  59. Galetto F. (2023) Control Charts, Scientific Derivation of Control Limits and Average Run Length, International Journal of Latest Engineering Research and Applications (IJLERA) ISSN: 2455-7137 Volume – 08, Issue – 01, January 2023, PP – 11-45.
  60. Galetto, F., (2006) Quality Education and quality papers, IPSI, Marbella (Spain).
  61. Galetto, F., (2006) Quality Education versus Peer Review, IPSI, Montenegro.
  62. Galetto, F., (2006) Does Peer Review assure Quality of papers and Education? 8th Conference on TQM for HEI, Paisley (Scotland).
  63. Galetto, F., (1998), Quality Education on Quality for Future Managers, 1st Conference on TQM for HEI (Higher Education Institutions), Toulon (France).
  64. Galetto, F., (2000), Quality Education for Professors teaching Quality to Future Managers, 3rd Conference on TQM for HEI, Derby (UK).
  65. Galetto, F., (2001), Looking for Quality in "quality books", 4th Conference on TQM for HEI, Mons (Belgium).
  66. Galetto, F., (2001), Quality and Control Carts: Managerial assessment during Product Development and Production Process, AT&T (Society of Automotive Engineers), Barcelona (Spain).
  67. Galetto, F., (2001), Quality QFD and control charts, Conference ATA, Florence (Italy).
  68. Galetto, F., (2002), Business excellence, Quality and Control Charts, 7th TQM Conference, Verona (Italy).
  69. Galetto, F., (2002), Fuzzy Logic and Control Charts, 3rd ICME Conference, Ischia (Italy).
  70. Galetto, F., (2002), Analysis of "new" control charts for Quality assessment, 5th Conference on TQM for HEI, Lisbon (Portugal).
  71. Galetto, F., (2009), The Pentalogy, VIPSI, Belgrade (Serbia).
  72. Galetto, F., (2010), The Pentalogy Beyond, 9th Conference on TQM for HEI, Verona (Italy).
  73. Galetto, F., (2021) ASSURE: Adopting Statistical Significance for Understanding Research and Engineering. Journal of Engineering and Applied Sciences Technology. SRC/JEAST-128. [CrossRef]
  74. Galetto, F., (2024), News on Control Charts for JMP, Academia.edu.
Excerpt 1. Statements from papers [1,2,3,4,5,6,7,8,9,10,11,12,13,14].
Excerpt 2. From the paper “Misguided Statistical Process Monitoring Approaches”.
Figure 1. The pictorial framework of a Statistical Hypothesis (based on a Probability Model).
Excerpt 4. From a paper in the “Garden… [19]”. Notice that one of the authors wrote several papers….
Figure 5. A “4 units Stand-by system” and its states.
Figure 6. Example of Reliability $R_0(\lambda|t_0)$ of a “4 units Stand-by system” with MTTF=θ=123 days; $t_0$ is the total time on test of the 4 units. To compute the CI (with CL=0.8), find the abscissas of the intersections at $R_0(\lambda_L|t_0)=0.9$ and $R_0(\lambda_U|t_0)=0.1$….
Figure 11. F. Galetto Control Chart of the data from “Improved Phase… for Monitoring TBE” [17]; vertical axis logarithmic; UCL is >100 (RIT used). Process OOC.
Excerpt 8. From QREI 2025, named by the authors “case K” when the “rate” θ0 is known.
Excerpt 9. From QREI 2025, named “case U” when the “rate” θ0 is unknown.
Table 1. Statistical Hypotheses and risks.
Table 2. Some probability models for data analysis.

| Name | $F(x\mid\theta)$ | Parameters | Symbol |
|------|------------------|------------|--------|
| Exponential | $1-\exp(-x/\theta)$ | $\theta$ | $E(x\mid\theta)$ |
| Weibull | $1-\exp[-(x/\eta)^{\beta}]$ | $\beta$, $\eta$ | $W(x\mid\beta,\eta)$ |
| Inverted Weibull | $\exp[-(\eta/x)^{\beta}]$ | $\beta$, $\eta$ | $IW(x\mid\beta,\eta)$ |
| Maxwell | $\sqrt{2/\pi}\,\sigma^{-3}\int_0^x t^{2}e^{-t^{2}/(2\sigma^{2})}\,dt$ | $\sigma^{2}$ | $MW(x\mid\sigma)$ |
| Normal | $\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{x}e^{-(t-\mu)^{2}/(2\sigma^{2})}\,dt$ | $\mu$, $\sigma^{2}$ | $N(x\mid\mu,\sigma^{2})$ |
| Distribution-free | no form of $F(x\mid\theta)$ | — | D-free |
Table 3. Inside Diameter Measurements (mm) for Automobile Piston Rings (Table 6.3 of the Montgomery book, Introduction to Statistical Quality Control, 8th Ed.).