Preprint (this version is not peer-reviewed)

Identifying Multiple Randomness in Random Experiments: Definition and Examples

Submitted: 23 October 2025. Posted: 24 October 2025.


Abstract
Counting processes with multiple randomness, as defined in this article, differ essentially from known stochastic processes such as doubly stochastic Poisson processes and non-homogeneous Poisson processes. By introducing general results concerning multiple randomness in a random experiment performed to count the number of "events", and by providing specific examples that illustrate properties of counting processes with multiple randomness, this study aims to demonstrate the existence of phenomena modeled by such processes. The specific examples are popular models taken from queuing theory, which may help the reader understand the general results. Phenomena with multiple randomness, similar to the queuing phenomena modeled here by counting processes with two-fold randomness but currently modeled by other stochastic processes, may appear in various experiments across the sciences and engineering disciplines without having been identified as such. It is reasonable to expect more phenomena with multiple randomness to be identified, which may help scientists and engineers explain puzzling phenomena and solve open problems in the real world.

1. Introduction

Many scientists and engineers use counting processes to study interesting random phenomena in the real world. Let N(t) be a counting process, which represents the total number of "events" that have occurred up to time t ≥ 0. When performing an experiment to count the number of events, we cannot determine the outcome in advance; counting the number of events is a random experiment, and its sample space Ω can take different forms. In this study,
Ω = { (τ_n)_{n≥1} : τ_{n+1} > τ_n },
which consists of all possible sequences of instants at which the events occur. In other words, the sequence (τ_n)_{n≥1} is a sample-path of a point process. For the purpose of this study, we must explicitly distinguish between sample-paths and stochastic sequences. Because elements of Ω are labeled by n, to avoid confusion, random instants or variables defined on the sample space will be labeled by j. Accordingly, if the jth event occurs at instant τ_j, the inter-event time τ_{j+1} − τ_j is a random variable. If τ_{j+1} − τ_j is defined on the whole sample space for every j, i.e., if we can assign a value to τ_{j+1} − τ_j at each ω ∈ Ω, then N(t) is a usual counting process, and (τ_n)_{n≥1} is a sample-path of a stationary point process. For a usual counting process, such as a renewal process, we can use a marginal distribution to characterize all inter-event times. Usual counting processes and other stochastic processes introduced in the standard texts, such as [1], are known processes. There exist many other known processes, such as doubly stochastic Poisson processes, mixed processes, non-homogeneous renewal processes (see, for example, the monograph [2]), and processes in computer applications of queuing theory, for example, the process used to model correlated inter-arrival and service times [3]. Counting processes with multiple randomness, as defined below, differ essentially from all known processes.
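As a concrete computational picture, a single sample point ω ∈ Ω can be represented as a strictly increasing sequence of event instants, from which the inter-event times are recovered by differencing. The short sketch below is purely illustrative (an exponential gap distribution is a hypothetical choice made only for concreteness; the definition below allows several gap distributions):

```python
import random

def sample_path(n, rate=1.0, seed=0):
    """One sample point omega: event instants tau_1 < tau_2 < ... < tau_n.
    Gaps are drawn i.i.d. exponential purely for illustration."""
    rng = random.Random(seed)
    taus, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(rate)  # a strictly positive inter-event gap
        taus.append(t)
    return taus

taus = sample_path(10)
gaps = [b - a for a, b in zip(taus, taus[1:])]  # inter-event times
assert all(b > a for a, b in zip(taus, taus[1:]))  # tau_{n+1} > tau_n
```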
Let A represent the σ-algebra of subsets of Ω. The probability measure P is defined on the measurable space (Ω, A). A random experiment performed to count the number of events can be modeled by the probability space (Ω, A, P). Denote by N the set of all positive integers. Let m ≥ 1 be the total number of distributions for inter-event times. Each inter-event time can follow one and only one of these distributions. If m = 1 (i.e., all inter-event times follow a common distribution), then we can use a marginal distribution to characterize the inter-event times τ_{j+1} − τ_j, and N(t) is a usual counting process. For an arbitrarily given j ∈ N, write
Ω_i(j) = { ω ∈ Ω : (τ_{j+1} − τ_j)(ω) = T_i(ω) },  i ∈ {1, 2, …, m}.
We require
P[Ω_i(j)] ≠ 0,  ∑_{i=1}^{m} P[Ω_i(j)] = 1,
and
Ω = Ω_1(j) ∪ Ω_2(j) ∪ ⋯ ∪ Ω_m(j).
Denote by F_i the distribution of T_i in the expression of Ω_i(j). For 1 ≤ k ≤ m and 1 ≤ k′ ≤ m, if k ≠ k′, then F_k ≠ F_{k′}; thus Ω_k(j) ∩ Ω_{k′}(j) = ∅, i.e., {Ω_i(j), i = 1, 2, …, m} is a partition of Ω. Let ω ∈ Ω be an arbitrarily given sample point. For i ∈ {1, 2, …, m}, write
M_i(ω) = { j ∈ N : (τ_{j+1} − τ_j)(ω) = T_i(ω) }.
We require
N = M_1(ω) ∪ M_2(ω) ∪ ⋯ ∪ M_m(ω).
As we can readily see, {M_i(ω), i = 1, 2, …, m} is a random partition of N. The partition is random because M_i(ω) depends on the sample point. At the given ω, if k ≠ k′, then M_k(ω) ∩ M_{k′}(ω) = ∅.
Definition 1.
If Ω possesses the above required properties, the stochastic process N(t), t ≥ 0, is said to be a counting process with multiple (or m-fold) randomness for m > 1. The corresponding inter-event times constitute a stochastic sequence (τ_{j+1} − τ_j)_{j≥1} with m-fold randomness.
According to the above definition, for a counting process with m-fold randomness, the inter-event time τ_{j+1} − τ_j is not defined on the whole sample space for every j, nor characterized by a marginal distribution. The distribution of τ_{j+1} − τ_j is not determined by a joint distribution of random variables defined on Ω; it is determined uniquely by the random experiment in question. Furthermore, the sequence (τ_{j+1} − τ_j)_{j≥1} is not stationary, and (τ_n)_{n≥1} is a sample-path of a non-stationary point process.
The counting processes with multiple randomness, as defined above, have not been identified before, although they appear naturally in practical applications, such as queuing theory applied to different fields (see [3,4,5,6]). Unfortunately, counting processes with multiple randomness have been mistaken for their usual counterparts or for other known processes. The present study aims to demonstrate the existence of phenomena modeled by counting processes with multiple randomness. Providing examples is the best way of demonstrating their existence and illustrating their properties; thus, we shall consider several well-known queuing models. By identifying multiple randomness in these queuing models, an inconsistency in queuing theory, concerning departure processes of stable queues in steady state and product-form equilibrium distributions of queuing networks, can be resolved. Resolving the inconsistency can help us find interesting results in practical applications of queuing models. To see this, denote by K_t a time interval of infinitesimal length. For a stable M/M/1 queue in steady state, the following facts are well known.
  • There are two kinds of inter-departure times, T_1 and T_2, determined by whether the server is idle or busy immediately after a customer departs from the system.
  • Each inter-departure time corresponds to one and only one departure.
Consider the following events.
E = { a departure occurs corresponding to T_1 ∈ K_t or T_2 ∈ K_t },
E_1 = { the server is idle immediately after a departure },
and
E_2 = { the server is busy immediately after a departure }.
By applying a technique called splitting [2], we have
P(E_1) P(T_1 ∈ K_t) + P(E_2) P(T_2 ∈ K_t) = P(E).
To the knowledge of the author, the above result has not been reported in the existing literature. All existing theories need to be developed further to yield new results in practical applications, and queuing theory is no exception. The purpose of identifying the multiple randomness observed in queuing phenomena, and of resolving the inconsistency, is to develop queuing theory; without resolving the inconsistency, queuing theory cannot be developed further.
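The splitting identity above can be checked numerically. The sketch below is an illustration only, not part of the formal development: it simulates a FIFO M/M/1 queue (hypothetical rates λ = 1 and μ = 2, so ρ = 1/2) with the standard single-server recursion D_j = max(D_{j−1}, A_j) + S_j, classifies each inter-departure time by whether the server is idle or busy immediately after the departure, and verifies the identity with a small but finite δ standing in for the infinitesimal interval K_t:

```python
import random

def mm1_interdepartures(lam, mu, n, seed=1, warmup=1000):
    """FIFO M/M/1 via D_j = max(D_{j-1}, A_j) + S_j; returns inter-departure
    times split into the two kinds: T1 (server idle just after a departure)
    and T2 (server busy just after a departure)."""
    rng = random.Random(seed)
    A, t = [], 0.0
    for _ in range(n + warmup + 1):
        t += rng.expovariate(lam)            # Poisson arrivals, rate lam
        A.append(t)
    D, d = [], 0.0
    for a in A:
        d = max(d, a) + rng.expovariate(mu)  # exponential service, rate mu
        D.append(d)
    t1, t2 = [], []
    for j in range(warmup, warmup + n - 1):
        gap = D[j + 1] - D[j]
        if A[j + 1] > D[j]:                  # queue empty after departure j
            t1.append(gap)                   # gap = I_j + S_{j+1}
        else:
            t2.append(gap)                   # gap = S_{j+1}
    return t1, t2

lam, mu, delta = 1.0, 2.0, 0.05
t1, t2 = mm1_interdepartures(lam, mu, 100_000)
n = len(t1) + len(t2)
pE1, pE2 = len(t1) / n, len(t2) / n          # estimate P(E1), P(E2)
pT1 = sum(x <= delta for x in t1) / len(t1)  # P(T1 in K_t)
pT2 = sum(x <= delta for x in t2) / len(t2)  # P(T2 in K_t)
pE = sum(x <= delta for x in t1 + t2) / n    # P(E)
assert abs(pE1 * pT1 + pE2 * pT2 - pE) < 1e-9  # splitting identity
```

The identity holds exactly on the empirical data (it is the law of total probability applied to the two exclusive categories); the simulation also shows pE1 ≈ 1 − ρ.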
In Section 2, we prove some general results given by a lemma and its corollary concerning the departure process from a stable, general single-server queue in steady state: the instants of departures from this queue constitute a counting process with two-fold randomness (i.e., m = 2 ). Based on the lemma and its corollary, we revisit the departure process of a stable M/M/1 queue in steady state in Section 3 and scrutinize stability of Jackson networks of queues in Section 4. The article is concluded in Section 5.

2. Two-Fold Randomness in Counting Processes: General Results

Consider departures from a stable, general single-server queue (i.e., a GI/GI/1 system) in steady state. This queuing model has an infinite waiting room and a “work-conserving” server with a finite service capacity defined by the maximum rate at which the server can perform work. By definition, the work-conserving server will not stand idle when there is unfinished work in the system. Customers arrive at this system according to a renewal process and are served one at a time. Times between successive arrivals are independent and identically distributed (i.i.d.) random variables with a finite mathematical expectation, and so are service times of customers. Inter-arrival times and service times are also mutually independent. For such a queue, a customer leaves the system if and only if the customer has been served. The mean inter-arrival time is greater than the mean service time. Hence the queue is stable and can reach its steady state as time approaches infinity. For the purpose of this study, it is not necessary to assume a specific queuing discipline.
According to the literature, times between consecutive departures from a stable queue in steady state follow a marginal distribution [7]. As we can readily see in this section, the existence of such a marginal distribution is only an unjustified assumption taken for granted without verification; for systems modeled by a stable GI/GI/1 queue in steady state, the stochastic sequence formed by the inter-departure times of customers consecutively leaving the queue is not stationary, or equivalently, the corresponding point process is not stationary.
Usually, properties of the arrival process and the stability of the GI/GI/1 queue are established by using the strong law of large numbers. Results of this kind are true for every ω ∈ Ω, with the exception at most of a set N ∈ A with P(N) = 0; such results hold with probability one, or almost surely. Removing the sample points in N changes nothing once the queue is already in steady state. We can regard Ω as the sample space of an ideal random experiment performed after the queue has reached its steady state.
Let c_j represent the jth customer served. Using the notation introduced in Section 1, the event "c_j departs from the queue" occurs at τ_j. Denote by Q_j the queue size immediately after the jth departure. By definition, "queue size" refers to the total number of customers in the queue (including the customer in service). Let T_1 and T_2 be the times between the jth and the (j+1)th departures when Q_j = 0 and Q_j > 0, respectively. Denote by I_j the idle time spent by the server waiting for the arrival of a customer, and by S_{j+1} the service time of c_{j+1}. We have
T_1 = I_j + S_{j+1}    (2.1)
and
T_2 = S_{j+1}.    (2.2)
All arrived customers need to be served; their service times are not zero.
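The two cases above can be checked mechanically on a simulated path. The sketch below is an illustration only, using a FIFO M/M/1 instance of the model with hypothetical rates λ = 1 and μ = 2; it records the arrival, service, and departure instants explicitly and confirms that every inter-departure time equals I_j + S_{j+1} exactly when the system empties after a departure, and S_{j+1} otherwise:

```python
import random

rng = random.Random(7)
lam, mu, n = 1.0, 2.0, 50_000

# FIFO single-server recursion: D_j = max(D_{j-1}, A_j) + S_j
A, t = [], 0.0
for _ in range(n + 1):
    t += rng.expovariate(lam)   # arrival instants
    A.append(t)
S = [rng.expovariate(mu) for _ in range(n + 1)]  # service times
D, d = [], 0.0
for a, s in zip(A, S):
    d = max(d, a) + s           # departure instants
    D.append(d)

for j in range(n):
    gap = D[j + 1] - D[j]       # inter-departure time
    if A[j + 1] > D[j]:         # Q_j = 0: the server goes idle
        idle = A[j + 1] - D[j]  # I_j, the wait for c_{j+1}
        assert abs(gap - (idle + S[j + 1])) < 1e-9  # T1 = I_j + S_{j+1}
    else:                       # Q_j > 0: the server stays busy
        assert abs(gap - S[j + 1]) < 1e-9           # T2 = S_{j+1}
```

The check is exact (up to floating-point rounding) because both cases follow algebraically from the recursion, mirroring the chronological order of events.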
By definition, a random variable is a measurable real-valued function; its domain is the whole sample space or a proper subset of the sample space. To define a random variable U on the whole sample space, we must assign a value to U at each ω ∈ Ω; to define a random variable on a proper subset of Ω, we must assign a value to this random variable at each sample point in the subset. Denote by ℬ the Borel σ-algebra of subsets of the real line, and let B be an arbitrary element of ℬ. For a random variable defined on the whole sample space, such as U, its (marginal) distribution is defined by
P_U(B) = P(U ∈ B).
If it is necessary to emphasize the connection between U and the measure P defined on (Ω, A), we can use the right-hand side of the above equation to express the distribution of U directly. Similarly, components of a random vector, or terms of a stochastic sequence, are all random variables defined on Ω. By definition, components of a random vector (or terms of a stochastic sequence) must take their values at the same ω ∈ Ω. If a random vector has been defined, then the joint distribution of its components is fixed, and the marginal distribution of each component is determined by the joint distribution. Based on a given random vector, we can construct a random variable on a proper subset Θ of Ω so long as P(Θ) > 0. For example, let (U, V) be a random vector. The joint distribution is P_{U,V}, which determines P_U and P_V, the marginal distributions of U and V. Based on this random vector, we can construct a random variable W on Θ such that W(ω) = V(ω) for ω ∈ Θ; some values taken by U specify Θ. Thus, W follows a conditional distribution P_W determined by P_{U,V}.
So far, we have mentioned two types of random variables: U and V are defined on the whole sample space, but W is defined on a proper subset of the sample space and follows a conditional distribution. There also exist random variables different from those mentioned above. Random variables of this type are not components of a random vector, and their distributions are not determined by a joint distribution. Times between successive departures from the GI/GI/1 queue are random variables of this type. Proper subsets of Ω are their domains. A chronological order of events experienced by customers determines their distributions and properties. The chronological order is not determined by properties of the queuing model with assumed properties such as stationarity; it is determined by physical systems modeled by the queue. The general results given in this section are grounded on this chronological order. To avoid repeatedly saying “the chronological order of events experienced by customers”, we shall define it formally and refer to it simply as “the chronological order”.
Definition 2.
For a work-conserving queuing model with an infinite waiting room, events experienced by each customer occur naturally in the following chronological order:
  • First, a customer arrives at the queue.
  • Upon arrival,
    the customer receives service immediately if the server is idle;
    otherwise the customer has to wait in line.
  • Finally, after being served, the customer departs from the system.
Consequently, for an arbitrarily given j, if the server becomes idle immediately after c_j leaves, the time between the departures of c_j and c_{j+1} is the sum of an idle time and a service time, as shown by Equation (2.1); otherwise the inter-departure time is a service time, as shown by Equation (2.2).
Lemma 1.
If a stable GI/GI/1 queue is in steady state, and if the server has a finite service capacity, then for an arbitrarily given j ∈ N,
 (a) 
the inter-departure time τ_{j+1} − τ_j cannot be expressed by a fixed random variable defined on Ω or characterized by a marginal (i.e., unconditional) distribution;
 (b) 
the expression (Q_j, τ_{j+1} − τ_j) does not represent a random vector, and τ_{j+1} − τ_j has no distributions conditional on values taken by Q_j.
Corollary 1.
With two-fold randomness, the inter-departure time sequence (τ_{j+1} − τ_j)_{j≥1} and the corresponding point process are not stationary.
To understand Lemma 1 and its corollary, the reader may consider how to answer the following question: if τ_{j+1} − τ_j could be expressed by a fixed random variable Z_j defined on Ω, what would be the value of Z_j at ω corresponding to Q_j(ω)? At each ω ∈ Ω, the value of Q_j equals either zero or a positive integer. Whatever value Q_j takes at ω, it is impossible to assign a value to Z_j at ω corresponding to Q_j(ω). In contrast to τ_{j+1} − τ_j, the service times of customers, the queue size Q_j, and the times between consecutive arrivals are all random variables defined on Ω. There is a subtlety here, however. When playing the role of an inter-departure time T_2 defined on Ω_2(j) (see below), the service time S_{j+1} is no longer a random variable defined on Ω.
Proof. 
Because of the chronological order, the corresponding sample space Ω of the general queuing model possesses the properties required by a counting process with two-fold randomness. To see this, write
M_1(ω) = { j ∈ N : Q_j(ω) = 0 },
M_2(ω) = { j ∈ N : Q_j(ω) > 0 },
Ω_1(j) = { ω ∈ Ω : Q_j(ω) = 0 },
and
Ω_2(j) = { ω ∈ Ω : Q_j(ω) > 0 }.
Clearly, at an arbitrarily given ω ∈ Ω, {M_1(ω), M_2(ω)} is a random partition of N. For an arbitrarily given j ≥ 1, we can verify Ω_1(j) ∩ Ω_2(j) = ∅: if Ω_1(j) ∩ Ω_2(j) ≠ ∅, then for the given j there would exist a sample point ω ∈ Ω_1(j) ∩ Ω_2(j); immediately after c_j departs from the system, the server would be both idle and busy simultaneously, which is absurd. Thus {Ω_1(j), Ω_2(j)} must be a partition of the sample space, and we have
Ω_1(j) ∪ Ω_2(j) = Ω.
Denote by ρ the utilization factor. Because the queue is already in steady state, P[Ω_1(j)] = 1 − ρ > 0, P[Ω_2(j)] = ρ > 0, and
P[Ω_1(j)] + P[Ω_2(j)] = 1.
Consequently, at an arbitrarily given ω ∈ Ω, either ω ∈ Ω_1(j) and
Q_j(ω) = 0 ⇒ τ_{j+1}(ω) − τ_j(ω) = T_1(ω),    (2.3)
or ω ∈ Ω_2(j) and
Q_j(ω) > 0 ⇒ τ_{j+1}(ω) − τ_j(ω) = T_2(ω).    (2.4)
As shown by Equation (2.1) and Equation (2.2), both T_1 and T_2 are well-defined random variables; their distributions, P_{T_1} and P_{T_2}, are determined by the chronological order implied by Equation (2.3) and Equation (2.4). For an arbitrarily fixed B ∈ ℬ,
P_{T_1}(B) = P[T_1 ∈ B | Ω_1(j)],  P_{T_2}(B) = P[T_2 ∈ B | Ω_2(j)].
Either Equation (2.3) or Equation (2.4) must hold exclusively at the given sample point ω ∈ Ω, and for the given j, the inter-departure time τ_{j+1} − τ_j can only be described by T_1 or by T_2 exclusively. To see this, suppose to the contrary that a fixed random variable Z_j = τ_{j+1} − τ_j is defined on Ω, which allows Z_j to be characterized by a marginal distribution P_{Z_j}. Consequently,
{ Ω_1(j), T_1 ∈ B } = { Ω_1(j), Z_j ∈ B }    (2.5)
and
{ Ω_2(j), T_2 ∈ B } = { Ω_2(j), Z_j ∈ B }.    (2.6)
Because {Ω_1(j)} is independent of events concerning future departures after c_j leaves, such as {T_1 ∈ B} and {Z_j ∈ B}, Equation (2.5) implies
P[Ω_1(j)] P_{T_1}(B) = P[Ω_1(j)] P_{Z_j}(B).
Similarly, {Ω_2(j)} is independent of {T_2 ∈ B} and {Z_j ∈ B}; thus Equation (2.6) implies
P[Ω_2(j)] P_{T_2}(B) = P[Ω_2(j)] P_{Z_j}(B).
Because P[Ω_1(j)] = 1 − ρ > 0 and P[Ω_2(j)] = ρ > 0 for all j, if Z_j = τ_{j+1} − τ_j is fixed and defined on Ω, then
P_{T_1}(B) = P_{T_2}(B) = P_{Z_j}(B).
This is absurd. The absurdity shows {Z_j ∈ B} ∉ A, which prevents us from describing τ_{j+1} − τ_j as a fixed random variable defined on Ω or from using a marginal distribution to characterize τ_{j+1} − τ_j. This proves (a).
By definition, each component of a random vector (or each term of a stochastic sequence) must be a random variable defined on Ω, and all the components (or terms) must take values at the same ω ∈ Ω. Although Q_j is a random variable defined on Ω, the inter-departure time τ_{j+1} − τ_j and the queue size Q_j cannot form a random vector, because τ_{j+1} − τ_j is not defined on Ω. Were (Q_j, Z_j) a random vector, its components would take Q_j(ω) and Z_j(ω) as their values at the same sample point ω, and immediately after c_j departs from the system, Q_j(ω) = 0 and Q_j(ω) > 0 would hold simultaneously, which is impossible because Ω_1(j) ∩ Ω_2(j) = ∅, as required by the chronological order. Therefore, τ_{j+1} − τ_j is not eligible to have distributions conditional on values taken by Q_j. This proves (b), and the corollary of the lemma follows immediately. □
Standard tools given in the literature, such as Palm distributions, stationary marked point processes, and ergodic-theoretic frameworks [2], are applicable to stationary processes. Because counting processes with multiple randomness are not stationary, these standard tools are not used to prove the lemma and its corollary.
Lemma 1 and its corollary are grounded on the chronological order. The conditions required by Lemma 1 exclude two conceivable scenarios corresponding to two extreme situations. One is the worst situation, and the other is an ideal situation. Both situations would allow τ_{j+1} − τ_j to be a fixed random variable defined on Ω.
The worst situation is a queue with P[Ω_1(j)] = 0 for all j, i.e., an unstable queue. Because Ω_1(j) ∪ Ω_2(j) = Ω, and because P[Ω_1(j)] = 0,
P[Ω_1(j) ∪ Ω_2(j)] = P[Ω_2(j)] = 1.
Thus, with probability one, when each customer departs from the system, the queue is not empty, which implies τ_{j+1} − τ_j = S_{j+1} for all j. In other words, the idle time I_j vanishes identically on Ω, making the server busy at all times almost surely. Consequently, the queue will never become empty, and the queue size will approach infinity with probability one.
The ideal situation is a queue with P[Ω_2(j)] = 0 for all j, i.e., the queue is always empty almost surely, because P[Ω_2(j)] = 0 for all j implies
P[Ω_1(j) ∪ Ω_2(j)] = P[Ω_1(j)] = 1.
In other words, when each customer departs from the system, the server becomes idle almost surely, which implies τ_{j+1} − τ_j = I_j for every j. That is, the server must have an infinite service capacity; this is the only way to make the service times of customers vanish identically. Consequently, the idle time is identical to the inter-arrival time with probability one.
If a stable queue is in steady state, then P[Ω_1(j)] > 0 and P[Ω_2(j)] > 0 for each j. As implied by the chronological order, the sequence (τ_{j+1} − τ_j)_{j≥1} is not stationary and differs essentially from a sequence of random variables defined on the whole sample space Ω. To illustrate the difference, consider an arbitrarily given ω ∈ Ω. At this sample point,
(i)
if j ∈ M_1(ω) (i.e., immediately after c_j departs, there is no unfinished work in the system), then τ_{j+1} − τ_j equals T_1(ω); otherwise the server is busy, and τ_{j+1} − τ_j takes T_2(ω) as its value;
(ii)
thus, (τ_{j+1} − τ_j)_{j≥1} can be divided randomly into two subsequences as follows:
[ τ_{j+1}(ω) − τ_j(ω) ]_{j≥1} = [ T_1(ω) ]_{j ∈ M_1(ω)} ∪ [ T_2(ω) ]_{j ∈ M_2(ω)}.
Having a finite service capacity, the server is either busy or idle with a positive probability; thus realistic inter-departure times always fall into two categories. In steady state, the probability for the server to be busy or idle is fixed, and the values of inter-departure times are given by either T_1(ω) or T_2(ω) exclusively, according to whether ω ∈ Ω_1(j) (i.e., the server is idle) or ω ∈ Ω_2(j) (i.e., the server is busy) immediately after c_j departs; see Equation (2.1) and Equation (2.2).
In the literature, the distribution of T_1 or T_2 is interpreted as the distribution of τ_{j+1} − τ_j conditional on values taken by Q_j, and a marginal distribution of τ_{j+1} − τ_j could be constructed from the distributions of T_1 and T_2. However, the chronological order prevents us from using a marginal distribution to characterize τ_{j+1} − τ_j. The general results presented in this section may be better understood through specific examples. A stable M/M/1 queue in steady state serves as such an example and is considered in the next section. More examples concerning queues in tandem and networks of queues are considered in Section 4.

3. Two-Fold Randomness in Counting Processes: Specific Examples

A stable M/M/1 queue in steady state is the simplest instance of the GI/GI/1 queue. Customers arrive at this system according to a Poisson process with arrival rate λ. Service times are mutually independent and follow a common exponential distribution. According to Burke’s theorem [8], times between successive departures from this queue in steady state are mutually independent and follow a marginal distribution identical to the distribution of inter-arrival times. In this section, we scrutinize Burke’s original proof given in [8] (Section 3.1), Reich’s proof based on time reversibility given in [9] (Section 3.2), and experimental results obtained by simulation concerning the departure process from the M/M/1 queue (Section 3.3). We find that Burke’s theorem holds only if the server has an infinite service capacity.

3.1. Burke’s Proof Based on Differential Equations

Burke considered “the length of an arbitrary inter-departure interval” in his proof [8]. Expressed in the notation introduced in Section 1, what Burke called “an arbitrary inter-departure interval” is (τ_j, τ_{j+1}) for an arbitrary j. Denote by Q(t) the queue size at an instant t after τ_j, where τ_j is taken to be the instant of the last previous departure. Burke’s proof begins by calculating the probability of the event {Q(t) = k, τ_{j+1} − τ_j > t}. Because τ_{j+1} − τ_j is compared with t, Burke chose τ_j as the origin on the time axis; namely, the jth departure occurs at t = 0. During the interval (τ_j, τ_{j+1}), the number of arriving customers is at least one. Denote by t_1 the first arrival instant in this interval. Clearly, t_1 satisfies 0 < t_1 < τ_{j+1}.
Burke’s proof is based on the following assumption: the inter-departure time τ_{j+1} − τ_j is a fixed random variable Z_j defined on the whole sample space and can take values together with Q(t) at every ω ∈ Ω. The assumption is unjustified and problematic (see Section 2). But Burke did not recognize the problem; he took the assumption for granted and did not mention it in his paper. Under this assumption, Burke treated [Q(t), Z_j] as a random vector and derived a set of differential equations governing the probability of {Q(t) = k, Z_j > t}. Solving the equations yields the solution [8]
P[Q(t) = k, Z_j > t] = P[Q(t) = k] P(Z_j > t),  k = 0, 1, …,
which further implies
∑_{k=0}^{∞} P[Q(t) = k, Z_j > t] = P(Z_j > t) = e^{−λt}.
As we shall see below, the assumption underlying Burke’s proof is responsible for the widely accepted interpretation of the solution:
  • for each k ≥ 0, {Q(t) = k} and {Z_j > t} are independent events;
  • the event {Q(t) = k} has an equilibrium probability P[Q(t) = k], which actually does not depend on t;
  • the distribution of inter-departure times is identical to the distribution of inter-arrival times:
    P(Z_j > t) = e^{−λt}.
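For reference, the solution is easy to reproduce numerically; the sketch below (hypothetical rates λ = 1 and μ = 2) simulates a stable FIFO M/M/1 queue and compares the empirical survival function of the pooled inter-departure times with e^{−λt}, reproducing the kind of simulation results revisited in Section 3.3:

```python
import math
import random

rng = random.Random(3)
lam, mu, n, warm = 1.0, 2.0, 200_000, 1_000

A, t = [], 0.0
for _ in range(n + warm):
    t += rng.expovariate(lam)            # Poisson arrivals, rate lam
    A.append(t)
D, d = [], 0.0
for a in A:
    d = max(d, a) + rng.expovariate(mu)  # FIFO departure recursion
    D.append(d)
gaps = [D[j + 1] - D[j] for j in range(warm, n + warm - 1)]

# pooled inter-departure times: empirical survival vs exp(-lam * t)
for x in (0.5, 1.0, 2.0):
    emp = sum(g > x for g in gaps) / len(gaps)
    assert abs(emp - math.exp(-lam * x)) < 0.01
```

The pooled (marginal) agreement with e^{−λt} is exactly what the following discussion scrutinizes: the agreement concerns the pooled data, not the two categories of inter-departure times taken separately.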
Needless to say, the differential equations established by Burke and their solution are correct results in mathematics. Nevertheless, because of the assumption underlying Burke’s proof, the interpretation of the mathematical results contradicts the chronological order and makes little sense in practice if the server does not have an infinite service capacity. To see this, let us consider Q(t) = 0 first, i.e., the server is idle at time t > 0. If Q(t) = 0, then 0 < t < t_1, and
{ Q(t) = 0, Z_j > t } ⊆ { Q(0) = 0, Z_j > t }.
In other words, if the event {Q(t) = 0, Z_j > t} occurs, the event {Q(0) = 0, Z_j > t} also occurs. When the queue is empty at time t > 0, the queue must be empty throughout the closed interval [0, t], because c_j has departed and the next customer has not arrived, as required by 0 < t < t_1. As indicated by the chronological order, whenever Q(t) = 0, we see
Z_j = I_j + S_{j+1},
which does not follow the distribution of inter-arrival times unless the server has an infinite service capacity making all service times vanish identically. Suppose the server has a finite service capacity and Q(t) > 0. Clearly, t now satisfies t_1 ≤ t < τ_{j+1}. When Q(t) > 0, the chronological order requires
Z_j = S_{j+1},
which is not distributed as an inter-arrival time either. Therefore, it is the chronological order that does not allow us to express an inter-departure time as a fixed random variable defined on Ω or to use a marginal distribution to characterize it. For each j, there are only two kinds of realistic inter-departure times, T_1 and T_2, which are defined on Ω_1(j) and Ω_2(j), respectively (see Equation (2.1) and Equation (2.2)). Such inter-departure times are terms of a sequence with two-fold randomness; their distributions and properties are determined by the chronological order rather than by joint distributions of random vectors defined on the whole sample space Ω. Because Z_j is defined neither on Ω nor on a subset of Ω, we cannot use it to describe realistic inter-departure times.

3.2. Reich’s Proof Based on Time-Reversible Markov Process

As a birth-death process in steady state (i.e., in statistical equilibrium), Q(t) is a time-reversible Markov process. If the instants on the time axis are ordered in the reversed direction, Q(t) has a time-reversed counterpart Q*(t), which is also a birth-death process. If Q(t) is used to model the size of a stable M/M/1 queue in steady state at time t, then “death” and “birth” represent “departure” and “arrival”, respectively. Consequently, intervals between consecutive deaths are inter-departure intervals. Reich provided a different proof of Burke’s theorem based on Q*(t) [9]; see also [1,10].
The proof based on Q*(t) contradicts the chronological order. Although the arrival instants when looking forwards in time constitute a Poisson process, the instants τ_j at which Q*(t) increases by 1 cannot form a Poisson process, because
| τ_j − τ_{j+1} | = τ_{j+1} − τ_j
and because τ_{j+1} − τ_j is not an exponentially distributed random variable defined on the whole sample space. If the birth-death process Q(t) serves to model the queue size, then times between consecutive deaths and times between consecutive births do not necessarily follow the same distribution in either direction of time.
Let m ≥ 0. During an open interval between two consecutive deaths (i.e., departures from the M/M/1 queue), the state of Q(t) may change m times. A change of Q(t) during (τ_j, τ_{j+1}) is due to an arrival in (τ_j, τ_{j+1}). If m = 0, then Q(t) remains unchanged in (τ_j, τ_{j+1}), which implies Q(t) > 0 for τ_j ≤ t < τ_{j+1}, because at least c_{j+1} is still in the system, and because no customer arrives in the interval:
Q(t) = k + 2,  t < τ_j,
     = k + 1,  τ_j ≤ t < τ_{j+1},
     = k,      t = τ_{j+1}.
The inter-departure time τ_{j+1} − τ_j = S_{j+1}, because Q(t) decreases by 1 if and only if one customer has been served.
If m ≥ 1, then Q(t) changes at least once during (τ_j, τ_{j+1}). Let t_i, i = 1, 2, …, represent the instants at which Q(t) changes; the ith arrival instant in the interval is t_i. Thus,
  • either Q(t) > 0 for τ_j ≤ t < τ_{j+1},
  • or Q(t) = 0 for τ_j ≤ t < t_1 < τ_{j+1}, and Q(t) > 0 for t_1 ≤ t < τ_{j+1}.
Q(t) = k + 1,  t < τ_j,
     = k,      τ_j ≤ t < t_1,
     = k + 1,  t_1 ≤ t < t_2,
     = k + 2,  t_2 ≤ t < t_3,
     ⋮
     = k + m,  t_m ≤ t < τ_{j+1},
     = k + m − 1,  t = τ_{j+1}.
If k > 0, τ_{j+1} − τ_j = S_{j+1} still holds. However, if k = 0, because the arrival instant of c_{j+1} is t_1,
Q(t) = 0,  τ_j ≤ t < t_1,
     = 1,  t_1 ≤ t < τ_{j+1},
we see that the inter-departure time τ_{j+1} − τ_j consists of an idle time of the server (i.e., t_1 − τ_j = I_j) and a service time (i.e., τ_{j+1} − t_1 = S_{j+1}). Whatever value k takes, τ_{j+1} − τ_j is not distributed as an inter-arrival time. Similarly, for Q*(t), if m = 0,
Q*(t) = k,      t = τ_{j+1},
      = k + 1,  τ_{j+1} < t ≤ τ_j,
      = k + 2,  t > τ_j,
and |τ_j − τ_{j+1}| = S_{j+1}. When m ≥ 1,
Q*(t) = k + m − 1,  t = τ_{j+1},
      = k + m,  τ_{j+1} < t ≤ t_m,
      ⋮
      = k + 2,  t_3 < t ≤ t_2,
      = k + 1,  t_2 < t ≤ t_1,
      = k,      t_1 < t ≤ τ_j,
      = k + 1,  t > τ_j,
and |τ_j − τ_{j+1}| = S_{j+1} if k > 0; otherwise |τ_j − τ_{j+1}| = I_j + S_{j+1}. As indicated by Equation (2.1) and Equation (2.2), times between consecutive departures from the M/M/1 queue in steady state always follow two different distributions, P_{T_1} and P_{T_2}; both differ from the distribution of inter-arrival times.
For a given t, Q ( t ) is independent of future arrivals after t. Time reversibility is used here to argue for “ Q ( t ) is also independent of past departures before t.” Based on an analogy between Q ( t ) and Q * ( t ) , the argument goes as follows: Q ( t ) is independent of future arrivals after t; departures are “arrivals” when looking backwards in time; because Q ( t ) and Q * ( t ) are statistically identical, Q ( t ) is independent of past departures before t.
However, the analogy between Q ( t ) and Q * ( t ) is not appropriate, and the argument based on it is flawed. Although Q ( t ) is independent of arrivals after t, both arrivals and departures prior to t determine Q ( t ) : increases and decreases in Q ( t ) are due to arrivals and departures before t, respectively. For a stable M/M/1 queue in steady state, Q ( t ) therefore necessarily depends on departures before t.
A queuing model for solving problems in the real world must satisfy the constraints imposed by the physical systems to be studied. The reversed process Q * ( t ) is constructed mathematically without regard to the chronological order and is irrelevant to the departure process from the M/M/1 queue. By Corollary 1, ( τ j + 1 − τ j ) j ≥ 1 is a stochastic sequence with two-fold randomness; time reversibility cannot change ( τ j + 1 − τ j ) j ≥ 1 into a sequence of i.i.d. random variables defined on Ω . In queuing theory, arguments based on Q * ( t ) are therefore questionable.
Questioning the queuing-theoretic arguments based on Q * ( t ) is not to say that “a birth-death process Q ( t ) in statistical equilibrium has no time-reversed counterpart Q * ( t ) .” The birth-death process Q ( t ) is time-reversible and does have a time-reversed counterpart Q * ( t ) ; Q * ( t ) is simply not applicable to queuing models.

3.3. Experimental Results Concerning Counting Processes with Two-Fold Randomness

There are many experimental results, obtained by simulation, concerning departures from the M/M/1 queue. All of them are claimed to be in agreement with Burke’s theorem. As shown below, however, these experimental results are misinterpreted. Let μ < ∞ represent the parameter of the service-time distribution. The mean inter-arrival time and the mean service time are 1 / λ and 1 / μ , respectively. Using the notation of Section 2, for all j,
P [ Ω 1 ( j ) ] = 1 − ρ < 1
and
P [ Ω 2 ( j ) ] = ρ = λ / μ > 0 .
If Q j = 0 , then τ j + 1 − τ j = T 1 , see Equation (2.1), and its probability density function (pdf) is
f T 1 ( t ) = ( λ μ / ( μ − λ ) ) ( e^{−λ t} − e^{−μ t} ) .
If Q j > 0 , then τ j + 1 − τ j = T 2 , see Equation (2.2), and its pdf is f T 2 ( t ) = μ e^{−μ t} . Let K t represent a time interval ( t , t + d t ] of an infinitesimal length d t . Write
H j = { Q j = 0 , T 1 ∈ K t } ∪ { Q j > 0 , T 2 ∈ K t } .
For a stable M/M/1 queue in steady state, a simple calculation yields
P ( H j ) = P ( Q j = 0 , T 1 ∈ K t ) + P ( Q j > 0 , T 2 ∈ K t ) = λ e^{−λ t} d t .
The probability obtained by the calculation is indeed in agreement with the experimental results, but we should not interpret it as
P ( Q j = 0 , Z j ∈ K t ) + P ( Q j > 0 , Z j ∈ K t ) = P ( Z j ∈ K t ) = λ e^{−λ t} d t .
In the above interpretation, Z j is a fixed random variable representing τ j + 1 − τ j defined at each ω ∈ Ω , which results from treating ( Q j , τ j + 1 − τ j ) as a random vector and is prohibited by the chronological order. Because { Z j ∈ K t } ∉ A (see Section 2), we are not allowed to interpret Equation (3.1) as identical to Equation (3.2).
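The calculation yielding Equation (3.1) is easy to verify: weighting f T 1 by P [ Ω 1 ( j ) ] = 1 − ρ and f T 2 by P [ Ω 2 ( j ) ] = ρ collapses the mixture to λ e^{−λ t} exactly. A minimal numeric sketch, with assumed rates λ = 1 and μ = 2:

```python
import math

lam, mu = 1.0, 2.0                  # assumed rates with lam < mu
rho = lam / mu

def f_T1(t):
    # pdf of an idle time plus a service time, Equation (2.1)
    return lam * mu / (mu - lam) * (math.exp(-lam * t) - math.exp(-mu * t))

def f_T2(t):
    # pdf of a service time alone, Equation (2.2)
    return mu * math.exp(-mu * t)

for t in (0.1, 0.5, 1.0, 3.0):
    mixture = (1 - rho) * f_T1(t) + rho * f_T2(t)
    print(abs(mixture - lam * math.exp(-lam * t)) < 1e-12)   # True at every t
```

The identity holds for every t, since ( 1 − ρ ) · λ μ / ( μ − λ ) = λ cancels the e^{−μ t} terms.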
Simulation studies are based on the strong law of large numbers, and the sequence studied by simulation should consist of i.i.d. random variables defined on Ω . As we have seen in Section 2, ( τ j + 1 − τ j ) j ≥ 1 is a stochastic sequence with two-fold randomness and can be divided into two subsequences at each given ω ∈ Ω :
[ τ j + 1 ( ω ) − τ j ( ω ) ] j ≥ 1 = [ T 1 ( ω ) ] j ∈ M 1 ( ω ) ∪ [ T 2 ( ω ) ] j ∈ M 2 ( ω ) .
Accordingly, we can also divide ( H j ) j ≥ 1 into two subsequences at the sample point:
[ H j ( ω ) ] j ≥ 1 = [ Q j ( ω ) = 0 , T 1 ( ω ) ∈ K t ] j ∈ M 1 ( ω ) ∪ [ Q j ( ω ) > 0 , T 2 ( ω ) ∈ K t ] j ∈ M 2 ( ω ) ,
which prevents us from constructing a sequence of i.i.d. random variables defined on the whole sample space. Thus, applying the strong law to study ( H j ) j ≥ 1 directly is illegitimate. Nevertheless, if we consider I ( H j ) , the indicator of H j , then I ( H 1 ) , I ( H 2 ) , … constitute a sequence of i.i.d. random variables, all defined on Ω . By the strong law,
lim_{n → ∞} ( 1 / n ) ∑_{j=1}^{n} I ( H j ) = E [ I ( H 1 ) ]
with probability one, and
E [ I ( H 1 ) ] = P ( Q 1 = 0 , T 1 ∈ K t ) + P ( Q 1 > 0 , T 2 ∈ K t ) = λ e^{−λ t} d t .
The above analysis not only explains why Equation (3.1) agrees with the experimental results of simulation but also reveals the essential difference between Equation (3.1) and Equation (3.2). To see the difference, consider an arbitrarily given j. Equation (3.2) implies a sample point ω ∈ Ω at which, when c j departs, Z j would take a value corresponding to both Q j ( ω ) = 0 and Q j ( ω ) > 0 simultaneously. Nothing like this can happen in the real world.
In contrast, even though the inter-departure time sequence and the corresponding point process are not stationary, Equation (3.1) coincides with the probability of “a departure occurs from the M/M/1 queue corresponding to T 1 ∈ K t or T 2 ∈ K t ” obtained by applying the splitting technique [2]; see Equation (1.1).

4. Two-Fold Randomness in Counting Processes: Networks of Queues

Consider two queues Q 1 and Q 2 in tandem. The first queue Q 1 is an M/M/1 queue. Customers first arrive at Q 1 according to a Poisson process with rate λ . After being served at Q 1 , they join Q 2 immediately; Q 2 also has an infinite waiting room. The service times of a customer at Q 1 and Q 2 are mutually independent and follow exponential distributions with finite service rates μ 1 and μ 2 . In addition, service times at Q 2 are not only mutually independent but also independent of arrivals both at Q 1 and at Q 2 . If μ 1 > λ , then Q 1 is stable. According to Burke’s theorem, departures from Q 1 in steady state constitute a Poisson process with rate λ . According to Jackson’s theorem [3,11,12], Q 2 is stable if μ 2 > λ . However, because times between consecutive departures from Q 1 (i.e., τ j + 1 − τ j ) are the times between consecutive arrivals at Q 2 , and because the chronological order precludes fixed random variables defined on the whole sample space from describing τ j + 1 − τ j , we are not allowed to describe the number of customers in Q 2 by a fixed random variable, even if Q 1 is stable and has reached its steady state. A Jackson network consisting of two such queues in tandem is actually a counterexample to Jackson’s theorem, which will be scrutinized next.

4.1. Jackson Networks of Queues and Jackson’s Theorem

Consider a Jackson network consisting of J single-server queues denoted by Q k , k = 1 , 2 , … , J . At each queue, there is an infinite waiting room, and customers are served by a work-conserving server. The mean service time at Q k is 1 / μ k . By Jackson’s theorem [3,11,12], these queues are stable if the total arrival rate λ k at Q k satisfies λ k < μ k , where λ k is determined by the following equations:
λ k = γ k + ∑_{i=1}^{J} λ i P i k , k = 1 , 2 , … , J .
In Equation (4.1), γ k is the arrival rate of customers at Q k from outside the system, and P i k is the probability that a customer joins Q k immediately after leaving Q i . Thus the arrival rate at Q k of customers from Q i is λ i P i k . Various proofs of Jackson’s theorem, including Jackson’s original proof [11], which is considered to lack mathematical rigor, and the proof based on time reversibility [1,10], which is considered mathematically rigorous, all depend on the stability condition λ k < μ k obtained by solving Equation (4.1).
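For reference, the rates λ k defined by Equation (4.1) form a linear fixed-point system that is easy to compute; the critique below concerns what these rates mean, not how to find them. A minimal sketch for a hypothetical two-queue tandem (external arrivals only at Q 1 , all of Q 1 ’s output routed to Q 2 ):

```python
# Fixed-point iteration for the traffic equations of Equation (4.1):
# lambda_k = gamma_k + sum_i lambda_i * P[i][k]
def traffic_rates(gamma, P, iters=100):
    J = len(gamma)
    lam = gamma[:]                            # initial guess: external rates only
    for _ in range(iters):
        lam = [gamma[k] + sum(lam[i] * P[i][k] for i in range(J))
               for k in range(J)]
    return lam

gamma = [1.0, 0.0]                            # customers enter only at Q1
P = [[0.0, 1.0],                              # Q1 -> Q2 with probability 1
     [0.0, 0.0]]                              # Q2 -> exit
print(traffic_rates(gamma, P))                # [1.0, 1.0]
```

For an open network the routing matrix is substochastic, so the iteration converges; here it reproduces λ 1 = λ 2 = λ , the very identification questioned below.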
According to Jackson’s interpretation (e.g., [3,11,12]), after the network has been in operation for an infinitely long time, each queue in the network behaves “as if”
  • its size were a random variable characterized by a marginal distribution, and
  • all such random variables were independent, possessing a joint distribution equal to the product of their marginal distributions.
However, the phrase “as if” makes Jackson’s interpretation questionable, because it is inconsistent with the standard notion of independent random variables: if the joint distribution of two random variables can be expressed as the product of their marginal distributions, the random variables are independent, not merely behaving as if they were independent. The inconsistency stems from λ k < μ k , k = 1 , 2 , … , J , the stability condition required by Jackson’s theorem, which relies on an unjustified assumption (see below) and does not necessarily imply stability of every queue in the network. Consequently, the assumption makes Jackson’s theorem irrelevant to physical systems modeled by Jackson networks of queues.
The unjustified assumption underlying Equation (4.1) is this: times between successive departures from a stable queue in steady state follow a marginal distribution. This assumption is the basis for defining the arrival rate at a queue for customers coming from inside the system. By Lemma 1, for the network of two queues in tandem, the assumption is problematic unless the server at Q 1 has an infinite service capacity; only in this unrealistic scenario does treating Q 2 as a stable M/M/1 queue isolated from Q 1 avoid the inconsistency. Because service capacities in the real world must be finite, it is illegitimate to use λ , the arrival rate at the first queue, to characterize the arrivals at the second queue. As terms of a stochastic sequence with two-fold randomness, the τ j + 1 − τ j do not have a marginal distribution. The proof of Jackson’s theorem based on time reversibility [1,10] cannot change this fact or explain away the inconsistency mentioned above; see also Subsection 3.2.
In general, as long as the inter-arrival times at a queue are times between consecutive departures from another queue, the chronological order prohibits fixed random variables for describing such inter-arrival times, and at least one queue is not stable. Consequently, a fixed random vector cannot describe the behavior of a Jackson network of queues, regardless of whether the structure of the network is simple or complex, with or without feedback paths. Therefore, statistical equilibrium with respect to the numbers of customers in all the queues of the network as a whole does not exist, although the solutions to Equation (4.1) are not difficult to find.
By definition [7], if the number of customers in a queue remains finite after the queue has been in operation for an infinitely long time, the queue is sub-stable. A stable queue is, of course, sub-stable, but a sub-stable queue need not be stable. If a queue is sub-stable but not stable, it is properly sub-stable: the number of customers in the queue is always finite, but its distribution does not converge to a limit; that is, the behavior of the queue cannot be described by a fixed random variable. Note that “not stable” does not mean “unstable”; the latter means “not sub-stable,” and the number of customers in an unstable queue becomes infinitely large as time approaches infinity. As shown above, in a Jackson network consisting of at least two queues, a properly sub-stable queue is mistaken for a stable queue.

4.2. Generalization

It is possible to generalize the results concerning Jackson networks of queues. First, consider a system of work-conserving, single-server queues in tandem. Each queue has a finite service capacity and an infinite waiting room. At each queue, service times are generally distributed, mutually independent, and independent of its arrivals. The first queue is a stable GI/GI/1 queue in steady state. All customers arrive from outside the system at the first queue and leave the system from the last queue after being served there. A queue in the system is called a downstream queue if it is neither the first queue nor the last queue. After receiving service at the first queue or at a downstream queue, a customer goes immediately to the next queue. Every queue other than the first is merely properly sub-stable rather than stable. Consequently, this system of queues in tandem as a whole is not stable, in the sense that its behavior cannot be described by a fixed random vector. For the downstream queues and the last queue, the two different notions, proper sub-stability and stability, are confused in the literature; the confusion leads to mistaking properly sub-stable queues for stable queues. Such proper sub-stability, due to departure processes with two-fold randomness, is entirely ignored in the existing literature.
Based on the above analysis, the results may be generalized further as follows. The network may have a general topological structure. Each queue in the network may have multiple work-conserving servers with finite service capacities, and its waiting room need not be infinite. External arrivals at a queue need not be independent of service times or form a renewal process. Service times may follow different distributions at different queues.

4.3. Future Work

Because the stability condition obtained by solving Equation (4.1) ensures that each queue in a Jackson network is properly sub-stable, the total number of customers in the network is always finite; in this sense, the network as a whole is also properly sub-stable. It may be possible to use the splitting technique to study the departure process of each queue in the network. By doing so, we may obtain the probability of “a customer departs from the queue corresponding to a given type of inter-departure times”; by summing the probabilities over the different types of inter-departure times, we may further obtain the probability of “a customer departs from the queue,” similar to Equation (1.1). The product-form distribution given by Jackson’s theorem appears to be a consequence of ignoring the difference between different types of inter-departure times. By taking the difference into account, we may obtain a practically meaningful solution of the queuing network model; the product-form distribution may then be considered an approximation to that solution. Future work should focus on finding practically meaningful solutions for queuing systems based on the splitting technique, so long as the queues constituting the system, which may be a Jackson network or a queuing network as discussed in the last subsection, are all properly sub-stable.

5. Conclusion

Many scientists and engineers use counting processes to study random phenomena in the real world, with queuing theory applied in many different fields as a prominent example. This study aims at demonstrating the existence of phenomena modeled by the counting processes with multiple randomness defined in this article and at illustrating their properties. All existing theories, including queuing theory, should be developed further to find new results in practical applications; the purpose of illustrating the two-fold randomness observed in queuing phenomena is to develop queuing theory. Phenomena with multiple randomness, modeled by other stochastic processes but similar to the queuing phenomena modeled by counting processes with two-fold randomness, may appear in various experiments concerning practical applications of the sciences and engineering disciplines but may not yet have been identified. It may be reasonable to expect more phenomena with multiple randomness to be identified, which may help scientists and engineers explain weird phenomena and solve puzzling problems in the real world.

Funding

This research received no funds or grants.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Ross, S.M. Stochastic Processes; John Wiley & Sons: New York, 1991.
  2. Daley, D.J.; Vere-Jones, D. An Introduction to the Theory of Point Processes; Springer-Verlag: New York, 1988.
  3. Kleinrock, L. Queueing Systems Vol. II: Computer Applications; John Wiley & Sons: New York, 1983.
  4. Kloska, S.M.; Pałczyński, K.; Marciniak, T.; Talaśka, T.; Miller, M.; Wysocki, B.J.; Davis, P.; Wysocki, T.A. Queueing theory model of pentose phosphate pathway. Scientific Reports 2022. [CrossRef]
  5. Buzna, L.; Carvalho, R. Controlling congestion on complex networks: fairness, efficiency and network structure. Scientific Reports 2017. [CrossRef]
  6. Baskett, F.; Chandy, K.M.; Muntz, R.R.; Palacios, F.G. Open, closed, and mixed networks of queues with different classes of customers. Journal of the ACM 1975, 22, 248–260. [CrossRef]
  7. Loynes, R.M. The stability of a queue with non-independent inter-arrival and service times. Proc. Camb. Philos. Soc. 1962, 58, 497–520. [CrossRef]
  8. Burke, P.J. The output of a queuing system. Operations Research 1956, 4, 699–704. [CrossRef]
  9. Reich, E. Waiting times when queues are in tandem. Ann. Math. Statist. 1957, 28, 768–773. [CrossRef]
  10. Kelly, F.P. Reversibility and Stochastic Networks; John Wiley & Sons: New York, 1979.
  11. Jackson, J.R. Networks of waiting lines. Operations Research 1957, 5, 518–521. [CrossRef]
  12. Jackson, J.R. Job-shop like queueing systems. Management Science 1963, 10, 131–142. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.