Preprint
Article

This version is not peer-reviewed.

Identifying Multiple Randomness in Random Experiments: Definition and Examples

Submitted: 27 July 2025

Posted: 29 July 2025


Abstract

Multiple randomness is a new kind of random phenomenon that appears in experiments concerning practical applications in the sciences and engineering disciplines. Phenomena with multiple randomness have been ignored in the existing literature, because correctly derived mathematical results are used to interpret, incorrectly, outcomes obtained by experiments designed to solve practical problems in the real world, leading to theorems that are questionable in practice. Multiple randomness is characterized by unique properties of the sample space consisting of all possible outcomes of the corresponding random experiment and differs essentially from any known phenomenon observed so far. This study aims to demonstrate the existence of such phenomena. To this end, a general definition of multiple randomness in a random experiment performed to count the number of ``events'' is given, and specific examples in queuing theory are provided to illustrate the properties of multiple randomness in counting processes. More phenomena with multiple randomness, in other stochastic processes and in other application fields of the sciences and engineering disciplines, may also be identified, which may help scientists and engineers explain weird phenomena and solve puzzling problems in the real world.


1. Introduction

Random phenomena are typically modeled by probability spaces used to solve practical problems in the real world. Counting processes are popular models of many interesting random phenomena. Let N(t) be a counting process. As a stochastic process, N(t) represents the total number of “events” that have occurred up to time t ≥ 0. If the jth event occurs at instant τ_j, where j ≥ 1, the inter-event time τ_{j+1} − τ_j is a random variable. If an experiment is performed to count the number of events, the outcome cannot be determined in advance; counting the number of events is a random experiment, and its sample space Ω is the set of all possible outcomes, which may take different forms. In this study, Ω consists of all possible sequences of instants at which the events occur, i.e.,
Ω = { (τ_n)_{n≥1} : τ_{n+1} > τ_n }.
If τ_{j+1} − τ_j is defined on the whole sample space for every j, i.e., a value can be assigned to τ_{j+1} − τ_j at each ω ∈ Ω, then N(t) is a usual counting process. For usual counting processes, such as renewal processes, all inter-event times can be characterized by a marginal distribution.
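As a concrete illustration of a usual counting process, the following sketch simulates a renewal process and evaluates N(t) along one sample path (Python; the exponential inter-event law and all function names are illustrative assumptions, not part of the formal development):

```python
import random

def renewal_event_times(n, mean_gap=1.0, seed=0):
    """Simulate the first n event instants tau_1 < tau_2 < ... of a
    renewal process with i.i.d. exponential inter-event times."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_gap)  # one i.i.d. inter-event time
        times.append(t)
    return times

def count_events(event_times, t):
    """N(t): the number of events that have occurred up to time t."""
    return sum(1 for tau in event_times if tau <= t)

# One simulated sample path is one outcome omega: a strictly increasing
# sequence of event instants, as in the definition of Omega above.
taus = renewal_event_times(1000)
assert all(a < b for a, b in zip(taus, taus[1:]))
```

Here every inter-event time is drawn from one and the same marginal distribution, which is exactly what fails for the processes introduced next.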
There are also random phenomena modeled by counting processes that differ from any known phenomena. The difference is this: for any j, the inter-event time τ_{j+1} − τ_j is defined only on some proper subset of Ω and cannot be characterized by a marginal distribution. Such counting processes differ essentially from any known stochastic process in the existing literature. Let (Ω, A, P) be the probability space of the random experiment performed to count the number of events, where A represents the σ-algebra of subsets of Ω, and the probability measure P is defined on the measurable space (Ω, A). Denote by ℕ the set of all positive integers. Let m ≥ 1 be the total number of distributions for inter-event times. Any inter-event time follows one and only one of these distributions. If m = 1, then for all j ≥ 1, the inter-event times τ_{j+1} − τ_j can be characterized by a marginal distribution, and N(t) degenerates into a usual counting process. For an arbitrarily given j ∈ ℕ, write
Ω = Ω_1(j) ∪ Ω_2(j) ∪ ⋯ ∪ Ω_m(j)
where
Ω_i(j) = { ω ∈ Ω : (τ_{j+1} − τ_j)(ω) = T_i(ω) },   P[Ω_i(j)] > 0,   Σ_{i=1}^{m} P[Ω_i(j)] = 1.
Denote by F_i the distribution of T_i in the expression of Ω_i(j). If k ≠ k′, where 1 ≤ k ≤ m and 1 ≤ k′ ≤ m, then F_k ≠ F_{k′}, and thus Ω_k(j) ∩ Ω_{k′}(j) = ∅; i.e., { Ω_i(j), i = 1, 2, …, m } is a partition of Ω. Let ω ∈ Ω be an arbitrarily given sample point. Write
ℕ = M_1(ω) ∪ M_2(ω) ∪ ⋯ ∪ M_m(ω)
where
M_i(ω) = { j ∈ ℕ : (τ_{j+1} − τ_j)(ω) = T_i(ω) },   i ∈ { 1, 2, …, m },
which constitute a random partition of ℕ. The partition is random because M_i(ω) depends on the sample point. At the given ω, if k ≠ k′, then M_k(ω) ∩ M_{k′}(ω) = ∅.
Definition 1. 
If Ω possesses the properties given above, N(t), t ≥ 0, is said to be a counting process with multiple (or m-fold) randomness for m > 1. The corresponding inter-event times constitute a stochastic sequence (τ_{j+1} − τ_j)_{j≥1} with m-fold randomness.
According to the above definition, for a counting process with m-fold randomness, the inter-event time τ_{j+1} − τ_j is not defined on the whole sample space for any j and cannot be characterized by a marginal distribution. The distribution of τ_{j+1} − τ_j is not determined by a joint distribution of random variables on Ω; it is determined uniquely by the random experiment in question. As terms of the stochastic sequence (τ_{j+1} − τ_j)_{j≥1}, the inter-event times are not independent random variables. Furthermore, it follows immediately from Definition 1 that (τ_{j+1} − τ_j)_{j≥1} is not a stationary sequence.
Counting processes with multiple randomness appear naturally in practical applications. Unfortunately, they have not been identified and are all mistaken for their usual counterparts. The best way of demonstrating their existence and illustrating their properties is to provide examples. To this end, several well-known queuing models will be considered. By identifying multiple randomness in the queuing models, an inconsistency in queuing theory concerning departure processes of stable queues in steady state and product-form equilibrium distributions of queuing networks can be readily resolved.
A lemma and its corollary concerning the departure process from a stable, general single-server queue in steady state are proved: the instants of departures from this queue constitute a counting process with two-fold randomness (i.e., m = 2 ), or equivalently, the corresponding inter-departure times form a stochastic sequence with two-fold randomness (Section 2). Based on the lemma and its corollary, the departure process of a stable M/M/1 queue in steady state is revisited, and stability of Jackson networks of queues is scrutinized (Section 3). The article is concluded in Section 4.

2. Two-Fold Randomness in Counting Processes: General Results

Consider departures from a stable, general single-server queue (i.e., a GI/GI/1 system) in steady state. This queuing model has an infinite waiting room and a work-conserving server with a finite service capacity defined by the maximum rate at which the server can perform work. The meaning of “work-conserving” is this: the server will not stand idle when there is unfinished work in the system. Customers arrive at this system according to a renewal process and are served one at a time. Times between successive arrivals are independent and identically distributed (i.i.d.) random variables with a finite mathematical expectation, and so are service times of customers. Inter-arrival times and service times are also mutually independent. For such a queue, a customer leaves the system if and only if the customer has been served. The mean inter-arrival time is greater than the mean service time. Hence the queue is stable and can reach its steady state as time approaches infinity. For the purpose of this study, it is not necessary to assume a specific queuing discipline.
According to the literature, times between consecutive departures from a stable queue in steady state follow a marginal distribution [1]. However, the existence of such a marginal distribution is only an unjustified assumption taken for granted without verification. As can be readily seen in this section, for systems modeled by a stable GI/GI/1 queue, inter-departure times between customers consecutively leaving from the queue cannot be characterized by a marginal distribution, even if the queue is in steady state.
Let c_j represent the jth customer served. Using the notation introduced in Section 1, the event “c_j departs from the queue” occurs at τ_j. Denote by Q_j the queue size immediately after the jth departure. By definition, “queue size” refers to the total number of customers in the queue (including the customer in service). Let T_1 and T_2 be the times between the jth and the (j+1)th departures when Q_j = 0 and Q_j > 0, respectively:
T_1 = I_j + S_{j+1}   (2.1)
T_2 = S_{j+1}   (2.2)
where I_j is the idle time spent by the server waiting for the arrival of a customer, and S_{j+1} is the service time of c_{j+1}. All arriving customers need to be served; their service times are not zero.
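The two cases above can be observed directly in a minimal discrete-event sketch (Python; M/M/1 is used here as a concrete instance of the GI/GI/1 queue, and the parameter values and function names are illustrative assumptions). Each inter-departure gap is classified by whether the server went idle after the preceding departure:

```python
import random

def mm1_interdeparture(n, lam=0.5, mu=1.0, seed=1):
    """Simulate n customers of an M/M/1 FIFO queue and split the
    inter-departure gaps into the two cases of Eqs. (2.1)-(2.2):
      server idle after a departure (Q_j = 0): gap = I_j + S_{j+1}
      server busy after a departure (Q_j > 0): gap = S_{j+1}."""
    rng = random.Random(seed)
    a = prev_d = 0.0
    gaps_empty, gaps_busy = [], []
    for j in range(n):
        a += rng.expovariate(lam)      # Poisson arrival instants
        s = rng.expovariate(mu)        # exponential service time
        d = max(prev_d, a) + s         # work-conserving single server
        if j > 0:
            if a > prev_d:             # next customer arrived after the departure
                gaps_empty.append(d - prev_d)
            else:
                gaps_busy.append(d - prev_d)
        prev_d = d
    return gaps_empty, gaps_busy

ge, gb = mm1_interdeparture(20000)
# fraction of "empty" gaps estimates P[Omega_1(j)] = 1 - rho (= 0.5 here)
```

With λ = 0.5 and μ = 1, roughly half of the gaps fall into the empty case, the busy-case gaps average about one mean service time, and the empty-case gaps average about 1/λ + 1/μ, in line with Equations (2.1) and (2.2).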
By definition, a random variable is a measurable real-valued function; its domain can be the whole sample space or a subset of the sample space. To define a random variable U on the whole sample space Ω, a value must be assigned to U at each ω ∈ Ω. Similarly, to define a random variable on a subset of Ω, a value must be assigned to this random variable at each sample point in the subset. Denote by ℬ the Borel σ-algebra on the real line. For a random variable on the whole sample space, such as U, its (marginal) distribution is defined by
P_U(B) = P(U ∈ B)
where B ∈ ℬ is arbitrary. When it is necessary to emphasize the connection between U and P on (Ω, A), the right-hand side of the above equation can be used to express the distribution of U directly.
Similarly, components of a random vector or terms of a stochastic sequence are random variables on Ω . By definition, the components of a random vector (or the terms of a stochastic sequence) must take their values at the same  ω Ω . If a random vector has been defined, then the joint distribution of its components is fixed, and the marginal distribution of each component is determined by the joint distribution. Based on a given random vector, some new random variables may be constructed on subsets of Ω .
For example, let (U, V) be a random vector. Its joint distribution is P_{U,V}, which determines P_U and P_V, the marginal distributions of U and V. Based on this random vector, a random variable W may be constructed on Θ ⊆ Ω with P(Θ) > 0, such that W(ω) = V(ω) for ω ∈ Θ; some values taken by U specify Θ. Thus, W follows a conditional distribution P_W determined by P_{U,V}.
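This construction of W can be made concrete with a short numerical sketch (Python; the Gaussian pair and the choice Θ = {U > 0} are illustrative assumptions, not taken from the text):

```python
import random

def conditional_w(n=100000, seed=2):
    """Sample the random vector (U, V) with V depending on U, and
    construct W on the subset Theta = {U > 0}: W(omega) = V(omega)
    only when omega lies in Theta.  W then follows the conditional
    law of V given U > 0, which is determined by P_{U,V}."""
    rng = random.Random(seed)
    v_all, w = [], []
    for _ in range(n):
        u = rng.gauss(0.0, 1.0)
        v = u + rng.gauss(0.0, 1.0)   # V is correlated with U
        v_all.append(v)
        if u > 0.0:                   # omega in Theta = {U > 0}
            w.append(v)
    return sum(v_all) / len(v_all), sum(w) / len(w)

m_v, m_w = conditional_w()
# marginal mean of V is 0; E[V | U > 0] = E[U | U > 0] = sqrt(2/pi) ~ 0.80
```

The empirical mean of W differs from the marginal mean of V, showing that W follows a conditional distribution determined by the joint law, not the marginal law of V.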
So far, two types of random variables have been mentioned: U and V are defined on the whole sample space, but W is defined only on a proper subset of the sample space and follows a conditional distribution. There also exist random variables different from those mentioned above. Random variables of this third type are not components of a random vector, and their distributions are not determined by a joint distribution; times between successive departures from the GI/GI/1 queue are random variables of this type. They are defined on some proper subsets of Ω. Their distributions and properties are determined by a chronological order of events experienced by customers. This chronological order is not determined by properties of the queuing model; it is determined by the physical systems modeled by the queue. For a work-conserving system with an infinite waiting room, the events experienced by every customer occur naturally as follows:
  • First, a customer arrives at the queue.
  • Upon arrival,
    - the customer receives service immediately if the server is idle;
    - otherwise the customer has to wait in line.
  • Finally, after being served, the customer departs.
Consequently, for an arbitrarily given j, if the server becomes idle immediately after c_j leaves, the time between the departures of c_j and c_{j+1} is the sum of an idle time and a service time, as shown by Equation (2.1); otherwise the inter-departure time is a service time, as shown by Equation (2.2). For this queuing model, the corresponding sample space Ω possesses the properties required by a counting process with two-fold randomness due to this chronological order. Write
M_1(ω) = { j ∈ ℕ : Q_j(ω) = 0 }
M_2(ω) = { j ∈ ℕ : Q_j(ω) > 0 }
Ω_1(j) = { ω ∈ Ω : Q_j(ω) = 0 }
and
Ω_2(j) = { ω ∈ Ω : Q_j(ω) > 0 }.
Clearly, at an arbitrarily given ω ∈ Ω, { M_1(ω), M_2(ω) } is a random partition of ℕ, and it is not difficult to verify that Ω_1(j) ∩ Ω_2(j) = ∅ for an arbitrarily given j ≥ 1; thus { Ω_1(j), Ω_2(j) } is a partition of Ω. If Ω_1(j) ∩ Ω_2(j) ≠ ∅, then for the given j there would exist a sample point ω ∈ Ω_1(j) ∩ Ω_2(j); when c_j departs, the server would be both idle and busy. This is impossible. Because the queue is already in steady state, P[Ω_1(j)] = 1 − ρ > 0, P[Ω_2(j)] = ρ > 0, and
P[Ω_1(j)] + P[Ω_2(j)] = 1
where ρ is the utilization factor.
In the literature, properties of the arrival process and stability of the GI/GI/1 queue are usually established by using the strong law of large numbers. Results of this kind are true for every ω ∈ Ω, with the exception at most of a set N ∈ A with P(N) = 0. Such results hold with probability one, or almost surely. Removing the sample points in N changes nothing when the queue is already in steady state, and Ω can be regarded as the sample space of an ideal random experiment performed after the queue has reached its steady state. Because of the properties possessed by Ω, times between consecutive departures from the GI/GI/1 queue constitute a stochastic sequence with two-fold randomness.
Lemma 1. 
If a stable GI/GI/1 queue is in steady state, and if the server has a finite service capacity, then for an arbitrarily given j ∈ ℕ,
(a)
τ_{j+1} − τ_j cannot be expressed by any single, fixed random variable, i.e., no random variable defined on Ω can describe τ_{j+1} − τ_j; thus τ_{j+1} − τ_j cannot be characterized by a marginal (i.e., unconditional) distribution;
(b)
(Q_j, τ_{j+1} − τ_j) is not a random vector, and hence τ_{j+1} − τ_j has no distribution conditional on values taken by Q_j.
Corollary 1. 
The inter-departure times constitute a stochastic sequence (τ_{j+1} − τ_j)_{j≥1} with two-fold randomness.
To understand Lemma 1 and its corollary, the reader may consider how to answer the following question: if τ_{j+1} − τ_j could be expressed by a random variable on Ω, say Z_j, what would be the value of Z_j at ω corresponding to Q_j(ω)? At any ω ∈ Ω, the value of Q_j equals either zero or a positive integer. Whatever value Q_j takes at ω, it is impossible to assign any value to Z_j at ω corresponding to Q_j(ω). In contrast to τ_{j+1} − τ_j, the service times of customers, the queue size Q_j, and the times between consecutive arrivals are all random variables on Ω. There is a subtlety here, however. For example, when playing the role of an inter-departure time T_2 defined on Ω_2(j), S_{j+1} is no longer a random variable on Ω.
Proof. 
For an arbitrarily given j ∈ ℕ, Ω_1(j) ∪ Ω_2(j) = Ω. Consequently, at an arbitrarily given ω ∈ Ω, either ω ∈ Ω_1(j) and
Q_j(ω) = 0 ⇒ τ_{j+1}(ω) − τ_j(ω) = T_1(ω)   (2.3)
or ω ∈ Ω_2(j) and
Q_j(ω) > 0 ⇒ τ_{j+1}(ω) − τ_j(ω) = T_2(ω).   (2.4)
As shown by Equations (2.1) and (2.2), T_1 and T_2 are well-defined random variables; their distributions, P_{T_1} and P_{T_2}, are determined by the chronological order of events experienced by customers, as implied by Equations (2.3) and (2.4). Consequently, for an arbitrarily fixed B ∈ ℬ,
P_{T_1}(B) = P[T_1 ∈ B | Ω_1(j)],   P_{T_2}(B) = P[T_2 ∈ B | Ω_2(j)].
Thus, either Equation (2.3) or Equation (2.4) must hold exclusively at the given sample point ω ∈ Ω, and for the given j, τ_{j+1} − τ_j can only be described by T_1 or T_2; i.e., no single, fixed random variable on Ω can express τ_{j+1} − τ_j. To see this, suppose to the contrary that there is a random variable Z_j = τ_{j+1} − τ_j on Ω, which can take values at the same sample point ω corresponding to Q_j(ω). Consequently,
{ Ω_1(j), T_1 ∈ B } = { Ω_1(j), Z_j ∈ B }   (2.5)
{ Ω_2(j), T_2 ∈ B } = { Ω_2(j), Z_j ∈ B }.   (2.6)
Because Ω_1(j) is independent of events concerning future departures after c_j leaves, such as { T_1 ∈ B } and { Z_j ∈ B }, Equation (2.5) implies
P[Ω_1(j)] P_{T_1}(B) = P[Ω_1(j)] P_{Z_j}(B)
where P_{Z_j} is the distribution of Z_j. Similarly, Ω_2(j) is independent of { T_2 ∈ B } and { Z_j ∈ B }, so Equation (2.6) implies
P[Ω_2(j)] P_{T_2}(B) = P[Ω_2(j)] P_{Z_j}(B).
Because P[Ω_1(j)] = 1 − ρ > 0 and P[Ω_2(j)] = ρ > 0 for all j, treating τ_{j+1} − τ_j as Z_j on Ω leads to
P_{T_1}(B) = P_{T_2}(B) = P_{Z_j}(B).
This is absurd, because P_{T_1} ≠ P_{T_2}. The absurdity shows that { Z_j ∈ B } ∉ A. Consequently, τ_{j+1} − τ_j cannot be described by any single, fixed random variable on Ω or characterized by a marginal distribution.
By definition, each component of a random vector (or each term of a stochastic sequence) is a random variable on Ω, and all the components (or terms) must take values at the same ω ∈ Ω. Although Q_j is a random variable on Ω, τ_{j+1} − τ_j and Q_j cannot form a random vector; thus τ_{j+1} − τ_j is not eligible to have distributions conditional on values taken by Q_j. Similarly, τ_2 − τ_1, τ_3 − τ_2, … cannot form a sequence of random variables on Ω; they constitute a stochastic sequence with two-fold randomness. □
The conditions required by Lemma 1 exclude two conceivable scenarios in which τ_{j+1} − τ_j would be a single, fixed random variable defined on Ω. One is the worst scenario, and the other is an ideal scenario.
The worst scenario is a queue with P[Ω_1(j)] = 0 for all j, i.e., an unstable queue. Because Ω_1(j) ∪ Ω_2(j) = Ω, and because P[Ω_1(j)] = 0,
P[Ω_1(j) ∪ Ω_2(j)] = P[Ω_2(j)] = 1.
Thus, with probability one, when each customer departs from the system, the queue is not empty, which implies τ_{j+1} − τ_j = S_{j+1} for all j. In other words, the idle time I_j vanishes identically on Ω, making the server busy almost surely at all times. Consequently, the queue will never become empty, and the queue size will approach infinity with probability one.
The ideal scenario is a queue with P[Ω_2(j)] = 0 for all j, i.e., a queue that is always empty almost surely, because P[Ω_2(j)] = 0 for all j implies
P[Ω_1(j) ∪ Ω_2(j)] = P[Ω_1(j)] = 1.
In other words, when each customer departs from the system, the server becomes idle almost surely, which implies τ_{j+1} − τ_j = I_j for every j. That is, the server must have an infinite service capacity; this is the only way to make the service times of customers vanish identically. Consequently, I_j becomes an inter-arrival time with probability one.
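The contrast between the stable case and the worst scenario can be probed numerically (Python sketch; parameter values and function names are illustrative assumptions). The fraction of departures after which the server goes idle estimates P[Ω_1(j)]; it stays near 1 − ρ for a stable queue and collapses toward zero when the arrival rate exceeds the service rate:

```python
import random

def empty_after_departure_fraction(n, lam, mu, seed=3):
    """Fraction of departures after which the server goes idle
    (an estimate of P[Omega_1(j)]) in a single-server FIFO queue
    with Poisson arrivals and exponential services."""
    rng = random.Random(seed)
    a = prev_d = 0.0
    empty = 0
    for j in range(n):
        a += rng.expovariate(lam)
        s = rng.expovariate(mu)
        if j > 0 and a > prev_d:   # next customer not yet arrived: Q_j = 0
            empty += 1
        prev_d = max(prev_d, a) + s
    return empty / (n - 1)

print(empty_after_departure_fraction(20000, 0.5, 1.0))  # stable: ~ 1 - rho = 0.5
print(empty_after_departure_fraction(20000, 2.0, 1.0))  # unstable: ~ 0
```

In the unstable run the queue length grows without bound, so almost every inter-departure gap is a bare service time, matching the worst scenario described above.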
The scenarios above illustrate two extreme situations. If a stable queue is in steady state, then P[Ω_1(j)] > 0 and P[Ω_2(j)] > 0 for each j. Consequently, (τ_{j+1} − τ_j)_{j≥1} is a stochastic sequence with two-fold randomness and differs essentially from any sequence of random variables defined on the whole sample space Ω. To illustrate the difference, consider an arbitrarily given ω ∈ Ω. At this sample point,
(i)
if j ∈ M_1(ω) (i.e., immediately after c_j departs, there is no unfinished work in the system), then τ_{j+1} − τ_j equals T_1(ω); otherwise the server is busy, and τ_{j+1} − τ_j takes T_2(ω) as its value;
(ii)
thus, (τ_{j+1} − τ_j)_{j≥1} can be divided randomly into two subsequences as follows:
[τ_{j+1}(ω) − τ_j(ω)]_{j≥1} = [T_1(ω)]_{j∈M_1(ω)} ∪ [T_2(ω)]_{j∈M_2(ω)}.
Having a finite service capacity, the server is either busy or idle with a positive probability; thus realistic inter-departure times always fall into two categories. In steady state, the probability for the server to be busy or idle is fixed, and the values of inter-departure times are given by either T_1(ω) or T_2(ω) according to whether ω ∈ Ω_1(j) (i.e., the server is idle) or ω ∈ Ω_2(j) (i.e., the server is busy) immediately after c_j departs; see Equations (2.1) and (2.2). In the literature, the distribution of T_1 or T_2 is interpreted as the distribution of τ_{j+1} − τ_j conditional on values taken by Q_j. According to this interpretation, a marginal distribution of the inter-departure times could be constructed from the distributions of T_1 and T_2.
However, the above interpretation is questionable. By Lemma 1, for each j ∈ ℕ, τ_{j+1} − τ_j cannot be expressed as a random variable defined on the whole sample space Ω or characterized by a marginal distribution. It is problematic to treat the distribution of T_1 or T_2 as the distribution of τ_{j+1} − τ_j conditional on values taken by Q_j, because (Q_j, τ_{j+1} − τ_j) is not a random vector; its components cannot take values at the same sample point ω. As shown in the proof of Lemma 1, treating (Q_j, τ_{j+1} − τ_j) as a random vector leads to a contradiction. According to Corollary 1, the terms of (τ_{j+1} − τ_j)_{j≥1} are not random variables on Ω; they are terms of a stochastic sequence with two-fold randomness. The general results presented in this section may be better understood with a specific example. A stable M/M/1 queue in steady state serves as such an example and is considered in the following section.

3. Two-Fold Randomness in Counting Processes: Specific Examples

A stable M/M/1 queue in steady state is the simplest instance of the GI/GI/1 queue. Customers arrive at this system according to a Poisson process; service times are mutually independent and follow a common exponential distribution. According to Burke’s theorem [2], times between successive departures from this queue in steady state are mutually independent and follow a marginal distribution identical to the distribution of inter-arrival times. Let λ be the parameter of this distribution. Clearly, Burke’s theorem contradicts Lemma 1 and Corollary 1. In the following, Burke’s original proof given in [2] (Subsection 3.1), Reich’s proof based on time reversibility given in [3] (Subsection 3.2), experimental results obtained by simulation concerning the departure process from the M/M/1 queue (Subsection 3.3), and, as more complicated examples illustrating the properties of two-fold randomness, Jackson networks of queues (Subsection 3.4) are scrutinized. As can be readily seen, although Burke’s theorem is considered a well-established result in the literature, an inevitable conclusion can be derived: Burke’s theorem is questionable.

3.1. Burke’s Proof Based on Differential Equations

Burke considered “the length of an arbitrary inter-departure interval” in his proof [2]. Expressed in the notation of the present paper, what Burke called “an arbitrary inter-departure interval” is (τ_j, τ_{j+1}) for an arbitrary j. Denote by Q(t) the state of the queue, which represents the queue size. Burke’s proof begins with calculating the probability of an event { Q(t) = k, τ_{j+1} − τ_j > t }, where t is an instant after τ_j, and τ_j is taken to be the instant of the last previous departure. Because τ_{j+1} − τ_j is compared with t, Burke chose τ_j as the origin on the time axis; namely, c_j departs at t = 0. During the interval (τ_j, τ_{j+1}), at least one customer arrives. Denote by t_1 the first arrival instant in this interval, where 0 < t_1 < τ_{j+1}.
Burke unconsciously grounded his proof on an unjustified assumption: τ_{j+1} − τ_j is a single, fixed random variable Z_j defined on the whole sample space that can take values together with Q(t) at any ω ∈ Ω. Burke did not mention this assumption in his paper; he took it for granted. However, this assumption is false, as shown in Section 2. Under this false assumption, Burke treated [Q(t), τ_{j+1} − τ_j] as a random vector. In Burke’s proof, a set of differential equations governing the probability of { Q(t) = k, Z_j > t } is established. Solving the equations yields [2]
P[Q(t) = k, Z_j > t] = P[Q(t) = k] P(Z_j > t),   k = 0, 1, ….
According to Burke’s interpretation of the above mathematical results, { Q(t) = k } and { Z_j > t } are independent events for any k ≥ 0, and P[Q(t) = k] is the equilibrium probability of { Q(t) = k }, which actually does not depend on t. Moreover, the inter-departure times are interpreted by Burke as following the distribution of inter-arrival times:
P(Z_j > t) = e^{−λt}.
However, although the differential equations established by Burke and their solutions are mathematically correct, the false assumption underlying Burke’s proof makes his theorem questionable; the theorem is nothing but the interpretation of the mathematical results, i.e., of the equations and their solutions. The mathematical results belong to pure mathematics; Burke’s theorem belongs to practical applications of the mathematical results. Interpretations of correct results in pure mathematics may be incorrect in practical applications, which may lead to questionable theorems in practice.
To see this, first consider Q(t) = 0, i.e., the server is idle at time t > 0. If Q(t) = 0, then 0 < t < t_1, and
{ Q(t) = 0, Z_j > t } ⊆ { Q(0) = 0, Z_j > t }.
In other words, if { Q(t) = 0, Z_j > t } occurs, then { Q(0) = 0, Z_j > t } also occurs. When the queue is empty at time t > 0, the queue must be empty in the closed interval [0, t], because c_j has departed at τ_j, which is the instant of the last previous departure taken to be the origin on the time axis by Burke, and because no customer has arrived, as required by 0 < t < t_1. According to Burke’s theorem, namely, the interpretation of the mathematical results obtained by solving the equations, Z_j follows the exponential distribution of inter-arrival times with parameter λ. However, according to the chronological order of events experienced by customers, whenever Q(t) = 0,
Z_j = I_j + S_{j+1}.
Clearly, I_j + S_{j+1} does not follow the distribution of inter-arrival times.
Now consider Q(t) > 0, i.e., the queue is not empty at t, where t satisfies the inequality t_1 < t < τ_{j+1}. Whenever Q(t) > 0,
Z_j = S_{j+1},
and S_{j+1} is not distributed as an inter-arrival time.
According to Lemma 1 and its corollary, for any j, there are only two kinds of realistic inter-departure times, T_1 and T_2, which are already defined on Ω_1(j) and Ω_2(j), respectively (see Equations (2.1) and (2.2) in Section 2); they cannot be expressed by any random variable on the whole sample space Ω. Such inter-departure times are terms of a sequence with two-fold randomness and cannot be characterized by a marginal distribution; their distributions cannot be determined by joint distributions of random vectors defined on Ω. As shown above, Burke’s proof leads to the contradiction revealed in Section 2,
P_{T_1}(B) = P_{T_2}(B) = P_{Z_j}(B).
The contradiction is due to the false assumption underlying Burke’s proof. Based on this assumption, τ_{j+1} − τ_j is considered by Burke as a random variable Z_j on Ω with a marginal distribution obtained by treating [Q(t), τ_{j+1} − τ_j] as a random vector, which leads to the following questionable interpretation of the mathematical results:
Σ_{k=0}^{∞} P[Q(t) = k, Z_j > t] = P(Z_j > t) = e^{−λt}.
However, Z_j is not defined on Ω and fails to describe realistic inter-departure times. The interpretation given by Burke completely ignores the dependence of departures on the state of the server. Such dependence is part of the constraints imposed by the physical systems to be studied based on the queuing model.

3.2. Reich’s Proof Based on Time-Reversible Markov Process

As a birth-death process in steady state (i.e., in statistical equilibrium), Q ( t ) is a time-reversible Markov process. If instants on the time axis are ordered in the reversed direction, Q ( t ) has a time-reversed counterpart Q * ( t ) , which is also a birth-death process. If Q ( t ) is used to model the size of a stable M/M/1 queue in steady state at time t, then “death” and “birth” represent “departure” and “arrival”, respectively. Consequently, intervals between consecutive deaths are inter-departure intervals. Reich provided a different proof of Burke’s theorem based on Q * ( t ) [3]. See also [4].
The proof based on Q*(t) is problematic. Although the arrival instants, when looking forwards in time, constitute a Poisson process, the instants τ_j, j ≥ 1, at which Q*(t) increases by 1 cannot form a Poisson process, because |τ_j − τ_{j+1}| = τ_{j+1} − τ_j. If the birth-death process Q(t) serves to model the queue size of the M/M/1 system, then times between consecutive deaths and times between consecutive births do not necessarily follow the same distribution in either direction of time.
During an open interval between two consecutive deaths (i.e., departures from the M/M/1 queue), the state of Q(t) may change m times, where m ≥ 0. A change of Q(t) during (τ_j, τ_{j+1}) is due to an arrival in (τ_j, τ_{j+1}). If m = 0, then Q(t) remains unchanged in (τ_j, τ_{j+1}), which implies Q(t) > 0 for τ_j ≤ t < τ_{j+1}, because at least one customer, c_{j+1}, is still in the system, and because no customer arrives in the interval:
Q(t) = k + 2,   t < τ_j
     = k + 1,   τ_j ≤ t < τ_{j+1}
     = k,       t = τ_{j+1}.
The inter-departure time τ_{j+1} − τ_j = S_{j+1}, because Q(t) decreases by 1 if and only if a customer has been served.
If m ≥ 1, then Q(t) changes at least once during (τ_j, τ_{j+1}). Let t_i, i = 1, 2, …, represent the instants at which Q(t) changes; t_i is the ith arrival instant in the interval. Thus, either (a) Q(t) > 0 for τ_j ≤ t < τ_{j+1}, or (b) Q(t) = 0 for τ_j ≤ t < t_1 < τ_{j+1} and Q(t) > 0 for t_1 ≤ t < τ_{j+1}:
Q(t) = k + 1,       t < τ_j
     = k,           τ_j ≤ t < t_1
     = k + 1,       t_1 ≤ t < t_2
     = k + 2,       t_2 ≤ t < t_3
       ⋮
     = k + m,       t_m ≤ t < τ_{j+1}
     = k + m − 1,   t = τ_{j+1}.
If k > 0, τ_{j+1} − τ_j = S_{j+1} still holds. However, if k = 0, then
Q(t) = 0,   τ_j ≤ t < t_1
     = 1,   t_1 ≤ t < τ_{j+1}
where t_1 is the arrival instant of c_{j+1}, and τ_{j+1} − τ_j consists of an idle time of the server (i.e., t_1 − τ_j = I_j) and a service time (i.e., τ_{j+1} − t_1 = S_{j+1}). Whatever value k takes, τ_{j+1} − τ_j does not follow the distribution of inter-arrival times. Similarly, for Q*(t), if m = 0,
Q*(t) = k,       t = τ_{j+1}
      = k + 1,   τ_{j+1} < t ≤ τ_j
      = k + 2,   t > τ_j
and |τ_j − τ_{j+1}| = S_{j+1}. When m ≥ 1,
Q*(t) = k + m − 1,   t = τ_{j+1}
      = k + m,       τ_{j+1} < t ≤ t_m
        ⋮
      = k + 2,       t_3 < t ≤ t_2
      = k + 1,       t_2 < t ≤ t_1
      = k,           t_1 < t ≤ τ_j
      = k + 1,       t > τ_j
and |τ_j − τ_{j+1}| = S_{j+1} if k > 0; otherwise |τ_j − τ_{j+1}| = I_j + S_{j+1}. In steady state, times between consecutive departures from the M/M/1 queue always follow two different distributions, P_{T_1} and P_{T_2} (see Equations (2.1) and (2.2) in Section 2), neither of which is identical to the distribution of inter-arrival times.
For a given t, Q(t) is independent of future arrivals after t. Time reversibility is used here to argue that Q(t) is also independent of past departures before t. Based on an analogy between Q(t) and Q*(t), the argument goes as follows: Q(t) is independent of future arrivals after t; departures are “arrivals” when looking backwards in time; because Q(t) and Q*(t) are statistically identical, Q(t) is independent of past departures before t.
However, the analogy between Q(t) and Q*(t) is not appropriate, and the argument based on it is problematic. Although Q(t) is independent of arrivals after t, both arrivals and departures prior to t determine Q(t): increases and decreases in Q(t) are due to arrivals and departures before t, respectively. For a stable M/M/1 queue in steady state, Q(t) necessarily depends on departures before t.
Any queuing model for solving problems in the real world must satisfy the constraints imposed by the physical systems to be studied based on the model. The reversed process Q*(t) is constructed in pure mathematics without considering the chronological order of events experienced by customers and is irrelevant to the departure process from the M/M/1 queue. By Corollary 1, (τ_{j+1} − τ_j)_{j≥1} is a stochastic sequence with two-fold randomness; its terms cannot be characterized by a marginal distribution. Time reversibility cannot change (τ_{j+1} − τ_j)_{j≥1} into a sequence of i.i.d. random variables on Ω. Thus, the arguments in queuing theory based on Q*(t) are questionable.
Questioning the arguments in queuing theory based on Q^*(t) is not to say that “a birth-death process Q(t) in statistical equilibrium has no time-reversed counterpart Q^*(t).” The birth-death process Q(t) is still time-reversible and has its time-reversed counterpart Q^*(t); Q^*(t) is simply not applicable to queuing models.

3.3. Experimental Results Concerning Counting Processes with Two-Fold Randomness

There are many experimental results obtained by simulation concerning departures from the M/M/1 queue. All of them are claimed to be in agreement with Burke’s theorem. However, as shown below, the experimental results are misinterpreted. Let μ < ∞ represent the parameter of the service-time distribution. The mean inter-arrival time and the mean service time are 1/λ and 1/μ, respectively. Using the notation of Section 2, for all j,
P[\Omega_1(j)] = 1 - \rho
and
P[\Omega_2(j)] = \rho
where 0 < ρ = λ/μ < 1. If Q_j = 0, then τ_{j+1} − τ_j = T_1, see Equation (2.1), and its probability density function (pdf) is
f_{T_1}(t) = \frac{\lambda \mu}{\mu - \lambda} \left( e^{-\lambda t} - e^{-\mu t} \right).
If Q_j > 0, then τ_{j+1} − τ_j = T_2, see Equation (2.2), and its pdf is f_{T_2}(t) = μ e^{−μt}.
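The two conditional densities can be checked numerically. The following is a minimal simulation sketch (not part of the original study) of a FIFO M/M/1 queue with hypothetical rates λ = 1 and μ = 2, using the departure recursion D_j = max(A_j, D_{j−1}) + S_j; each inter-departure time is classified by whether the previous departure left the queue empty, and the empirical means are compared with 1/λ + 1/μ (the mean under f_{T_1}) and 1/μ (the mean under f_{T_2}):

```python
import random

# Hypothetical parameters for illustration: λ = 1, μ = 2, so ρ = 0.5 < 1 (stable)
random.seed(12345)
lam, mu = 1.0, 2.0
n = 200_000  # number of customers simulated

# Departure recursion for a FIFO single-server queue: D_j = max(A_j, D_{j-1}) + S_j
A_next = random.expovariate(lam)   # arrival time of the first customer
D_prev = 0.0                       # departure time of the previous customer
empty_gaps, busy_gaps = [], []     # inter-departure times with Q_j = 0 / Q_j > 0
first = True
for _ in range(n):
    A_j = A_next
    A_next = A_j + random.expovariate(lam)
    D_j = max(A_j, D_prev) + random.expovariate(mu)
    if not first:
        gap = D_j - D_prev
        # the previous departure left the queue empty iff c_j arrived after it
        (empty_gaps if A_j > D_prev else busy_gaps).append(gap)
    first = False
    D_prev = D_j

mean_T1 = sum(empty_gaps) / len(empty_gaps)  # expected near 1/λ + 1/μ = 1.5
mean_T2 = sum(busy_gaps) / len(busy_gaps)    # expected near 1/μ = 0.5
frac_empty = len(empty_gaps) / (len(empty_gaps) + len(busy_gaps))  # near 1 − ρ
```

With these rates, the two empirical means should come out near 1.5 and 0.5, and the fraction of departures leaving the queue empty should be near 1 − ρ = 0.5, matching P[Ω_1(j)] = 1 − ρ.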
Let K t represent a time interval ( t , t + d t ] of an infinitesimal length d t . Write
H_j = \{ Q_j = 0,\ T_1 \in K_t \} \cup \{ Q_j > 0,\ T_2 \in K_t \}.
For a stable M/M/1 queue in steady state, a simple calculation yields
P(H_j) = P(Q_j = 0,\ T_1 \in K_t) + P(Q_j > 0,\ T_2 \in K_t) = \lambda e^{-\lambda t}\,dt.
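For completeness, the “simple calculation” can be written out; taking f_{T_1} and f_{T_2} as the densities of the inter-departure time on Ω_1(j) and Ω_2(j), respectively, and using P[Ω_1(j)] = 1 − ρ and P[Ω_2(j)] = ρ:

```latex
\begin{aligned}
P(H_j) &= P[\Omega_1(j)]\, f_{T_1}(t)\,dt + P[\Omega_2(j)]\, f_{T_2}(t)\,dt \\
       &= (1-\rho)\,\frac{\lambda\mu}{\mu-\lambda}\bigl(e^{-\lambda t}-e^{-\mu t}\bigr)\,dt
          + \rho\,\mu e^{-\mu t}\,dt \\
       &= \lambda\bigl(e^{-\lambda t}-e^{-\mu t}\bigr)\,dt + \lambda e^{-\mu t}\,dt
        = \lambda e^{-\lambda t}\,dt,
\end{aligned}
```

since ρ = λ/μ gives (1 − ρ)λμ/(μ − λ) = λ and ρμ = λ.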
The probability obtained by the calculation is indeed in agreement with the experimental results, but it cannot be interpreted as
P(Q_j = 0,\ Z_j \in K_t) + P(Q_j > 0,\ Z_j \in K_t) = P(Z_j \in K_t) = \lambda e^{-\lambda t}\,dt.
In the above interpretation, Z_j is treated, incorrectly, as a random variable representing τ_{j+1} − τ_j at each ω ∈ Ω.
By Lemma 1, τ_{j+1} − τ_j is not a random variable defined on the whole sample space Ω; thus, (Q_j, τ_{j+1} − τ_j) is not a random vector. It is incorrect to interpret Equation (3.2) as identical to Equation (3.3), because {Z_j ∈ K_t} ∉ A (see Section 2).
Simulation studies are based on the strong law of large numbers, and the sequence studied by simulation should consist of i.i.d. random variables on Ω. As shown in Section 2, (τ_{j+1} − τ_j)_{j≥1} is a stochastic sequence with two-fold randomness; it can be randomly divided into two subsequences:
[\tau_{j+1}(\omega) - \tau_j(\omega)]_{j \ge 1} = [T_1(\omega)]_{j \in M_1(\omega)} \cup [T_2(\omega)]_{j \in M_2(\omega)}
which holds at any given ω ∈ Ω. Accordingly, (H_j)_{j≥1} can also be divided randomly into two subsequences at the same sample point:
[H_j(\omega)]_{j \ge 1} = [Q_j(\omega) = 0,\ T_1(\omega) \in K_t]_{j \in M_1(\omega)} \cup [Q_j(\omega) > 0,\ T_2(\omega) \in K_t]_{j \in M_2(\omega)}.
Consequently, it is impossible to construct a sequence of i.i.d. random variables based on (H_j)_{j≥1}, and it is not legitimate to apply the strong law to study (H_j)_{j≥1}. Nevertheless, if I(H_j), the indicator of H_j, is considered instead of H_j, then I(H_1), I(H_2), … constitute a sequence of i.i.d. random variables, all defined on Ω. By the strong law,
\lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^{n} I(H_j) = E[I(H_1)]
with probability one, where
E[I(H_1)] = P(Q_1 = 0,\ T_1 \in K_t) + P(Q_1 > 0,\ T_2 \in K_t) = \lambda e^{-\lambda t}\,dt.
The above analysis not only explains why Equation (3.2) is in agreement with the experimental results of simulation but also reveals the essential difference between Equations (3.2) and (3.3); the interpretation given by Equation (3.3) does not make sense in practice. To see this, consider an arbitrarily given j. Were the interpretation meaningful in applications, there would have to be a sample point ω ∈ Ω at which, when c_j departs, Z_j takes a value corresponding to both Q_j(ω) = 0 and Q_j(ω) > 0 simultaneously. Nothing of the sort can happen in the real world.
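The indicator-average argument can likewise be illustrated by simulation. The sketch below (not from the original study; hypothetical rates λ = 1, μ = 2, with a finite window (t, t+dt] standing in for the infinitesimal K_t) estimates the long-run fraction of customers for which H_j occurs and compares it with ∫_t^{t+dt} λe^{−λs} ds, the finite-window analogue of λe^{−λt} dt:

```python
import math
import random

random.seed(2024)
lam, mu = 1.0, 2.0     # hypothetical rates, ρ = 0.5 (stable queue)
n = 200_000            # number of customers simulated
t, dt = 1.0, 0.2       # finite window (t, t + dt] replacing the infinitesimal K_t

# Departure recursion D_j = max(A_j, D_{j-1}) + S_j for a FIFO M/M/1 queue
A_next = random.expovariate(lam)
D_prev = 0.0
hits, gaps = 0, 0
first = True
for _ in range(n):
    A_j = A_next
    A_next = A_j + random.expovariate(lam)
    D_j = max(A_j, D_prev) + random.expovariate(mu)
    if not first:
        gaps += 1
        if t < D_j - D_prev <= t + dt:  # indicator I(H_j) for this window
            hits += 1
    first = False
    D_prev = D_j

frac = hits / gaps
# ∫_t^{t+dt} λ e^{−λs} ds, the finite-window analogue of λ e^{−λt} dt
expected = math.exp(-lam * t) - math.exp(-lam * (t + dt))
```

The empirical fraction should approach the integral of λe^{−λs} over the window, consistent with the strong-law calculation above.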

3.4. More Complicated Examples

Consider a Jackson network consisting of J single-server queues denoted by Q_j, j = 1, 2, …, J. At each queue, there is an infinite waiting room, and customers are served by a work-conserving server. The mean service time is 1/μ_j at Q_j. By Jackson’s theorem [5], these queues are stable if λ_j < μ_j, where λ_j is the total arrival rate of customers at Q_j, determined by the following equations.
\lambda_j = \gamma_j + \sum_{i=1}^{J} \lambda_i P_{ij}, \quad j = 1, 2, \ldots, J.
In Equation (3.4), γ_j is the arrival rate at Q_j of customers from outside the system, and P_{ij} is the probability that a customer joins Q_j immediately after leaving Q_i. So the arrival rate at Q_j of customers from Q_i is λ_i P_{ij}. All proofs of Jackson’s theorem, including Jackson’s original proof and the proof via time reversibility, depend on the stability condition λ_j < μ_j obtained by solving Equation (3.4).
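For illustration, the traffic equations (3.4) can be solved by fixed-point iteration. The sketch below uses a hypothetical three-queue routing matrix P and external rates γ (not taken from the text); iterating λ ← γ + Pᵀλ converges because every row of P sums to less than one, so the spectral radius of P is below one:

```python
# Hypothetical 3-queue network: substochastic routing matrix P and external rates γ
P = [[0.0, 0.5, 0.3],
     [0.0, 0.0, 0.6],
     [0.1, 0.0, 0.0]]
gamma = [1.0, 0.5, 0.0]
J = len(gamma)

# Fixed-point iteration of the traffic equations λ_j = γ_j + Σ_i λ_i P_ij;
# convergence follows because P is strictly substochastic (spectral radius < 1)
lam = gamma[:]
for _ in range(500):
    lam = [gamma[j] + sum(lam[i] * P[i][j] for i in range(J)) for j in range(J)]

mu = [2.0, 2.0, 2.0]  # hypothetical service rates
stable_condition = all(l < m for l, m in zip(lam, mu))  # λ_j < μ_j for every queue
```

For this data the fixed point is λ_1 = 1.03/0.94 ≈ 1.096, λ_2 ≈ 1.048, λ_3 ≈ 0.957, so the condition λ_j < μ_j of Jackson’s theorem is met for all three queues.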
According to Jackson’s interpretation (e.g., [5,6]), after the network has been in operation for an infinitely long time, each queue in the network behaves “as if” (i) its size were a random variable characterized by a marginal distribution, and (ii) all such random variables were independent, possessing a joint distribution equal to the product of their marginal distributions. However, the phrase “as if” makes Jackson’s interpretation questionable, because it is inconsistent with the standard notion of independent random variables: if the joint distribution of two random variables can be expressed as the product of their marginal distributions, the random variables are independent, not merely behaving as if they were independent. The inconsistency arises from λ_j < μ_j, j = 1, 2, …, J, the stability condition required by Jackson’s theorem, which relies on an unjustified assumption (see below) and does not necessarily imply stability of every queue in the network. Consequently, the assumption makes Jackson’s theorem irrelevant to physical systems modeled by Jackson networks of queues, although the solution of Equation (3.4) may not be difficult to find.
Consider two queues Q_1 and Q_2 in tandem, where Q_1 is a stable M/M/1 queue in steady state. Customers first arrive at Q_1 according to a Poisson process with rate λ. After being served at Q_1, they join Q_2 immediately; Q_2 also has an infinite waiting room. The service times a customer spends at Q_1 and Q_2 are mutually independent and follow exponential distributions with finite service rates μ_1 > λ and μ_2 > λ. In addition, service times at Q_2 are not only mutually independent but also independent of the arrivals at both Q_1 and Q_2.
According to Burke’s theorem, departures from Q_1 constitute a Poisson process with rate λ. Thus, Q_2 behaves as if it were a stable M/M/1 queue isolated from Q_1, and Burke’s theorem allows the two queues to be treated as a Jackson network of queues [6], which will be used here as a counterexample. This simple counterexample may help the reader see in a straightforward way why Jackson’s theorem is questionable, and why time reversibility cannot explain away the inconsistency mentioned above.
By Jackson’s theorem [5], the numbers of customers in the two queues in tandem are independent, and follow a product-form joint distribution after the network has been in operation for an infinitely long time. For such a network of two tandem queues, Equation (3.4) is simply
λ 1 = λ 2 = λ
which is also a consequence of Burke’s theorem.
In the literature, Q_2 is claimed to be stable; the claimed stability follows from Equation (3.5). However, because times between consecutive departures (i.e., τ_{j+1} − τ_j) from Q_1 are times between consecutive arrivals at Q_2, and because τ_{j+1} − τ_j cannot be described by any single, fixed random variable, the number of customers in Q_2 cannot be described by any single, fixed random variable either, even if Q_1 is stable and has reached its steady state.
The unjustified assumption underlying Equations (3.4) and (3.5) is this: times between successive departures from a stable queue in steady state follow a marginal distribution. This assumption is the basis for defining the arrival rate at a queue for customers coming from inside the system. By Lemma 1, for the network of the two queues in tandem, the assumption is problematic unless the server at Q_1 has an infinite service capacity; only in that unrealistic scenario does treating Q_2 as a stable M/M/1 queue isolated from Q_1 avoid the inconsistency. Because service capacities in the real world must be finite, it is illegitimate to use λ, the arrival rate at the first queue, to characterize the arrivals at the second queue. As terms of a stochastic sequence with two-fold randomness, the τ_{j+1} − τ_j do not have a marginal distribution; time reversibility cannot change this fact (see also Subsection 3.2).
In general, as long as the inter-arrival times at a queue are times between consecutive departures from another queue, no single, fixed random variable can describe those inter-arrival times. Consequently, no single, fixed random vector can describe the behavior of a Jackson network of queues, regardless of whether the structure of the network is simple or complex. In a Jackson network, with or without feedback paths, at least one queue is not stable. That is, statistical equilibrium with respect to the numbers of customers in all the queues in the network as a whole does not exist. Therefore, Jackson’s theorem is indeed questionable, and no Jackson network is stable.
By definition [1], if the number of customers in a queue remains finite after the queue has been in operation for an infinitely long time, the queue is sub-stable. A stable queue is of course sub-stable, but a sub-stable queue is not necessarily stable. If a queue is sub-stable but not stable, it is properly sub-stable: the number of customers in the queue is always finite, but its distribution does not converge to a limit. That is, the behavior of the queue cannot be described by any single, fixed random variable. Note that “not stable” does not mean “unstable”; the latter means “not sub-stable,” and the number of customers in an unstable queue grows without bound as time approaches infinity. As shown above, in a Jackson network consisting of at least two queues, a properly sub-stable queue is mistaken for a stable queue.
It is possible to generalize the results concerning Jackson networks of queues. Consider a system of work-conserving, single-server queues in tandem. Each queue has a finite service capacity and an infinite waiting room. At each queue, service times are generally distributed, mutually independent, and independent of its arrivals. The first queue is a stable GI/GI/1 queue in steady state. All customers arrive from outside of the system at the first queue and leave the system from the last queue after being served there. Any queue that is neither the first queue nor the last queue is called a downstream queue. After receiving service at the first queue or a downstream queue, a customer goes immediately to the next downstream queue. All downstream queues and the last queue in the system are not stable; they are merely properly sub-stable. Consequently, the system of queues in tandem as a whole is not stable, in the sense that its behavior cannot be described by any single, fixed random vector. For the downstream queues and the last queue, two different notions, proper sub-stability and stability, are confused in the literature; the confusion leads to mistaking properly sub-stable queues for stable queues. Such proper sub-stability, due to departure processes with two-fold randomness, is entirely ignored in the existing literature.
Another possible generalization is to apply the above analysis to queuing networks with more general topological structures. For example, any queue in a network may have multiple work-conserving servers with finite service capacities, and its waiting room need not be infinite. External arrivals at a queue need not form a renewal process. Service times may follow different distributions at different queues in the network, and external arrivals and service times may also be dependent.

4. Conclusions

Phenomena with multiple randomness appear naturally in random experiments concerning practical applications of science and engineering disciplines. Many interesting random phenomena are modeled by counting processes, which are widely applied to solve practical problems in the real world. However, counting processes with multiple randomness, which differ essentially from the known stochastic processes in the existing literature, are mistaken for ordinary counting processes, because correctly derived mathematical results are used to interpret, incorrectly, the corresponding experimental results, leading to theorems that are questionable in practice. In this article, a general definition of multiple randomness in counting processes is given, and specific examples in queuing theory are provided to illustrate the properties of two-fold randomness in counting processes. In addition to the examples in queuing theory, more counting processes with m-fold randomness (m ≥ 2) may be found in other application fields of science and engineering and in other stochastic processes used by scientists and engineers, which may help them explain weird phenomena and solve puzzling problems in the real world.

Funding

This research received no funds or grants.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Loynes, R.M. The stability of a queue with non-independent inter-arrival and service times. Proc. Camb. Philos. Soc. 1962, 58, 497–520.
  2. Burke, P.J. The output of a queuing system. Operations Research 1956, 4, 699–704.
  3. Reich, E. Waiting times when queues are in tandem. Ann. Math. Statist. 1957, 28, 768–773.
  4. Kelly, F.P. Reversibility and Stochastic Networks; John Wiley & Sons: New York, 1979.
  5. Jackson, J.R. Networks of waiting lines. Operations Research 1957, 5, 518–521.
  6. Jackson, J.R. Job-shop like queueing systems. Management Science 1963, 10, 131–142.