Preprint · Article · This version is not peer-reviewed.

On Dobrushin’s Central Limit Theorem for Non-homogeneous Markov Chains in the Array Scheme, II

Submitted: 23 September 2025 · Posted: 24 September 2025


Abstract
This paper proposes a new sufficient condition for the Central Limit Theorem to hold for non-homogeneous Markov chains in an array scheme, even when the Markov-Dobrushin ergodicity coefficient violates the classical Dobrushin condition.

1. Introduction

An important central limit theorem (CLT) for non-homogeneous Markov chains was established by R.L. Dobrushin in 1956, a result that has since attracted considerable attention. This paper contributes a further extension of this celebrated theorem, which differs from the one proposed in our recent preprint (arXiv:2406.16156). The initial work in this direction by A.A. Markov was subsequently advanced by Bernstein, Sapogov, and Linnik, culminating in Dobrushin’s seminal publication. Dobrushin ultimately showed that, under an additional non-degeneracy condition on the variances, the sufficient condition

$$\lim_{n\to\infty} n^{1/3}\alpha_n = \infty$$

is also very close to necessary. Here, $\alpha_n$ denotes the Dobrushin (or Markov-Dobrushin) ergodicity coefficient. The optimality of the rate $n^{1/3}$ was demonstrated by the Bernstein-Dobrushin example. While extensions exist for Markov chains with memory and the results have been simplified using martingale methods, this paper employs an adapted martingale approach to establish a CLT under new, slightly modified assumptions. Our motivation stems from the central role of the CLT in mathematical statistics.
The paper consists of five sections: Introduction, The Setting, Main Results, Auxiliary Results, and Proof of the Theorem.

2. The Setting

Consider a non-homogeneous Markov chain defined on a probability space $(\Omega, \mathcal F, \mathbb P)$. Let $\pi_{i,i+1}(x, dy)$ denote the Markov transition kernel governing the evolution from time $i$ to $i+1$ on a measurable space $(X, \mathcal B(X))$, which is assumed to satisfy the standard conditions for a Markov process (cf. [5]). The corresponding transition probabilities are defined by the relation

$$\mathbb P[X_{i+1} \in A \mid X_i = x] \overset{a.s.}{=} \pi_{i,i+1}(x, A),$$

where $\{X_i : 1 \le i \le n\}$ is the trajectory of the process on $[1, n]$. In particular, if the initial distribution is $X_1 \sim \mu$, then the distribution at time $k \ge 2$ is given by the measure $\mu\,\pi_{1,2}\,\pi_{2,3}\cdots\pi_{k-1,k}$.
Due to the nature of conditional expectations, certain equalities and inequalities hold only almost surely (a.s.).
For $i < j$, let us denote by $\pi_{i,j}$ the multi-step transition kernel, which by the Chapman-Kolmogorov equation has the form

$$\pi_{i,j} = \pi_{i,i+1}\,\pi_{i+1,i+2}\cdots\pi_{j-1,j}.$$
Definition 1. 
The Markov-Dobrushin (MD) coefficient¹ $\delta(\pi)$ is defined by any of the three equivalent expressions

$$\delta(\pi) = \sup_{x_1, x_2 \in X,\, A \in \mathcal B(X)} |\pi(x_1, A) - \pi(x_2, A)| = \frac{1}{2}\sup_{x_1, x_2 \in X,\, \|f\|_B \le 1}\Big|\int f(y)\,[\pi(x_1, dy) - \pi(x_2, dy)]\Big| = \sup_{x_1, x_2 \in X,\, u \in U} |(\pi u)(x_1) - (\pi u)(x_2)|,$$

where $U = \{u(\cdot): \sup_{y_1, y_2}|u(y_1) - u(y_2)| \le 1\}$.
Here $\|f\|_B = \sup_{x \in X}|f(x)|$. It follows from the definition that $0 \le \delta(\pi) \le 1$, and that $\delta(\pi) = 0$ if and only if the measure $\pi(x, dy)$ does not depend on $x$; in the latter case the second random variable does not depend on the first one. The kernel $\pi$ will be called non-degenerate if $0 \le \delta(\pi) < 1$. To denote integrals, standard notations will be used:

$$(\mu\pi)(A) = \int \mu(dx)\,\pi(x, A), \qquad (\pi u)(x) = \int u(y)\,\pi(x, dy).$$
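For a finite state space, a transition kernel is a stochastic matrix and $\delta(\pi)$ reduces to half the largest total-variation distance between two rows. The following short Python sketch (illustrative only, not part of the paper) computes it directly from the first expression in the definition:

```python
def dobrushin_delta(P):
    """Markov-Dobrushin coefficient of a finite stochastic matrix P:
    half the maximal L1 (total-variation) distance between two rows."""
    m = len(P)
    return max(
        0.5 * sum(abs(P[x1][y] - P[x2][y]) for y in range(m))
        for x1 in range(m) for x2 in range(m)
    )

# delta = 0 iff all rows coincide (the next state ignores the current one);
# delta = 1 for a deterministic permutation kernel.
P_const = [[0.3, 0.7], [0.3, 0.7]]
P_flip = [[0.0, 1.0], [1.0, 0.0]]
print(dobrushin_delta(P_const))  # 0.0
print(dobrushin_delta(P_flip))   # 1.0
```

The two printed values illustrate the degenerate extremes $\delta(\pi) = 0$ and $\delta(\pi) = 1$ discussed above.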
Let

$$\mathrm{Osc}(u) := \sup_{x_1, x_2}|u(x_1) - u(x_2)|.$$

This semi-norm will be applied to random variables (see Lemmata 13, 14), interpreted in the sense of the essential supremum norm. That is, for any random variable $X$ on $(\Omega, \mathcal F, \mathbb P)$, we define

$$\mathrm{Osc}(X) := \operatorname*{ess\,sup}_{\omega \in \Omega} X(\omega) - \operatorname*{ess\,inf}_{\omega \in \Omega} X(\omega).$$
Further, for any transition kernels $\pi_1, \pi_2$ and the two-step transition kernel $\pi_1\pi_2$ (the first transition is according to the kernel $\pi_1$, the second according to $\pi_2$), the following inequality holds true:

$$\delta(\pi_1\pi_2) \le \delta(\pi_1)\,\delta(\pi_2).$$

Similarly,

$$\delta(\pi_{i,j}) \le \delta(\pi_{i,i+1})\,\delta(\pi_{i+1,i+2})\cdots\delta(\pi_{j-1,j}), \qquad \delta(\pi_{i,j}) \le (1 - \alpha_n)^{j-i}.$$

Accordingly,

$$\alpha(\pi_{i,j}) = 1 - \delta(\pi_{i,j}).$$
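The contraction inequality $\delta(\pi_1\pi_2) \le \delta(\pi_1)\,\delta(\pi_2)$ can be checked numerically in the finite case, where kernel composition is just matrix multiplication. A hedged sketch (random matrices for illustration, not from the paper):

```python
import random

def matmul(A, B):
    """Compose two finite kernels (stochastic matrices): (AB)[x][y]."""
    m = len(A)
    return [[sum(A[x][k] * B[k][y] for k in range(m)) for y in range(m)]
            for x in range(m)]

def dobrushin_delta(P):
    """Half the maximal L1 distance between two rows of P."""
    m = len(P)
    return max(0.5 * sum(abs(P[a][y] - P[b][y]) for y in range(m))
               for a in range(m) for b in range(m))

def random_stochastic(m, rng):
    """Random stochastic matrix: nonnegative rows normalized to sum 1."""
    rows = []
    for _ in range(m):
        w = [rng.random() for _ in range(m)]
        s = sum(w)
        rows.append([x / s for x in w])
    return rows

rng = random.Random(0)
for _ in range(200):
    P1, P2 = random_stochastic(3, rng), random_stochastic(3, rng)
    lhs = dobrushin_delta(matmul(P1, P2))
    rhs = dobrushin_delta(P1) * dobrushin_delta(P2)
    assert lhs <= rhs + 1e-12
print("delta(P1 P2) <= delta(P1) delta(P2) on 200 random pairs")
```

Of course such a check is no proof; it merely illustrates the submultiplicativity on which the bound $\delta(\pi_{i,j}) \le (1-\alpha_n)^{j-i}$ rests.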
In what follows, $EY$ and $D[Y]$ stand for the expectation and the variance of the random variable $Y$, respectively.
We work in an array scheme: for each $n \ge 1$, let $\{X_i^{(n)} : 1 \le i \le n\}$ be a Markov chain on $X$ with transition kernels $\{\pi_{i,i+1}^{(n)}(x, dy) : 1 \le i \le n-1\}$ and initial distribution $\mu^{(n)}$. For simplicity, all processes $(X^{(n)})$ are defined on a common probability space $(\Omega, \mathcal F, \mathbb P)$, though this assumption can be relaxed. Define the minimal ergodicity coefficient as

$$\alpha_n = \min_{1 \le i \le n-1} \alpha(\pi_{i,i+1}^{(n)}).$$

Further, let $\{f_i^{(n)} : X \to \mathbb R,\ 1 \le i \le n\}$ be real-valued functions on $X$. For any $n \ge 1$ let

$$S_n := \sum_{i=1}^n f_i^{(n)}(X_i^{(n)}).$$
The main result of Dobrushin’s paper, along with a key corollary, is presented in [13] as follows (we state it as a proposition to distinguish it from our main result, which will be termed a theorem). Here, “⇒” denotes weak convergence.
Proposition.
Let $C_n := \sup_{1 \le i \le n}\sup_{x \in X}|f_i^{(n)}(x)| < \infty$. In this case, if

$$\lim_{n\to\infty} C_n^2\,\alpha_n^{-3}\Big(\sum_{i=1}^n D\big(f_i^{(n)}(X_i^{(n)})\big)\Big)^{-1} = 0,$$

then the CLT holds true in the array scheme:

$$\frac{S_n - E(S_n)}{\sqrt{D(S_n)}} \Rightarrow \mathcal N(0, 1).$$
Corollary.
Under the assumptions of the proposition, if $\sup_n C_n = C < \infty$ and $\inf_{i,n} D\big(f_i^{(n)}(X_i^{(n)})\big) \ge c > 0$, and if

$$\lim_{n\to\infty} n^{1/3}\alpha_n = \infty,$$

then weak convergence (5) holds.

3. Main Results

Fix $n$ and define $(\beta_k^{(n)}, 1 \le k \le n)$ to be a non-random sequence of ones and zeros, like $(110010\ldots01)$, and let, for $1 \le i \le j \le n$,

$$\kappa_j^\beta = \sum_{k=1}^j 1(\beta_k^{(n)} = 1).$$

Denote

$$\alpha_n^\beta := \min_{1 \le i \le n-1:\ \beta_i^{(n)} = 1} \alpha(\pi_{i,i+1}^{(n)}).$$

Then (3) may be relaxed to

$$\delta(\pi_{i,j}) \le (1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta}.$$
Definition.
We say that the condition $(H_\beta)$ is met if there exist $m_0 > 0$ and $c > 0$ such that for any $n$ the sequence $(\beta_k^{(n)}, 1 \le k \le n)$ satisfies the following property: for any pair $i < j$ with $j - i \ge m_0$, the inequality

$$\kappa_j^\beta - \kappa_i^\beta \ge c\,(j - i)$$

is satisfied.
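Condition $(H_\beta)$ is a purely combinatorial density requirement on the 0-1 sequence: every window of length at least $m_0$ must contain a proportion of ones bounded below by $c$. A small illustrative checker (the sample parameters $m_0 = 2$, $c = 1/4$ are ours, not the paper's):

```python
def kappa(beta):
    """Prefix counts of ones: kappa[j] = #{k <= j : beta_k = 1}, kappa[0] = 0."""
    out = [0]
    for b in beta:
        out.append(out[-1] + (b == 1))
    return out

def satisfies_H_beta(beta, m0, c):
    """Check kappa_j - kappa_i >= c*(j - i) for all 0 <= i < j <= n with j - i >= m0."""
    k = kappa(beta)
    n = len(beta)
    return all(k[j] - k[i] >= c * (j - i)
               for i in range(n + 1) for j in range(i + m0, n + 1))

# The alternating sequence 1,0,1,0,... satisfies (H_beta) with m0 = 2, c = 1/4:
# any window of length L >= 2 contains at least floor(L/2) >= L/4 ones.
print(satisfies_H_beta([1, 0] * 10, m0=2, c=0.25))  # True
# An all-zero sequence fails for any positive c.
print(satisfies_H_beta([0] * 20, m0=2, c=0.25))     # False
```

Intuitively, $(H_\beta)$ guarantees that the "good" transitions (those with $\beta_i^{(n)} = 1$, over which $\alpha_n^\beta$ is taken) occur with positive density along the chain.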
In what follows such a sequence $(\beta_k^{(n)}, 1 \le k \le n)$ is fixed for each $n$. The next theorem is the main result of the paper.
Theorem.
Let the assumption $(H_\beta)$ be met, and let also

$$\sup_{1 \le i \le n}\sup_{x \in X}|f_i^{(n)}(x)| \le C_n < \infty.$$

Then, if the condition

$$\lim_{n\to\infty} C_n^2\,\alpha_n^{-1}(\alpha_n^\beta)^{-2}\Big(\sum_{i=1}^{n-1}\beta_i^{(n)}\,D\big(f_{i+1}^{(n)}(X_{i+1}^{(n)})\big)\Big)^{-1} = 0$$

is satisfied, then $C_n^2 \ll D(S_n)$ and the CLT holds true:

$$\frac{S_n - E(S_n)}{\sqrt{D(S_n)}} \Rightarrow \mathcal N(0, 1).$$
Corollary.
If $\sup_n C_n = C < \infty$ and

$$\inf_{i,n} D\big(f_i^{(n)}(X_i^{(n)})\big) \ge c > 0,$$

and condition (8) along with

$$\lim_{n\to\infty} n\,\alpha_n\,(\alpha_n^\beta)^2 = \infty$$

hold, then convergence (11) is valid.

4. Auxiliary Results

The following auxiliary results are collected in this section. Unless stated otherwise, they are reproduced without proof from [7] and [13]. However, proofs are provided for Lemmata 8, 12, and 14, because, despite their simplicity, they involve new calculations that are crucial for proving the main result. The notation used is consistent with that in [13]. In particular, the essential supremum norm of a random variable $Z$ is defined as

$$\|Z\|_{L_\infty} := \operatorname*{ess\,sup}_{\omega \in \Omega}|Z(\omega)|.$$
Proposition ([7,10]).
Let for any $n \ge 1$ the process $\{W_i^{(n)} : 0 \le i \le n\}$ be a martingale with respect to a filtration $(\mathcal G_i^{(n)})$, $W_0^{(n)} = 0$, and let $\xi_i^{(n)} := W_i^{(n)} - W_{i-1}^{(n)}$. If

$$\max_{1 \le i \le n}\|\xi_i^{(n)}\|_{L_\infty} \to 0, \qquad \sum_{i=1}^n E\big[(\xi_i^{(n)})^2 \mid \mathcal G_{i-1}^{(n)}\big] \to 1 \ \text{in } L_2,$$

or, in other words,

$$\max_{1 \le i \le n}\operatorname*{ess\,sup}_{\omega}|\xi_i^{(n)}(\omega)| \to 0, \qquad E\Big(\sum_{i=1}^n E\big[(\xi_i^{(n)})^2 \mid \mathcal G_{i-1}^{(n)}\big] - 1\Big)^2 \to 0,$$

then the weak convergence takes place:

$$W_n^{(n)} \Rightarrow \mathcal N(0, 1).$$
Following [13], we assume without loss of generality that the functions $f_i^{(n)}$ are centered:

$$E[f_i^{(n)}(X_i^{(n)})] = 0, \qquad 1 \le i \le n,\ n \ge 1.$$

Define

$$Z_k^{(n)} := \sum_{i=k}^n E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}],$$

so that

$$Z_k^{(n)} = \begin{cases} f_k^{(n)}(X_k^{(n)}) + \sum_{i=k+1}^n E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}], & 1 \le k \le n-1, \\ f_n^{(n)}(X_n^{(n)}), & k = n. \end{cases}$$

Then, for any $1 \le k \le n-1$, the representation holds:

$$f_k^{(n)}(X_k^{(n)}) = Z_k^{(n)} - E[Z_{k+1}^{(n)} \mid X_k^{(n)}].$$

Furthermore, for any $2 \le k \le n-1$, this expression can be equivalently rewritten as

$$\big(Z_k^{(n)} - E[Z_k^{(n)} \mid X_{k-1}^{(n)}]\big) + \big(E[Z_k^{(n)} \mid X_{k-1}^{(n)}] - E[Z_{k+1}^{(n)} \mid X_k^{(n)}]\big).$$

Consequently, the sum $S_n$ can be expressed in martingale form as follows:

$$S_n = \sum_{i=1}^n f_i^{(n)}(X_i^{(n)}) = Z_1^{(n)} + \sum_{k=2}^n \big[Z_k^{(n)} - E[Z_k^{(n)} \mid X_{k-1}^{(n)}]\big].$$
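On a finite state space, the identity $f_k^{(n)}(X_k^{(n)}) = Z_k^{(n)} - E[Z_{k+1}^{(n)} \mid X_k^{(n)}]$ underlying this martingale form is a pointwise algebraic fact, which can be verified exactly by computing each $Z_k$ directly from its definition via the Chapman-Kolmogorov products. A self-contained numerical sketch (random chain and functions, illustrative only):

```python
import random

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def matvec(P, f):
    """(P f)(x) = sum_y P(x, y) f(y)."""
    return [sum(P[x][y] * f[y] for y in range(len(f))) for x in range(len(P))]

def random_stochastic(m, rng):
    rows = []
    for _ in range(m):
        w = [rng.random() for _ in range(m)]
        s = sum(w)
        rows.append([x / s for x in w])
    return rows

rng = random.Random(1)
n, S = 5, 3                       # chain length, state-space size (0-based times)
P = [random_stochastic(S, rng) for _ in range(n - 1)]   # P[k]: time k -> k+1
f = [[rng.uniform(-1, 1) for _ in range(S)] for _ in range(n)]

# Z_k(x) = sum_{i=k}^{n-1} (P_{k,i} f_i)(x), where P_{k,i} = P[k] ... P[i-1]
# is built via the Chapman-Kolmogorov products (P_{k,k} = identity).
Z = []
for k in range(n):
    Pki = [[float(x == y) for y in range(S)] for x in range(S)]
    Zk = list(f[k])
    for i in range(k + 1, n):
        Pki = matmul(Pki, P[i - 1])
        pv = matvec(Pki, f[i])
        Zk = [Zk[x] + pv[x] for x in range(S)]
    Z.append(Zk)

# Pointwise identity f_k(x) = Z_k(x) - E[Z_{k+1} | X_k = x], which makes
# S_n a sum of martingale differences plus the initial term Z_1.
for k in range(n - 1):
    PZ = matvec(P[k], Z[k + 1])
    assert all(abs(f[k][x] - Z[k][x] + PZ[x]) < 1e-12 for x in range(S))
print("decomposition identity verified")
```

The check succeeds for any choice of kernels and functions, since the identity does not rely on centering or on any ergodicity assumption.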
As emphasized in [13], this transformation originates from the seminal paper [6] on a "simple" CLT for Markov chains. Given that all terms on the right-hand side of this representation are uncorrelated, it follows that

$$D(S_n) = \sum_{k=2}^n D\big(Z_k^{(n)} - E[Z_k^{(n)} \mid X_{k-1}^{(n)}]\big) + D(Z_1^{(n)}).$$

Let

$$\xi_k^{(n)} = \frac{1}{\sqrt{D(S_n)}}\big[Z_k^{(n)} - E[Z_k^{(n)} \mid X_{k-1}^{(n)}]\big], \qquad M_k^{(n)} = \sum_{\ell=2}^k \xi_\ell^{(n)}.$$

Then the process $M_k^{(n)}$ is a martingale with respect to the filtration $\mathcal F_k^{(n)} = \sigma\{X_\ell^{(n)} : 1 \le \ell \le k\}$. The central limit theorem will be established by approximating the normalized sum $S_n/\sqrt{D(S_n)}$ by the terminal martingale value $M_n^{(n)}$ and then applying a martingale CLT, following the approach in [13] (see Proposition 7). It therefore suffices to verify the two conditions in (15).
Further, by $f_i^{(n)}\,\pi_{i,j}f_j^{(n)}$ we denote the product of the functions $f_i^{(n)}$ and $\pi_{i,j}f_j^{(n)}$. Notice that for any $i < j$, $n$, and function $f_j^{(n)}$,

$$\mathrm{Osc}\big(\pi_{i,j}(f_j^{(n)})\big) \le \delta(\pi_{i,j})\,\mathrm{Osc}(f_j^{(n)}).$$

Lemma.
Under the conditions of the theorem, for any $1 \le i \le j \le n$,

$$\|\pi_{i,j}f_j^{(n)}\|_B \le 2C_n(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta}, \qquad \mathrm{Osc}\big(\pi_{i,j}(f_j^{(n)})^2\big) \le 2C_n^2(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta};$$

and also for $1 \le l < i \le j \le n$,

$$\mathrm{Osc}\big(\pi_{l,i}(f_i^{(n)}\,\pi_{i,j}f_j^{(n)})\big) \le 6C_n^2(1 - \alpha_n^\beta)^{\kappa_i^\beta - \kappa_l^\beta}(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta} = 6C_n^2(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_l^\beta}.$$
Proof.
Since $\|f_j^{(n)}\| \le C_n$, then $\mathrm{Osc}(f_j^{(n)}) \le 2C_n$. It follows from the definitions (2) of the coefficients $\delta$ and $\alpha_n^\beta$, and from (3), that

$$\mathrm{Osc}(\pi_{i,j}f_j^{(n)}) \le \delta(\pi_{i,j})\,\mathrm{Osc}(f_j^{(n)}) \le 2C_n(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta}.$$

Since $E[(\pi_{i,j}f_j^{(n)})(X_i^{(n)})] = E[f_j^{(n)}(X_j^{(n)})] = 0$, the first inequality in (21) follows from the bounds

$$\|\pi_{i,j}f_j^{(n)}\|_B \le \mathrm{Osc}(\pi_{i,j}f_j^{(n)}) \le 2C_n(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta}.$$

The second inequality in (21) is established similarly. Indeed, due to (20) we have

$$\mathrm{Osc}\big(\pi_{i,j}(f_j^{(n)})^2\big) \le \delta(\pi_{i,j})\,\mathrm{Osc}\big((f_j^{(n)})^2\big) \le 2C_n^2(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta}.$$

To prove the third inequality of the lemma, we estimate using (21):

$$\mathrm{Osc}\big(\pi_{l,i}(f_i^{(n)}\,\pi_{i,j}f_j^{(n)})\big) \le (1 - \alpha_n^\beta)^{\kappa_i^\beta - \kappa_l^\beta}\,\mathrm{Osc}\big(f_i^{(n)}\,\pi_{i,j}f_j^{(n)}\big) \le (1 - \alpha_n^\beta)^{\kappa_i^\beta - \kappa_l^\beta}\Big[\mathrm{Osc}(f_i^{(n)})\,\|\pi_{i,j}f_j^{(n)}\|_B + \|f_i^{(n)}\|_B\,\mathrm{Osc}(\pi_{i,j}f_j^{(n)})\Big] \le 6C_n^2(1 - \alpha_n^\beta)^{\kappa_i^\beta - \kappa_l^\beta}(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_i^\beta},$$

as required. □
The following lemma provides a lower bound for the variance of the sum, a result that is crucial for proving the main theorem. We were unable to improve this bound using the coefficient $\alpha_n^\beta$; consequently, we state it exactly as in [13] and without proof. Nevertheless, even with the weaker coefficient $\alpha_n$, this bound remains highly useful in the subsequent analysis.
Proposition ([13], Proposition 3.2).
Under the assumptions of Theorem 5, for any $n \ge 1$,

$$D(S_n) \ge \frac{\alpha_n}{4}\sum_{i=1}^n D\big(f_i^{(n)}(X_i^{(n)})\big).$$
In [13], the proof of this proposition relies on the following auxiliary statements.
Lemma (Lemma 4.1, [13]).
Let $\lambda$ be a probability measure on $X \times X$ with marginals $\alpha$ and $\beta$, respectively. Let $\pi(x_1, dx_2)$ and $\hat\pi(x_2, dx_1)$ be the corresponding transition probabilities in the two directions, so that $\alpha\pi = \beta$ and $\beta\hat\pi = \alpha$. Let $f(x_1)$ and $g(x_2)$ be square integrable with respect to $\alpha$ and $\beta$, respectively. If

$$\int f(x_1)\,\alpha(dx_1) = \int g(x_2)\,\beta(dx_2) = 0,$$

then

$$\Big|\int f(x_1)\,g(x_2)\,\lambda(dx_1, dx_2)\Big| \le \delta(\pi)\,\|f\|_{L_2(\alpha)}\,\|g\|_{L_2(\beta)}.$$
Lemma (Lemma 4.2, [13]).
Let $f(x_1)$ and $g(x_2)$ be square integrable with respect to $\alpha$ and $\beta$, respectively. Then

$$E\big(f(x_1) - g(x_2)\big)^2 \ge \alpha(\pi)\,V\big(f(x_1)\big),$$

as well as

$$E\big(f(x_1) - g(x_2)\big)^2 \ge \alpha(\pi)\,V\big(g(x_2)\big).$$
We now estimate the ratio $\|Z_k^{(n)}\|_{L_\infty}/\sqrt{D(S_n)}$ in the following lemma. Note that by the definition of the $L_\infty$ norm in (14), the value $\|Z_k^{(n)}\|_{L_\infty}$ is nonrandom; see the definition of $Z_k^{(n)}$ in (16).

Lemma 12.
Under the assumptions of Theorem 5, the equality holds:

$$\lim_{n\to\infty}\sup_{1 \le k \le n}\frac{\|Z_k^{(n)}\|_{L_\infty}}{\sqrt{D(S_n)}} = 0.$$
Proof.
We have

$$E\,E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}] = 0.$$

Therefore, because of the equality

$$E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}] \overset{a.s.}{=} \pi_{k,i}f_i^{(n)}(X_k^{(n)}),$$

and by virtue of (21), we obtain

$$\big\|E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}]\big\|_{L_\infty} = \operatorname*{ess\,sup}_{\omega \in \Omega}\big|E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}]\big| = \operatorname*{ess\,sup}_{\omega}\big|\pi_{k,i}f_i^{(n)}(X_k^{(n)})\big| \le \|\pi_{k,i}f_i^{(n)}\|_B \le 2C_n(1 - \alpha_n^\beta)^{\kappa_i^\beta - \kappa_k^\beta}.$$

So, it follows that

$$\|Z_k^{(n)}\|_{L_\infty} \le \sum_{i=k}^n \big\|E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}]\big\|_{L_\infty} = \sum_{i=k}^n \|\pi_{k,i}f_i^{(n)}\|_B \le \sum_{i=k}^n 2C_n(1 - \alpha_n^\beta)^{\kappa_i^\beta - \kappa_k^\beta} \le 2C_n m_0 + \sum_{i=k+m_0}^n 2C_n(1 - \alpha_n^\beta)^{\kappa_i^\beta - \kappa_k^\beta} \overset{(H_\beta)}{\le} 2C_n m_0 + \sum_{i=k+m_0}^n 2C_n(1 - \alpha_n^\beta)^{c(i-k)} \le 2C_n m_0 + 2C_n\sum_{m=m_0}^\infty \big((1 - \alpha_n^\beta)^c\big)^m \le 2C_n m_0 + \frac{2C_n}{1 - (1 - \alpha_n^\beta)^c}.$$

The last inequality is due to the formula for the geometric series with common ratio $0 \le (1 - \alpha_n^\beta)^c < 1$. Continuing the chain of inequalities,

$$\le 2C_n m_0 + \frac{2C_n}{c\,\alpha_n^\beta} \le \frac{2C_n(c\,m_0 + 1)}{c\,\alpha_n^\beta}.$$

Here we used the inequality $(1 - \alpha_n^\beta)^c \le 1 - c\,\alpha_n^\beta$ (valid for $0 < c \le 1$) and, in the last step, $\alpha_n^\beta \le 1$. Finally, applying the bound from Proposition 9, we get

$$\sup_{1 \le k \le n}\frac{\|Z_k^{(n)}\|_{L_\infty}}{\sqrt{D(S_n)}} \le \frac{4C_n(c\,m_0 + 1)}{c}\Big((\alpha_n^\beta)^2\,\alpha_n\sum_{i=1}^n 1(\beta_i^{(n)} = 1)\,D\big(f_i^{(n)}(X_i^{(n)})\big)\Big)^{-1/2}.$$

Due to condition (9) this implies the inequality (23), as required. □
The proof of the following lemma is given in [13] and is therefore omitted here. Recall that the semi-norm Osc, when applied to a random variable, is defined as in (2).
Lemma.
Let $\{Y_l^{(n)} : 1 \le l \le n\}$ and $\{\mathcal G_l^{(n)} : 1 \le l \le n\}$, for $n \ge 1$, be, respectively, a sequence of non-negative random variables and a sequence of sigma-fields such that $\sigma\{Y_1^{(n)}, \ldots, Y_l^{(n)}\} \subset \mathcal G_l^{(n)}$. Suppose that

$$\lim_{n\to\infty} E\sum_{l=1}^n Y_l^{(n)} = 1 \quad \text{and} \quad \sup_{1 \le i \le n}\|Y_i^{(n)}\|_{L_\infty} \le \epsilon_n,$$

where $\lim_{n\to\infty}\epsilon_n = 0$. Suppose in addition that

$$\lim_{n\to\infty}\sup_{1 \le l \le n-1}\mathrm{Osc}\Big(E\Big[\sum_{j=l+1}^n Y_j^{(n)}\Big|\mathcal G_l^{(n)}\Big]\Big) = 0.$$

Then

$$\lim_{n\to\infty}\sum_{l=1}^n Y_l^{(n)} = 1 \quad \text{in } L_2.$$
Further, denote the random variables

$$v_j^{(n)} := E\big[(\xi_j^{(n)})^2 \mid \mathcal F_{j-1}^{(n)}\big], \quad \text{where} \quad \xi_j^{(n)} = \frac{1}{\sqrt{D(S_n)}}\big[Z_j^{(n)} - E[Z_j^{(n)} \mid X_{j-1}^{(n)}]\big].$$

These random variables are measurable with respect to the sigma-fields $\mathcal G_j^{(n)} = \mathcal F_{j-1}^{(n)}$ for $2 \le j \le n$, respectively.
Lemma.
Under the assumptions of Theorem 5, the convergence

$$\sup_{2 \le l \le n-1}\mathrm{Osc}\Big(E\Big[\sum_{j=l+1}^n v_j^{(n)}\Big|\mathcal F_{l-1}^{(n)}\Big](\omega)\Big) = o(1), \qquad n \to \infty,$$

holds true.
Note that in this paper, all o ( 1 ) terms are deterministic. This is a consequence of our convention for the oscillation semi-norm applied to random variables, as defined in (2).
Proof.
By the martingale property and definitions (17) and (18), it follows that $E[\xi_r^{(n)}\xi_s^{(n)} \mid \mathcal F_u^{(n)}] = 0$ (a.s.) for all $r > s > u$. Therefore,

$$E\Big[\sum_{j=l+1}^n v_j^{(n)}\Big|\mathcal F_{l-1}^{(n)}\Big] = E\Big[\Big(\sum_{j=l+1}^n \xi_j^{(n)}\Big)^2\Big|\mathcal F_{l-1}^{(n)}\Big] = E\Big[\Big(\sum_{j=l+1}^n \xi_j^{(n)}\Big)^2\Big|X_{l-1}^{(n)}\Big] = D(S_n)^{-1}E\Big[\Big(\sum_{j=l+1}^n f_j^{(n)}(X_j^{(n)}) - E[Z_{l+1}^{(n)} \mid X_l^{(n)}]\Big)^2\Big|X_{l-1}^{(n)}\Big] = D(S_n)^{-1}E\Big[\Big(\sum_{j=l+1}^n f_j^{(n)}(X_j^{(n)})\Big)^2\Big|X_{l-1}^{(n)}\Big] - D(S_n)^{-1}E\Big[\big(E[Z_{l+1}^{(n)} \mid X_l^{(n)}]\big)^2\Big|X_{l-1}^{(n)}\Big].$$

By Lemma 12, the second term in (24) is bounded above by $\sup_{2 \le l \le n-1} D(S_n)^{-1}\|Z_{l+1}^{(n)}\|_{L_\infty}^2$, which is $o(1)$. Consequently, its oscillation is also $o(1)$ as $n \to \infty$, uniformly in $\omega$. Recall that $Z_k^{(n)}$ is defined as (cf. (16)):

$$Z_k^{(n)} := \sum_{i=k}^n E[f_i^{(n)}(X_i^{(n)}) \mid X_k^{(n)}].$$

We now estimate the oscillation of the first term in (24):

$$\mathrm{Osc}\Big(D(S_n)^{-1}E\Big[\Big(\sum_{j=l+1}^n f_j^{(n)}(X_j^{(n)})\Big)^2\Big|X_{l-1}^{(n)}\Big]\Big) \le D(S_n)^{-1}\sum_{l+1 \le j, m \le n}\mathrm{Osc}\Big(E\big[f_j^{(n)}(X_j^{(n)})\,f_m^{(n)}(X_m^{(n)}) \mid X_{l-1}^{(n)}\big]\Big).$$

Let us highlight that our definition of "Osc" for random variables in (2) allows us not to worry about the difference between sure and almost sure inequalities, despite the conditional expectations in the expressions. We now apply Lemma 8:

$$\mathrm{Osc}\Big(E\big[f_j^{(n)}(X_j^{(n)})\,f_m^{(n)}(X_m^{(n)}) \mid X_{l-1}^{(n)}\big]\Big) \le 6C_n^2(1 - \alpha_n^\beta)^{\kappa_j^\beta - \kappa_l^\beta}(1 - \alpha_n^\beta)^{|\kappa_m^\beta - \kappa_j^\beta|}.$$

In the remaining cases the estimates are analogous, with at most one or two additional factors of $(1 - \alpha_n^\beta)$, each bounded above by 1. Summing over all terms then yields the bound

$$\mathrm{Osc}(\cdot) \le \mathrm{const}\cdot(D(S_n))^{-1}C_n^2(\alpha_n^\beta)^{-2},$$

uniformly with respect to $l$. Combining this estimate with Proposition 9 and condition (9) completes the proof of the lemma. □

5. Proof of Theorem 5

Proof.
It follows from Lemma 12 that it suffices to show the convergence

$$M_n^{(n)} \Rightarrow \mathcal N(0, 1).$$

This property follows from Proposition 7, because it was already verified that

(a) $\sup_{2 \le k \le n}\|\xi_k^{(n)}\|_{L_\infty} \to 0$, $n \to \infty$;

(b) $E\Big(\sum_{k=2}^n E\big[(\xi_k^{(n)})^2 \mid \mathcal F_{k-1}^{(n)}\big] - 1\Big)^2 \to 0$, $n \to \infty$.

Here, part (a) follows from Lemma 12 (negligibility). To verify condition (b), we apply Lemmata 13 and 14 (an LLN-type argument). This is valid since (a) holds and the variance decomposition, combined with Lemma 12, gives $\sum_{k=2}^n E[(\xi_k^{(n)})^2] = 1 + o(1)$. □

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The work was supported by a grant from the Theoretical Physics and Mathematics Advancement Foundation “BASIS”. The author is also grateful to his scientific advisor Alexander Veretennikov for valuable remarks.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. S.N. Bernstein, Sur l’extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Math. Ann. 1926, 97, 1–59.
  2. Russian transl.: S.N. Bernstein, Extension of a limit theorem of probability theory to sums of dependent random variables. Russian Math. Rev. 1944, 10, 65–114.
  3. O.A. Butkovsky, A.Yu. Veretennikov, On asymptotics for Vaserstein coupling of Markov chains. Stochastic Processes and their Applications 2013, 123(9), 3518–3541.
  4. R.L. Dobrushin, Central limit theorems for non-stationary Markov chains I, II. Theory Probab. its Appl. 1956, 1, 65–80.
  5. E.B. Dynkin, Markov Processes, vol. 1. Springer-Verlag, Berlin, Heidelberg, 1965.
  6. M.I. Gordin, The central limit theorem for stationary processes. Soviet Math. Dokl. 1969, 10, 1174–1176.
  7. P. Hall, C.C. Heyde, Martingale Limit Theory and Its Application. Academic Press, New York, 1980.
  8. Yu.V. Linnik, On the theory of nonuniform Markov chains. Izv. Akad. Nauk SSSR Ser. Mat. 1949, 13, 65–94 (in Russian).
  9. Yu.V. Linnik, N.A. Sapogov, Multiple integral and local laws for inhomogeneous Markov chains. Izv. Akad. Nauk SSSR Ser. Mat. 1949, 13, 533–566.
  10. R.Sh. Liptser, A.N. Shiryayev, Theory of Martingales. Kluwer Acad. Publ., Dordrecht, The Netherlands, 1989.
  11. A.A. Markov, Investigation of the general case of trials associated into a chain (in Russian). Zapiski Akad. Nauk (St. Petersburg), Fiz.-Matem. Otd. (8th Ser.) 1910, 25(3), 33 pp.; also in: Selected Works, Moscow, Acad. Sci. USSR, 1951, 465–509 (in Russian); Engl. transl. in: O.B. Sheynin, ed., Probability and Statistics. Russian Papers, Selected and Translated by Oscar Sheynin. NG Verlag, Berlin, 2004.
  12. A.N. Shiryaev, Central limit theorem for non-homogeneous Markov chains with memory. In: Summary of Papers Presented at the Sessions of the Scientific Research Seminar on Probability Theory (Moscow, February–May 1957). Theory of Probability and its Applications 1957, 2, 470–480.
  13. S. Sethuraman, S.R.S. Varadhan, A Martingale Proof of Dobrushin’s Theorem for Non-Homogeneous Markov Chains. Electron. J. Probab. 2005, 10, 1221–1235.
  14. A.Yu. Veretennikov, M.A. Veretennikova, On improved bounds and conditions for the convergence of Markov chains. Izvestiya: Mathematics 2022, 86, 92–125.
¹ In [13] it is called simply the contraction coefficient.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.