Preprint (this version is not peer-reviewed; a peer-reviewed article of this preprint also exists)

The Brooks-Chacon Biting Lemma and the Baum-Katz Theorem Along Subsequences

Submitted: 10 September 2025. Posted: 11 September 2025.

Abstract
We prove Baum-Katz type theorems along subsequences of random variables under Komlós-Saks and Mazur-Orlicz type boundedness hypotheses.

1. Introduction

The following result (see [7]) quantifies the rate of convergence in the strong law of large numbers, via a completely convergent Baum-Katz-type numerical series along subsequences of norm-bounded random variables, without any supplementary probabilistic hypothesis on their (in)dependence or on their distributions:
Theorem 0. On a complete probability space $(\Omega,\mathcal F,P)$ we consider a sequence of random variables $(X_n)_{n\ge1}$ that is uniformly bounded in $L^p$ for some $0<p<2$, i.e.,
$$\sup_{n\ge1}\|X_n\|_p\le C \quad\text{for some } C>0.$$
Then there exists a subsequence $(Y_n)_{n\ge1}$ of $(X_n)_{n\ge1}$ such that, for all $1<r\le p$, we have
$$\sum_{n=1}^{\infty} n^{p/r-2}\,P\Big(\omega\in\Omega:\Big|\sum_{j=1}^{n}Y_j(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0. \quad (1)$$
Remarks. The complete convergence of the series in formula (1) implies that the subsequence $(Y_n)_{n\ge1}$ satisfies the strong law of large numbers, i.e., $Y_n/n^{1/p}\to0$ $P$-a.s. The parameter $r\in(1,2)$ keeps Theorem 0 within the realm of laws of large numbers; indeed, if $p\ge r\ge2$ then, by the central limit theorem for subsequences, the series in formula (1) diverges for all $\varepsilon>0$ even if $(X_n)_{n\ge1}$ is an i.i.d. sequence with mean zero and finite variance. Also note that formula (1) trivially holds if $0<p<r<2$, since then the exponent $p/r-2$ is smaller than $-1$.
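Before the proofs, a quick numerical illustration (our own sketch, not part of the argument): an i.i.d. Uniform$(-1,1)$ sequence is uniformly bounded in every $L^p$, so Theorem 0 applies trivially, and at the law-of-large-numbers scale $n^{1/r}$ with $r\in(1,2)$ the partial sums are visibly small.

```python
import random

# i.i.d. Uniform(-1, 1): bounded in every L^p, mean zero, variance 1/3.
random.seed(0)  # fixed seed so the run is reproducible
n = 100_000
s = 0.0
for _ in range(n):
    s += random.uniform(-1.0, 1.0)

r = 1.5  # an illustrative value in Theorem 0's range (1, 2)
ratio_lln = abs(s) / n ** (1 / r)  # |S_n| / n^(1/r): small at the LLN scale
print(ratio_lln)
```

At the CLT scale $n^{1/2}$ the same quantity typically stays of order one, which is why $r\ge2$ falls outside the theorem.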
There are situations (see, e.g., the recent papers [4] and [5] and their references) in which the sequence of random variables in question is not (uniformly) bounded in $L^p$ and yet satisfies a law of large numbers. In such cases, Theorem 0 is not appropriate for quantifying the rate of convergence in that law of large numbers. In addition, the examples in [6,8] and [3] show that Theorem 0 fails if one drops the $L^p$-uniform boundedness hypothesis, for any $0<p<2$. Inspired by the celebrated Komlós-Saks-type extension of the law of large numbers (cf. [4,5]), in Section 2 we shall prove a version of the Baum-Katz theorem under a special boundedness hypothesis, different from the $L^p$-boundedness condition required in Theorem 0. The idea is to construct a rich family of uniformly integrable subsequences of $(X_n)_{n\ge1}$ as in [2], for which condition (1) holds; note that the hypotheses in [8] and [3] cannot produce Baum-Katz-type theorems, as the families of subsequences therein are no longer uniformly integrable. Instead, we shall employ the methodology given by the celebrated Biting Lemma of Brooks and Chacon (cf. [1]). A modification of this methodology is presented in Section 3; it produces a second version of the Baum-Katz theorem under a Mazur-Orlicz-type hypothesis (cf. [4,5]).

2. Main Result

Theorem 1. Let $0<p<2$. On a complete probability space $(\Omega,\mathcal F,P)$ we consider a sequence of random variables $(X_n)_{n\ge1}$ such that
$$\limsup_{n\to\infty}|X_n(\omega)|^p<\infty \quad\text{for all }\omega\in\Omega.$$
Then there exists a subsequence $(Y_n)_{n\ge1}$ of $(X_n)_{n\ge1}$ such that Equation (1) holds for all $1<r\le p$. In particular, the subsequence $(Y_n)_{n\ge1}$ satisfies the strong law of large numbers, i.e., $Y_n/n^{1/p}\to0$ $P$-a.s.
Examples. (i) Note that the working hypothesis in Theorem 1 is equivalent to
$$\sup_{n\ge1}|X_n(\omega)|^p<\infty \quad\text{for all }\omega\in\Omega.$$
In particular, one can see that Theorems 0 and 1 do not overlap and do not imply each other. Indeed, uniformly $L^p$-bounded sequences of functions can still have an infinite pointwise limsup and, vice versa, there are sequences of functions with finite pointwise limsup that are not (uniformly) bounded in $L^p$ (see also [9]).
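To make the non-overlap concrete, here is a small numerical sketch (our own illustration, with $p=1$) of the classical "typewriter" sequence discussed in [9]: indicators of dyadic intervals, scaled so that every $\|X_n\|_1=1$, yet at every point $\omega$ the values $2^k$ recur, so the pointwise limsup is infinite.

```python
def typewriter(n):
    """Return (k, j): the n-th dyadic interval [j/2^k, (j+1)/2^k)."""
    k = 0
    while n >= 2 ** k:  # blocks of sizes 1, 2, 4, 8, ...
        n -= 2 ** k
        k += 1
    return k, n

def X(n, omega):
    """n-th typewriter function; its L^1 norm is 2^k * 2^(-k) = 1 exactly."""
    k, j = typewriter(n)
    lo, hi = j / 2 ** k, (j + 1) / 2 ** k
    return 2 ** k if lo <= omega < hi else 0.0

omega = 0.3
values = [X(n, omega) for n in range(2 ** 12 - 1)]  # blocks k = 0..11
print(max(values))  # in each block, one interval contains omega: 2^11 is hit
```

So the sequence is uniformly bounded in $L^1$ while $\limsup_n X_n(\omega)=\infty$ for every $\omega\in[0,1)$; the example in Section 3 below goes in the opposite direction.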
(ii) The hypotheses in Theorem 1 are satisfied, e.g., under the working condition in the motivational papers [4] and [5], namely:
$$\lim_{N\to\infty}N\cdot\sup_{n\ge1}P(|X_n|>N)=0.$$
This condition implies uniform boundedness in $L^0$ (tightness), i.e.,
$$\lim_{N\to\infty}\sup_{n\ge1}P(|X_n|>N)=0,$$
and is implied by uniform integrability, i.e.,
$$\lim_{N\to\infty}\sup_{n\ge1}E\big(|X_n|\cdot\mathbf 1_{\{|X_n|>N\}}\big)=0,$$
where $E$ denotes the expectation with respect to $P$. Also note that the $L^p$-boundedness condition in Theorem 0 is stronger than the last three conditions, provided $p\in[1,2)$ (see, e.g., Example 4.2 in [4]).
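For a concrete (and hypothetical) instance of this chain of conditions, take every $X_n$ standard normal: the sequence is uniformly integrable, and the Gaussian tail makes $N\cdot\sup_n P(|X_n|>N)=N\,P(|Z|>N)$ vanish visibly fast. A short numerical sketch:

```python
import math

def gauss_tail(N):
    """P(|Z| > N) for a standard normal Z, via the complementary error function."""
    return math.erfc(N / math.sqrt(2.0))

# The condition of [4,5]: N * sup_n P(|X_n| > N) -> 0 as N -> infinity.
# With identical standard-normal marginals the sup is just the Gaussian tail.
for N in (1.0, 2.0, 4.0, 8.0):
    print(N, N * gauss_tail(N))
```

The printed products decrease rapidly toward $0$, as the super-polynomial Gaussian tail beats the linear factor $N$.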
Proof of Theorem 1. For any natural number $m\ge1$, let us define the $\mathcal F$-measurable sets
$$A_m:=\Big\{\omega\in\Omega:\sup_{n\ge1}|X_n(\omega)|^p\le m\Big\}.$$
Assume first that $r<p$ and fix $a>p/r-1$. As $P(A_m)\to1$ as $m\to\infty$ by hypothesis, we can choose an index $m_1\ge1$ such that $P(A_{m_1})>1-2^{-a}$. By Fatou's lemma we obtain:
$$\sup_{n\ge1}\int_{A_{m_1}}|X_n(\omega)|^p\,dP(\omega)\le m_1. \quad (2)$$
We now apply the Biting Lemma (cf. [1]) to the sequence $(X_n)_{n\ge1}$ and the subset $A_{m_1}$ above, to obtain: a non-decreasing sequence $(B_k^1)_{k\ge1}$ of subsets in $\mathcal F$ with $P(B_k^1)\to1$ as $k\to\infty$, and a subsequence $(X_n^1)_{n\ge1}$ of $(X_n)_{n\ge1}$ such that $(X_n^1)_{n\ge1}$ is uniformly integrable on each of the subsets $A_{m_1}\cap B_k^1$, $k\ge1$. This latter fact, together with estimate (2), shows that Theorem 0 applies to the sequence $(X_n^1)_{n\ge1}$ and gives:
$$\sum_{n=1}^{\infty}n^{p/r-2}\,P\Big(\omega\in A_{m_1}\cap B_k^1:\Big|\sum_{j=1}^{n}X_j^1(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0\text{ and }k\ge1.$$
Next, we choose a natural number $m_2\ge1$ such that $P(A_{m_2})>1-3^{-a}$, and another application of the Biting Lemma, this time to $(X_n^1)_{n\ge1}$, produces: a non-decreasing sequence $(B_k^2)_{k\ge1}$ of subsets in $\mathcal F$ with $P(B_k^2)\to1$ as $k\to\infty$, and a subsequence $(X_n^2)_{n\ge1}$ of $(X_n^1)_{n\ge1}$, therefore a subsequence of $(X_n)_{n\ge1}$ as well, such that $(X_n^2)_{n\ge1}$ is uniformly integrable on each of the subsets $A_{m_2}\cap B_k^2$, $k\ge1$, and
$$\sum_{n=1}^{\infty}n^{p/r-2}\,P\Big(\omega\in A_{m_2}\cap B_k^2:\Big|\sum_{j=1}^{n}X_j^2(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0\text{ and }k\ge1.$$
Thus, by induction, we have constructed for each $i\ge1$: an $\mathcal F$-measurable set $A_{m_i}$ with $P(A_{m_i})>1-(i+1)^{-a}$, a non-decreasing sequence $(B_k^i)_{k\ge1}$ of subsets in $\mathcal F$ with $P(B_k^i)\to1$ as $k\to\infty$, and a subsequence $(X_n^i)_{n\ge1}$ of $(X_n^{i-1})_{n\ge1}$ such that $(X_n^i)_{n\ge1}$ is uniformly integrable on each of the subsets $A_{m_i}\cap B_k^i$, $k\ge1$, and
$$\sum_{n=1}^{\infty}n^{p/r-2}\,P\Big(\omega\in A_{m_i}\cap B_k^i:\Big|\sum_{j=1}^{n}X_j^i(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0\text{ and }k,i\ge1$$
(the convention is that $(X_n^0)_{n\ge1}$ is precisely $(X_n)_{n\ge1}$).
Now define $Y_n:=X_n^n$, $n\ge1$; it follows that $(Y_n)_{n\ge1}$ is a subsequence of $(X_n)_{n\ge1}$ and, using a diagonal argument in the previous formula, we obtain that
$$\sum_{n=1}^{\infty}n^{p/r-2}\,P\Big(\omega\in A_{m_n}\cap B_k^n:\Big|\sum_{j=1}^{n}Y_j(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0\text{ and }k\ge1. \quad (3)$$
As $P(B_k^n)\to1$ as $k\to\infty$ for all $n\ge1$, formula (3) and the dominated convergence theorem imply that
$$\sum_{n=1}^{\infty}n^{p/r-2}\,P\Big(\omega\in A_{m_n}:\Big|\sum_{j=1}^{n}Y_j(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0. \quad (4)$$
Therefore, to prove that our series (1) converges for this particular subsequence $(Y_n)_{n\ge1}$, it suffices to prove formula (4) with $A_{m_n}$ replaced by its set complement, i.e.,
$$\sum_{n=1}^{\infty}n^{p/r-2}\,P\Big(\omega\in\Omega\setminus A_{m_n}:\Big|\sum_{j=1}^{n}Y_j(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0. \quad (5)$$
Indeed, as $P(A_{m_n})>1-(n+1)^{-a}>1-n^{-a}$ and $a>p/r-1>0$, the series in (5) is
$$\le\sum_{n=1}^{\infty}n^{p/r-2}\,P(\Omega\setminus A_{m_n})\le\sum_{n=1}^{\infty}n^{p/r-2-a}<\infty,$$
so the proof is achieved in the case $r<p$.
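As a numerical sanity check of this majorizing series (with illustrative values $p=1.5$, $r=1.2$, $a=0.5>p/r-1=0.25$, all chosen by us), the exponent is $p/r-2-a=-1.25<-1$, so the partial sums of $\sum n^{p/r-2-a}$ stabilize:

```python
p, r, a = 1.5, 1.2, 0.5      # sample values satisfying a > p/r - 1
expo = p / r - 2 - a         # = -1.25 < -1, so the series converges
# Partial sums up to N = 10^2, 10^4, 10^6: increasing but bounded.
partial = [sum(n ** expo for n in range(1, N + 1)) for N in (10**2, 10**4, 10**6)]
print(partial)
```

The integral comparison gives the bound $\sum_{n\ge1}n^{-1.25}<1+\int_1^\infty x^{-1.25}\,dx=5$, which the partial sums respect.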
If $r=p$ (so that $n^{p/r-2}=n^{-1}$), then we modify the above methodology as follows: by induction we now choose $\mathcal F$-measurable sets $A_{m_i}$ with $P(A_{m_i})>i/(i+1)$ for all $i\ge1$; as such, the Biting Lemma and the diagonal argument produce the subsequence $(Y_n)_{n\ge1}$ and the following replacement of Equation (4):
$$\sum_{n=1}^{\infty}\frac1n\,P\Big(\omega\in A_{m_n}:\Big|\sum_{j=1}^{n}Y_j(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0.$$
To show that the series (1) converges along the subsequence $(Y_n)_{n\ge1}$, it suffices to prove the following replacement of Equation (5):
$$\sum_{n=1}^{\infty}\frac1n\,P\Big(\omega\in\Omega\setminus A_{m_n}:\Big|\sum_{j=1}^{n}Y_j(\omega)\Big|>\varepsilon n^{1/r}\Big)<\infty \quad\text{for any }\varepsilon>0.$$
Indeed, by the new choice of $A_{m_n}$, the latter series is
$$\le\sum_{n=1}^{\infty}\frac1n\,P(\Omega\setminus A_{m_n})\le\sum_{n=1}^{\infty}\frac1{n(n+1)}<\infty,$$
and this completes the proof in the case $r=p$. □

3. A Variant of the Main Result

Proposition 2. Let $0<p<2$. On a complete probability space $(\Omega,\mathcal F,P)$ we consider a sequence of random variables $(X_n)_{n\ge1}$ satisfying the following condition: for every subsequence $(\tilde X_n)_{n\ge1}$ of $(X_n)_{n\ge1}$ and every $n\ge1$, there exists a convex combination $Z_n$ of $\{|\tilde X_n|^p,|\tilde X_{n+1}|^p,\dots\}$ such that $\sup_{n\ge1}|Z_n(\omega)|^p<\infty$ for all $\omega\in\Omega$. Then there exists a subsequence $(Y_n)_{n\ge1}$ of $(X_n)_{n\ge1}$ such that Equation (1) holds for all $1<r\le p$ and $Y_n/n^{1/p}\to0$ $P$-a.s.
Examples. On $[0,1]$ endowed with the Lebesgue measure, the sequence $X_n(\omega)=n^2$ if $0<\omega<1/n$ and $0$ otherwise satisfies the hypothesis of Proposition 2, because $X_n\to0$ Lebesgue-a.s., yet it does not satisfy the hypothesis of Theorem 0 with $p=1$ because it is not bounded in $L^1[0,1]$. As a matter of fact, both Theorem 1 and Proposition 2 may fail for unbounded sequences, e.g., $X_n(\omega)=n$.
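A quick numerical rendering of the first example above (the code itself is our own sketch): with $X_n=n^2\,\mathbf 1_{(0,1/n)}$ the $L^1$ norms $\int_0^1 X_n\,d\omega=n$ blow up, while for each fixed $\omega$ the values $X_n(\omega)$ are eventually $0$.

```python
def X(n, omega):
    """X_n on [0, 1] with Lebesgue measure: n^2 on (0, 1/n), else 0."""
    return float(n * n) if 0.0 < omega < 1.0 / n else 0.0

def l1_norm(n):
    """Exact L^1 norm: n^2 times the length of (0, 1/n)."""
    return n * n * (1.0 / n)

print([l1_norm(n) for n in (1, 10, 100)])        # unbounded in L^1
omega = 0.05
print([X(n, omega) for n in (10, 19, 20, 21)])   # 0 once n >= 1/omega = 20
```

The pointwise vanishing is what the convex-combination hypothesis of Proposition 2 exploits, precisely where the $L^1$-boundedness of Theorem 0 is unavailable.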
Proof of Proposition 2. By hypothesis we can write
$$Z_n=\sum_{i\in I_n}\lambda_i^n|\tilde X_{n+i}|^p \quad\text{for some }\lambda_i^n\ge0\text{ with }\sum_{i\in I_n}\lambda_i^n=1,$$
where the $I_n$ are finite subsets of $\{0,1,2,\dots\}$. In addition, the sequence $(Z_n)_{n\ge1}$ satisfies the condition $\sup_{n\ge1}|Z_n(\omega)|^p<\infty$ for all $\omega\in\Omega$. For any natural number $m\ge1$, let us define the $\mathcal F$-measurable sets
$$A_m:=\Big\{\omega\in\Omega:\sup_{n\ge1}|Z_n(\omega)|^p\le m\Big\}.$$
We have $P(A_m)\to1$ as $m\to\infty$, so we can choose an index $m_1\ge1$ such that $P(A_{m_1})>1-2^{-a}$ for some fixed $a>p/r-1$, or $P(A_{m_1})>1/2$, according as $r<p$ or $r=p$, respectively. In both cases, by applying Fatou's lemma we obtain:
$$\sup_{n\ge1}\sum_{i\in I_n}\lambda_i^n\int_{A_{m_1}}|\tilde X_{n+i}(\omega)|^p\,dP(\omega)\le m_1.$$
Hence there is a subsequence $(\bar X_n)_{n\ge1}$ of $(\tilde X_n)_{n\ge1}$, therefore a subsequence of $(X_n)_{n\ge1}$ as well, such that
$$\sup_{n\ge1}\int_{A_{m_1}}|\bar X_n(\omega)|^p\,dP(\omega)\le m_1.$$
The remainder of the proof goes exactly as in the proof of Theorem 1, with Equation (2) replaced by the estimate above and applied to the subsequence $(\bar X_n)_{n\ge1}$. □

Data Availability Statement

No data was used for the research described in the article.

Conflicts of Interest

The author declares that he has no known competing financial interests that could have appeared to influence the work reported in this paper.

References

  1. J.K. Brooks, R.V. Chacon, Continuity and compactness of measures. Advances in Mathematics 1980, 37, 16–26. [CrossRef]
  2. C. Castaing, M. Saadoune, Komlós type convergence for random variables and random sets with applications to minimization problems. Advances in Mathematical Economics 2007, 10, 1–29.
  3. S.J. Dilworth, Convergence of series of scalar- and vector-valued random variables and a subsequence principle in L2. Transactions of the American Mathematical Society 1987, 301, 375–384.
  4. I. Karatzas, W. Schachermayer, A weak law of large numbers for dependent random variables. Theory of Probability and Its Applications 2023, 68, 501–509. [CrossRef]
  5. I. Karatzas, W. Schachermayer, A strong law of large numbers for positive random variables. Illinois Journal of Mathematics 2023, 67, 517–528.
  6. E. Lesigne, D. Volný, Large deviations for martingales. Stochastic Processes and their Applications 2001, 96, 143–159. [CrossRef]
  7. G. Stoica, The Baum-Katz theorem for bounded subsequences. Statistics and Probability Letters 2008, 78, 924–926. [CrossRef]
  8. H. von Weizsäcker, Can one drop the L1-boundedness in Komlós subsequence theorem? American Mathematical Monthly 2004, 111, 900–903.
  9. Uniform boundedness in $L^1[0,1]$ implies finite limsup almost everywhere? MathOverflow. https://mathoverflow.net/questions/168221/uniform-boundedness-in-l10-1-implies-finite-limsup-almost-everywhere-for.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.