Preprint — this version is not peer-reviewed.

The Distribution and Quantiles of the Sample Autocovariance and Other Functions of Sample Moments from a Stationary Process

Submitted: 29 September 2025. Posted: 29 September 2025.


Abstract
We give expansions for the distribution, density and quantiles of any smooth function of the sample cross-moments of a stationary process. We do this by showing that the sample cross-moments are standard estimates. The 8 examples include the sample autocovariance and the sample autocorrelation.

1. Introduction and Summary

In Withers and Nadarajah (2012), we gave the extended Edgeworth-Cornish-Fisher expansions, the eECF expansions, for smooth functions of the sample cross-moments of a linear process. Here we extend these results to a stationary process.
Suppose that $\hat\theta$ is a standard estimate of an unknown parameter $\theta \in \mathbb{R}^q$ of a statistical model, based on a sample of size $n$. That is, its cross-cumulants can be expanded in the form (3.3), or if $q = 1$, in the form (2.1). Then eECF expansions are available for its distribution that improve upon the Central Limit Theorem. To be self-contained, Section 2 summarises these expansions to $O(n^{-3/2})$, and Section 3 gives the leading cumulant coefficients for functions of an unbiased standard estimate. In Section 4 we show that the sample moments of a stationary process are standard estimates. (In fact their cumulant expansions have exactly two terms.) So by Withers (1982, 2024), smooth functions of them are also standard estimates, and so have the eECF expansions of Sections 2 and 3.
Theorem 4.1 gives 2 choices for the cumulant coefficients needed for these expansions. The examples of Section 5 include the distributions of the sample mean, the sample autocovariance and the sample autocorrelation. Section 6 shows how to extend these results to a multivariate stationary process.
This nonparametric approach removes the need for modelling time series parametrically as a moving average or autoregressive or ARIMA process. One weakness of such approaches is that a wrong model will have the wrong asymptotic variance, so that inference will not be even asymptotically correct. Another is the difficulty of estimating their parameters. A vast collection of software has been developed for economists and others for this purpose, all now unnecessary if one moves to our non-parametric method.
Note 1.1. 
Withers and Nadarajah (2010c, 2012) considered the general stationary linear process
$$X_i = \sum_{j=0}^{\infty} \rho_j e_{i-j},$$
where { ρ j } are constants, and { e i } are independent and identically distributed random variables from a distribution on R with finite cumulants { τ j } . We showed that its cross-cumulants are
$$\kappa_{i_1 \cdots i_r} = \alpha(i_1 i_2 \cdots i_r)\,\tau_r, \quad\text{where}\quad \alpha(i_1 i_2 \cdots i_r) = \alpha(I_1 I_2 \cdots I_r) = \sum_{j=0}^{\infty} \rho_{j+I_1}\rho_{j+I_2}\cdots\rho_{j+I_r},$$
for $I_k$ of (4.3), and that $\alpha(i_1 \cdots i_r)$ is finite for processes such as ARMA processes, where $\rho_j \to 0$ exponentially as $j \to \infty$. See there for a simulation study. For $\{X_j\}$ an autoregressive process, see p. 158 of Withers and Nadarajah (2012).

2. eECF Expansions for Standard Estimates

This section summarises results to be found in Withers (1984, 2025a).
Univariate estimates. Suppose that $\hat\theta$ is a standard estimate of an unknown $\theta \in \mathbb{R}$ with respect to $n$, typically the sample size. That is, $E\hat\theta \to \theta$ as $n \to \infty$, and its cumulants can be expanded as
$$\kappa_r(\hat\theta) \approx \sum_{j=r-1}^{\infty} n^{-j} a_{rj} \quad\text{for } r \ge 1, \qquad (2.1)$$
where the cumulant coefficients $a_{rj}$ may depend on $n$ but are bounded as $n \to \infty$, and $a_{21}$ is bounded away from 0. Here and below $\approx$ indicates an asymptotic expansion that need not converge. So (2.1) holds in the sense that
$$\kappa_r(\hat\theta) = \sum_{j=r-1}^{I-1} a_{rj}\, n^{-j} + O(n^{-I}) \quad\text{for } I \ge r-1,$$
where $y_n = O(x_n)$ means that $y_n/x_n$ is bounded in $n$. For non-lattice estimates, the distribution and quantiles of
$$Y_n = (n/a_{21})^{1/2}(\hat\theta - \theta) \qquad (2.2)$$
have asymptotic expansions in powers of $n^{-1/2}$ of the form
$$P_n(x) = \mathrm{Prob}(Y_n \le x) \approx \Phi(x) - \phi(x)\sum_{r=1}^{\infty} n^{-r/2} h_r(x), \qquad (2.3)$$
$$p_n(x) = dP_n(x)/dx \approx \phi(x)\Big[1 + \sum_{r=1}^{\infty} n^{-r/2} \bar h_r(x)\Big], \qquad (2.4)$$
$$\Phi^{-1}(P_n(x)) \approx x - \sum_{r=1}^{\infty} n^{-r/2} f_r(x), \qquad P_n^{-1}(\Phi(x)) \approx x + \sum_{r=1}^{\infty} n^{-r/2} g_r(x), \qquad (2.5)$$
where $\Phi(x) = \mathrm{Prob}(N \le x)$, $N \sim \mathcal{N}(0,1)$ is a unit normal random variable with density $\phi(x) = (2\pi)^{-1/2} e^{-x^2/2}$, and $h_r(x), \bar h_r(x), f_r(x), g_r(x)$ are polynomials in $x$ and the $\{A_{ri}\}$, where $A_{ri} = a_{ri}/a_{21}^{r/2}$. The expansions (2.3)–(2.5) are given in Withers (1984) to $O(n^{-5/2})$ and in Withers and Nadarajah (2010a) to $O(n^{-3})$, starting
$$h_1(x) = f_1(x) = g_1(x) = A_{11} + A_{32}H_2/6, \qquad \bar h_1(x) = A_{11}H_1 + A_{32}H_3/6,$$
$$h_2(x) = (A_{11}^2 + A_{22})H_1/2 + (A_{11}A_{32} + A_{43}/4)H_3/6 + A_{32}^2 H_5/72,$$
$$f_2(x) = (A_{22}/2 - A_{11}A_{32}/3)H_1 + A_{43}H_3/24 - A_{32}^2(4x^3 - 7x)/36,$$
$$g_2(x) = A_{22}H_1/2 + A_{43}H_3/24 - A_{32}^2(2x^3 - 5x)/36, \qquad (2.6)$$
$$\bar h_2(x) = (A_{11}^2 + A_{22})H_2/2 + (A_{11}A_{32} + A_{43}/4)H_4/6 + A_{32}^2 H_6/72, \qquad (2.7)$$
where for $k \ge 0$, $H_k$ is the $k$th Hermite polynomial,
$$H_k = H_k(x) = (-1)^k \phi(x)^{-1} (d/dx)^k \phi(x) = E(x + iN)^k \quad\text{for } i = \sqrt{-1},\ N \sim \mathcal{N}(0,1). \qquad (2.8)$$
So $H_{k+1} = (x - d/dx)H_k$, $(d/dx)H_{k+1} = (k+1)H_k$, and $H_0 = 1$, $H_1 = x$, $H_2 = x^2 - 1$, $H_3 = x^3 - 3x$, $H_4 = x^4 - 6x^2 + 3$.
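The recurrences after (2.8) generate these polynomials directly. A minimal numerical sketch (ours, not from the paper), using $H_{k+1} = xH_k - kH_{k-1}$, which follows from $H_{k+1} = (x - d/dx)H_k$ and $(d/dx)H_k = kH_{k-1}$:

```python
def hermite(k, x):
    """Probabilists' Hermite polynomial H_k(x) via the recurrence
    H_{k+1}(x) = x*H_k(x) - k*H_{k-1}(x), with H_0 = 1, H_1 = x."""
    if k == 0:
        return 1.0
    h_prev, h = 1.0, x          # H_0, H_1
    for j in range(1, k):       # step from H_j up to H_{j+1}
        h_prev, h = h, x * h - j * h_prev
    return h

# H_4(x) = x**4 - 6*x**2 + 3, as listed after (2.8)
print(hermite(4, 1.5))  # -5.4375 = 1.5**4 - 6*1.5**2 + 3
```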
So (2.3)–(2.8) give the eECF expansions to $O(n^{-3/2})$ for the distribution and quantiles of $Y_n$ of (2.2) in terms of $a_{21}, a_{11}, a_{32}, a_{22}, a_{43}$, while $a_{21}, a_{11}, a_{32}$ alone give the eECF expansions only to $O(n^{-1})$, and $a_{21}$ alone gives the eECF expansions to $O(n^{-1/2})$.
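To make the role of these coefficients concrete, here is a small sketch (ours; the coefficient values are illustrative, not from the paper) of the first-order quantile approximation $P_n^{-1}(\Phi(x)) \approx x + n^{-1/2} g_1(x)$ of (2.5), with $g_1(x) = A_{11} + A_{32} H_2(x)/6$ from (2.6):

```python
def g1(x, A11, A32):
    """First-order eECF quantile term g_1(x) = A_11 + A_32*H_2(x)/6,
    where H_2(x) = x**2 - 1."""
    return A11 + A32 * (x**2 - 1.0) / 6.0

def quantile_first_order(x, n, A11, A32):
    """Approximate quantile of Y_n at the standard normal point x:
    P_n^{-1}(Phi(x)) ~ x + n**-0.5 * g_1(x)."""
    return x + n**-0.5 * g1(x, A11, A32)

# illustrative (hypothetical) coefficients A_11 = 0.1, A_32 = 0.6
print(quantile_first_order(1.0, n=100, A11=0.1, A32=0.6))  # ~ 1.01, since H_2(1) = 0
```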
Note 2.1. 
Cornish and Fisher (1937) and Cornish and Fisher (1960) gave (2.5) for $r \le 6$ for the special case when $a_{ri} = 0$ for $i \ne r-1$. This was extended to standard estimates in Withers (1984). For (2.8), see Withers (2000).
We now reserve $i_1, i_2, \ldots$ for any sequence from $1, 2, \ldots, p$. To avoid double subscripts, we introduce the bar notation.
The multivariate Hermite polynomial, $\bar H_{1\cdots k} = \bar H_{1\cdots k}(x, V)$, is
$$\bar H_{1\cdots k}(x, V) = (-1)^k \phi_V(x)^{-1} (\bar\partial_1) \cdots (\bar\partial_k)\, \phi_V(x), \quad\text{where } \partial_i = \partial/\partial x_i \ \text{and } \bar\partial_k = \partial_{i_k}.$$
By Withers (2020), for $i = \sqrt{-1}$,
$$\bar H_{1\cdots k}(x, V) = E \prod_{j=1}^{k} (\bar y_j + i \bar Y_j), \quad\text{where } \bar y_j = y_{i_j},\ y = V^{-1}x,\ \bar Y_j = Y_{i_j},\ Y \sim \mathcal{N}_p(0, V^{-1}).$$
So $H_1 = y_1$, $\bar H_1 = \bar y_1$, $H_{12} = y_1 y_2 - V^{12}$, $\bar H_{12} = \bar y_1 \bar y_2 - \bar V^{12}$, and $H_{123} = y_1 y_2 y_3 - \sum^3 V^{12} y_3$, where $\sum^3 V^{12} y_3 = V^{12} y_3 + V^{13} y_2 + V^{23} y_1$, $V^{i_1 i_2}$ is the $(i_1, i_2)$ element of $V^{-1}$, and $\bar V^{j_1 j_2}$ is the $(i_{j_1}, i_{j_2})$ element of $V^{-1}$. For their dual form see Withers and Nadarajah (2014). Their integrated form, $\bar H^*_{1\cdots k} = \bar H^*_{1\cdots k}(x, V)$, is
$$\bar H^*_{1\cdots k} = (-1)^k (\bar\partial_1) \cdots (\bar\partial_k)\, \Phi_V(x) = \int_{-\infty}^{x} \bar H_{1\cdots k}(t, V)\, \phi_V(t)\, dt.$$
Multivariate estimates. Suppose that $\hat w$ is a standard estimate of $w \in \mathbb{R}^p$ with respect to $n$. That is, $E\hat w \to w$ as $n \to \infty$, and for $r \ge 1$ and $1 \le i_1, \ldots, i_r \le p$, the $r$th-order cumulants of $\hat w$ can be expanded as
$$\kappa(\hat w_{i_1}, \ldots, \hat w_{i_r}) \approx \sum_{j=r-1}^{\infty} \bar k_j^{1\cdots r}\, n^{-j}, \quad\text{where } \bar k_j^{1\cdots r} = k_j^{i_1 \cdots i_r}, \qquad (2.10)$$
and the cumulant coefficients $\bar k_j^{1\cdots r} = k_j^{i_1 \cdots i_r}$ may depend on $n$ but are bounded as $n \to \infty$. So $\bar k_0^1 = \bar w_1 = w_{i_1}$. Then
$$X_n = n^{1/2}(\hat w - w) \to_{\mathcal L} \mathcal{N}_p(0, V), \quad\text{where } V = (k_1^{i_1 i_2}),\ p \times p, \qquad (2.11)$$
with density and distribution
$$\phi_V(x) = (2\pi)^{-p/2} (\det V)^{-1/2} \exp(-x' V^{-1} x/2), \qquad \Phi_V(x) = \int_{-\infty}^{x} \phi_V(t)\, dt.$$
$V$ may depend on $n$, but we assume that $\det(V)$ is bounded away from 0. By Withers and Nadarajah (2010b) or Withers (2024), the distribution and density of $X_n = n^{1/2}(\hat w - w)$ can be expanded as
$$\mathrm{Prob}(X_n \le x) \approx \sum_{r=0}^{\infty} n^{-r/2} P_r(x), \qquad p_{X_n}(x) \approx \sum_{r=0}^{\infty} n^{-r/2} p_r(x), \quad x \in \mathbb{R}^p, \qquad (2.12)$$
where $P_0(x) = \Phi_V(x)$, $p_0(x) = \phi_V(x)$, and for $r \ge 1$,
$$P_r(x) = \sum_{k=1}^{3r} \big[\bar P_r^{1\cdots k}\, \bar H^*_{1\cdots k}(x, V) : k - r \text{ even}\big], \qquad (2.14)$$
$$p_r(x)/\phi_V(x) = \sum_{k=1}^{3r} \big[\bar P_r^{1\cdots k}\, \bar H_{1\cdots k}(x, V) : k - r \text{ even}\big] = \tilde p_r(x) \text{ say}. \qquad (2.15)$$
(2.14) and (2.15) use the tensor summation convention of implicitly summing $i_1, \ldots, i_r$ over their range $1, \ldots, p$, and the $\bar P_r^{1\cdots k}$ are the Edgeworth coefficients. These are polynomials in the cumulant coefficients $\bar k_j^{1\cdots r}$ of (2.10); see Withers (2025a) for their definition. The Edgeworth coefficients needed for Edgeworth expansions to $O(n^{-3/2})$ are
$$\bar P_1^1 = \bar k_1^1, \qquad \bar P_1^{1\cdots 3} = \bar k_2^{1\cdots 3}/6, \qquad \bar P_2^{12} = \bar k_1^1 \bar k_1^2 + \bar k_2^{12}/2,$$
$$\bar P_2^{1\cdots 4} = \bar k_3^{1\cdots 4}/24 + S\, \bar k_1^1 \bar k_2^{2\cdots 4}/6 + S\, \bar k_1^4 \bar k_2^{1\cdots 3}/6, \qquad \bar P_2^{1\cdots 6} = S\, \bar k_2^{1\cdots 3}\, \bar k_2^{4\cdots 6}/36,$$
where $S$ is the operator that symmetrizes $\bar f^{1\cdots k} = f^{i_1, \ldots, i_k}$. So,
$$P_1(x) = \bar k_1^1\, \bar H^*_1(x, V) + \bar k_2^{1\cdots 3}\, \bar H^*_{1\cdots 3}(x, V)/6, \qquad \tilde p_1(x) = \bar k_1^1\, \bar H_1(x, V) + \bar k_2^{1\cdots 3}\, \bar H_{1\cdots 3}(x, V)/6,$$
$$P_2(x) = \sum_{k=2,4,6} \bar P_2^{1\cdots k}\, \bar H^*_{1\cdots k}(x, V), \qquad \tilde p_2(x) = \sum_{k=2,4,6} \bar P_2^{1\cdots k}\, \bar H_{1\cdots k}(x, V).$$
For $\bar P_3^{1\cdots k}$ see Withers (2025a). These give the Edgeworth expansions for the distribution of $X_n = n^{1/2}(\hat w - w)$ to $O(n^{-2})$. See Withers (2025a) for more details.
eECF expansions for parametric and nonparametric standard estimates were first given in Withers (1984), and extended to functions of them in Withers (1982, 1983).
In Section 4 we show that for $\hat\theta$ and $\hat w$ sample cross-moments, only the first two terms in (2.1) and (2.10) are non-zero for $r \ge 2$.

3. Cumulant Coefficients for Functions of Unbiased Standard Estimates

This section summarises the results needed in Section 5 from Withers (1982, 2024).
Suppose that $\hat w \in \mathbb{R}^p$ is a standard estimate satisfying (2.10), and that
$$\theta = t(w) : \mathbb{R}^p \to \mathbb{R}^q \qquad (3.1)$$
is a smooth function with finite derivatives
$$t_{\cdot a_1 a_2 \cdots} = \partial_{a_1} \partial_{a_2} \cdots\, t(w), \quad\text{where } \partial_a = \partial/\partial w_a. \qquad (3.2)$$
For $b = 1, \ldots, q$, let $\theta^b$ be the $b$th component of $\theta$. Then by Withers (1982, 2024), $\hat\theta = t(\hat w) \in \mathbb{R}^q$ is a standard estimate of $\theta = t(w)$. So its $r$th-order cross-cumulants are of magnitude $n^{1-r}$, with an expansion of the form
$$\kappa(\hat\theta^{b_1}, \ldots, \hat\theta^{b_r}) \approx \sum_{j=r-1}^{\infty} n^{-j}\, K_{nj}^{b_1 \cdots b_r}, \qquad (3.3)$$
where $b_1, \ldots, b_r$ lie in $1, \ldots, q$, $K_{n0}^{b} = \theta^b$, and the cumulant coefficients for $\hat\theta$, the $K_{nj}^{b_1 \cdots b_r}$, are given in terms of the cumulant coefficients for $\hat w$, $\{\bar k_j^{1\cdots r} = k_j^{a_1 \cdots a_r}\}$ of (2.10), and the derivatives $t_{\cdot a_1 a_2 \cdots}$, by the appendix to Withers (1982), and in more detail in Theorem 2 of Withers (2024), where we now assume that $E\hat w = w$. So,
$$Z_n = n^{1/2}(\hat\theta - \theta) \to_{\mathcal L} \mathcal{N}_q(0, \tilde V) \ \text{as } n \to \infty, \quad\text{where } \tilde V = (K_{n1}^{b_1 b_2}), \qquad (3.4)$$
$$K_{n1}^{b_1 b_2} = t_{a_1}^{b_1} k_1^{a_1 a_2} t_{a_2}^{b_2}, \qquad K_{n1}^{b_1} = t_{a_1 a_2}^{b_1} k_1^{a_1 a_2}/2, \qquad (3.5)$$
where we use the tensor summation convention of implicit summation of repeated pairs, in this case $a_1, a_2$, over their range $1, \ldots, p$. That is,
$$K_{n1}^{b_1 b_2} = \sum_{a_1, a_2 = 1}^{p} t_{a_1}^{b_1} k_1^{a_1 a_2} t_{a_2}^{b_2}, \qquad K_{n1}^{b_1} = \sum_{a_1, a_2 = 1}^{p} t_{a_1 a_2}^{b_1} k_1^{a_1 a_2}/2. \qquad (3.6)$$
Similarly,
$$K_{n2}^{b_1 b_2 b_3} = t_{a_1}^{b_1} t_{a_2}^{b_2} t_{a_3}^{b_3} k_2^{a_1 a_2 a_3} + \sum_{b_1 b_2 b_3}^{3} s^{a_1 b_1}\, t_{a_1 a_3}^{b_2}\, s^{a_3 b_3}, \quad\text{where } s^{a_1 b_1} = k_1^{a_1 a_2} t_{a_2}^{b_1},$$
$$K_{n2}^{b_1 b_2} = t_{a_1}^{b_1} t_{a_2}^{b_2} k_2^{a_1 a_2} + \sum_{b_1 b_2}^{2} \big[ t_{a_1 a_2}^{b_1} t_{a_3}^{b_2} k_2^{a_1 a_2 a_3} + t_{a_1 a_2 a_3}^{b_1} s^{a_3 b_2} k_1^{a_1 a_2} \big]/2 + v^{a_3 b_1 a_2}\, v^{a_2 b_2 a_3}/2, \quad\text{where } v^{a_3 b_1 a_2} = t_{a_1 a_3}^{b_1} k_1^{a_1 a_2},$$
$$K_{n3}^{b_1 \cdots b_4} = t_{a_1}^{b_1} \cdots t_{a_4}^{b_4} k_3^{a_1 \cdots a_4} + 12\, k_2^{a_1 a_2 a_3} t_{a_2}^{b_2} t_{a_3}^{b_3} u^{a_1 b_1 b_4} + 4\, t_{a_1 a_3 a_5}^{b_1} s^{a_1 b_2} s^{a_3 b_3} s^{a_5 b_4} + 12\, u^{a_1 b_1 b_3} k_1^{a_1 a_2} u^{a_2 b_2 b_4}, \quad\text{where } u^{a_1 b_1 b_4} = t_{a_1 a_4}^{b_1} s^{a_4 b_4}.$$
These are the coefficients needed for P r ( x ) , p r ( x ) of (2.12) for r = 1 , 2 when X n of (2.11) is replaced by Z n of (3.4).
The case $q = 1$. By (2.1), we can replace $K_{nj}^{b_1 \cdots b_r}$ by $a_{rj}$, where by (3.3)–(3.6),
$$a_{10} = \theta, \qquad a_{21} = t_{a_1} k_1^{a_1 a_2} t_{a_2} = t_{a_1} s^{a_1} \quad\text{for } s^{a_1} = k_1^{a_1 a_2} t_{a_2},$$
$$a_{11} = t_{a_1 a_2} k_1^{a_1 a_2}/2, \qquad a_{32} = t_{a_1} t_{a_2} t_{a_3} k_2^{a_1 a_2 a_3} + 3\, s^{a_1} t_{a_1 a_3} s^{a_3},$$
$$a_{22} = t_{a_1} k_2^{a_1 a_2} t_{a_2} + t_{a_1 a_2} k_2^{a_1 a_2 a_3} t_{a_3} + s^{a_3} t_{a_1 a_2 a_3} k_1^{a_1 a_2} + v^{a_3 a_2} v^{a_2 a_3}/2,$$
$$a_{43} = t_{a_1} \cdots t_{a_4} k_3^{a_1 \cdots a_4} + 12\, u^{a_1} k_2^{a_1 a_2 a_3} t_{a_2} t_{a_3} + 4\, t_{a_1 a_3 a_5} s^{a_1} s^{a_3} s^{a_5} + 12\, u^{a_1} k_1^{a_1 a_2} u^{a_2},$$
for $v^{a_3 a_2} = t_{a_1 a_3} k_1^{a_1 a_2}$ and $u^{a_1} = t_{a_1 a_2} s^{a_2}$. This gives the $a_{ri}$ needed for $\{h_r, \bar h_r, f_r, g_r,\ r = 1, 2\}$ of Section 2. So if $a_{21}$ is bounded away from 0 as $n \to \infty$, then $Y_n = (n/a_{21})^{1/2}(\hat\theta - \theta)$ has the eECF expansions (2.3)–(2.5).
For any $f^{12}$, set
$$\sum_{12}^{2} f^{12} = f^{12} + f^{21}, \qquad \sum_{123}^{3} f^{12} = f^{12} + f^{23} + f^{31}. \qquad (3.13)$$
If $p = 1$, then $s^1 = t_1 k_1^{11}$, $u^1 = t_{11} s^1 = t_1 t_{11} k_1^{11}$, $v^{11} = t_{11} k_1^{11}$, and
$$a_{21} = t_1^2 k_1^{11} = t_1 s^1, \qquad a_{11} = t_{11} k_1^{11}/2, \qquad a_{32} = t_1^3 k_2^{111} + 3 (s^1)^2 t_{11},$$
$$a_{22} = t_1^2 k_2^{11} + t_1 t_{11} k_2^{111} + s^1 t_{111} k_1^{11} + (v^{11})^2/2,$$
$$a_{43} = t_1^4 k_3^{1111} + 12\, k_2^{111} t_1^2 t_{11} s^1 + 4\, t_{111} (s^1)^3 + 12\, k_1^{11} (u^1)^2.$$
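As a quick numerical companion (ours, not from the paper), the $p = 1$ reductions above can be coded directly from the derivatives $t_1, t_{11}, t_{111}$ and the cumulant coefficients $k_1^{11}, k_2^{11}, k_2^{111}$ of the underlying estimate:

```python
def coeffs_p1(t1, t11, t111, k1_11, k2_11, k2_111):
    """Cumulant coefficients a21, a11, a32, a22 for a smooth function
    theta = t(w) of a scalar (p = 1) unbiased standard estimate,
    following the p = 1 reductions above."""
    s1 = k1_11 * t1            # s^1 = k_1^{11} t_1
    v11 = t11 * k1_11          # v^{11} = t_{11} k_1^{11}
    a21 = t1**2 * k1_11
    a11 = t11 * k1_11 / 2.0
    a32 = t1**3 * k2_111 + 3.0 * s1**2 * t11
    a22 = t1**2 * k2_11 + t1 * t11 * k2_111 + s1 * t111 * k1_11 + v11**2 / 2.0
    return a21, a11, a32, a22
```

For a linear function $t(w) = cw$ the second- and third-derivative terms vanish, leaving $a_{21} = c^2 k_1^{11}$, $a_{11} = 0$, $a_{32} = c^3 k_2^{111}$, $a_{22} = c^2 k_2^{11}$.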
If $p = 2$, as in Examples 5.3 and 5.4 below, then $s_j = \sum_{i=1}^{2} k_1^{ji} t_i$ and $u_k = \sum_{j=1}^{2} t_{kj} s_j$, and (3.9)–(3.12) can be written, using (3.13), as
$$a_{21} = \sum_{i=1}^{2} t_i^2 k_1^{ii} + 2 t_1 t_2 k_1^{12}, \qquad a_{11} = \sum_{i=1}^{2} t_{ii} k_1^{ii}/2 + t_{12} k_1^{12}, \qquad (3.15)$$
$$a_{32} = \sum_{i=1}^{2} t_i^3 k_2^{iii} + 3 \sum_{12}^{2} t_1^2 t_2 k_2^{112} + 3 \sum_{j=1}^{2} s_j^2 t_{jj} + 6 s_1 s_2 t_{12}, \qquad (3.16)$$
$$a_{22} = t_i k_2^{ij} t_j + \sum_{i=1}^{2} t_i \big(t_{ii} k_2^{iii} + 2 t_{12} k_2^{12i}\big) + \sum_{12}^{2} t_{11} k_2^{112} t_2 + \sum_{i=1}^{2} s_i t_{iii} k_1^{ii} + \sum_{12}^{2} s_1 \big(2 t_{112} k_1^{12} + t_{122} k_1^{22}\big) + v^{ji} v^{ij}/2 = \sum_{k=1}^{6} a_{22k} \ \text{say}, \qquad (3.17)$$
$$a_{43} = \sum_{k=1}^{4} a_{43k}, \quad\text{where } a_{431} = \sum_{i=1}^{2} t_i^4 k_3^{iiii} + 4 \sum_{12}^{2} t_1^3 t_2 k_3^{1112} + 6 t_1^2 t_2^2 k_3^{1122}, \qquad (3.18)$$
$$a_{432} = 12\, u_{a_1} t_{a_2} t_{a_3} k_2^{a_1 a_2 a_3}, \qquad a_{433}/4 = \sum_{i=1}^{2} s_i^3 t_{iii} + 3 s_1^2 s_2 t_{112} + 3 s_1 s_2^2 t_{122}, \qquad a_{434}/12 = \sum_{i=1}^{2} u_i^2 k_1^{ii} + 2 u_1 u_2 k_1^{12}.$$
If $p = 3$, as in Examples 5.5 and 5.6 below, then $s_j = \sum_{i=1}^{3} k_1^{ji} t_i$, and (3.9) and (3.10) can be written, using (3.13), as
$$a_{21} = \sum_{i=1}^{3} t_i^2 k_1^{ii} + 2 \sum_{123}^{3} t_1 t_2 k_1^{12}, \qquad a_{11} = \sum_{i=1}^{3} t_{ii} k_1^{ii}/2 + \sum_{123}^{3} t_{12} k_1^{12}, \qquad (3.19)$$
$$a_{32} = \sum_{i=1}^{3} t_i^3 k_2^{iii} + 3 \sum_{123}^{3} t_1^2 t_2 k_2^{112} + 6 t_1 t_2 t_3 k_2^{123} + 3 \sum_{j=1}^{3} s_j^2 t_{jj} + 6 \sum_{123}^{3} s_1 s_2 t_{12}.$$

4. The Cumulants of the Sample Cross-Moments

This section uses the notation of Section 2 of Withers (2025b). Let $\ldots, X_{-1}, X_0, X_1, \ldots$ be any real stationary process with finite mean, cross-moments, and cross-cumulants,
$$\mu = E X_0, \qquad M_{i_1 \cdots i_r} = E X_{i_1} \cdots X_{i_r}, \qquad \mu_{i_1 \cdots i_r} = E (X_{i_1} - \mu) \cdots (X_{i_r} - \mu), \qquad (4.1)$$
$$k(i_1, \ldots, i_r) = \kappa(X_{i_1}, \ldots, X_{i_r}). \quad\text{So } \mu_{0^r} = \mu_r(X_0) \ \text{and } k(0^r) = \kappa_r(X_0), \qquad (4.2)$$
where $0^r$ denotes a string of $r$ zeros. For relationships between them see Section 3.30 of Stuart and Ord (1987). Multivariate relations can be written down from their univariate versions. For example, for $M_r$, $\mu_r$ and $\kappa_r$ the $r$th moment, central moment and cumulant of $X \in \mathbb{R}$,
$$M_2 = \kappa_2 + \kappa_1^2 \ \Rightarrow\ M_{11} = \kappa_{11} + \kappa_{10}\kappa_{01}, \qquad \kappa_4 = \mu_4 - 3\mu_2^2 \ \Rightarrow\ k(1234) = \mu_{1234} - \mu_{12}\mu_{34} - \mu_{13}\mu_{24} - \mu_{14}\mu_{23}.$$
Given a sequence of integers $i_1, \ldots, i_r$, set
$$i_0 = \min_{k=1}^{r} i_k, \qquad I_k = i_k - i_0 \ge 0, \qquad I_0 = I_0(i_1 \cdots i_r) = \max_{k=1}^{r} I_k = \max_{k=1}^{r} i_k - i_0. \qquad (4.3)$$
So $M_{i_1 \cdots i_r} = M_{I_1 \cdots I_r}$, $\mu_{i_1 \cdots i_r} = \mu_{I_1 \cdots I_r}$, and $\kappa_{i_1 \cdots i_r} = \kappa_{I_1 \cdots I_r}$. These are not changed by permuting subscripts. Also, at least one $I_k$ is zero. For $a \ge 0$ and $k(\cdot)$ of (4.2), the $a$th autocovariance and autocorrelation are
$$\mathrm{covar}(X_0, X_a) = k(0, a) \quad\text{and}\quad \mathrm{corr}(X_0, X_a) = k(0, a)/k(0^2).$$
Also, $k(0, T) = k(0, |T|)$; and if $T_1 < T_2 < 0$ or $T_1 < 0 < T_2$, then $k(0, T_1, T_2) = k(0, T_2 - T_1, -T_1) = k(0, |T_2 - T_1|, |T_1|)$.
Now suppose that we observe only $X_1, \ldots, X_n$. For $I_1, \ldots, I_r$ and $n > I_0 = I_0(i_1 \cdots i_r)$ of (4.3), define the sample non-central cross-moment
$$\hat M_{i_1 \cdots i_r} = \hat M_{I_1 \cdots I_r} = N^{-1} \sum_{t=1}^{N} X_{t+I_1} \cdots X_{t+I_r}, \quad\text{where } N = n - I_0. \qquad (4.6)$$
This is an unbiased estimate of $M_{i_1 \cdots i_r}$ of (4.1). For example, $\mu = E X_0$ and $M_{0a} = E X_0 X_a$ have unbiased estimates
$$\hat\mu = \bar X = \hat M_0 = n^{-1} \sum_{j=1}^{n} X_j, \qquad (4.7)$$
$$\hat M_{0a} = N^{-1} \sum_{j=1}^{N} X_j X_{j+a} \quad\text{for } N = n - a > 0 \qquad (4.8)$$
and $a \ge 0$ an integer. We can write (4.6) as
$$\hat M_\pi = N^{-1} \sum_{t=1}^{N} X_t(\pi) \quad\text{for } N = n - I_0(\pi) > 0, \ \text{that is, for } n > I_0(\pi).$$
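A direct numerical sketch (ours) of the estimates (4.6)–(4.8); here `xs` holds the observed series $X_1, \ldots, X_n$ as a 0-indexed Python list:

```python
def sample_cross_moment(xs, pi):
    """Unbiased sample cross-moment M-hat_pi of (4.6): given offsets
    pi = (i_1, ..., i_r), average X_{t+I_1}...X_{t+I_r} over t = 1..N,
    where I_k = i_k - min(pi) and N = n - max(I_k)."""
    i0 = min(pi)
    I = [i - i0 for i in pi]
    N = len(xs) - max(I)
    if N <= 0:
        raise ValueError("need n > I_0(pi)")
    total = 0.0
    for t in range(N):          # t is 0-indexed here
        prod = 1.0
        for Ik in I:
            prod *= xs[t + Ik]
        total += prod
    return total / N

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(sample_cross_moment(xs, (0,)))     # sample mean (4.7): 3.0
print(sample_cross_moment(xs, (0, 1)))   # M-hat_{01} of (4.8): (2+6+12+20)/4 = 10.0
```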
Given $p \ge 1$ and sequences of integers $\pi_1, \ldots, \pi_p$, for $j = 1, \ldots, p$, set
$$w_j = M_{\pi_j}, \qquad \hat w_j = \hat M_{\pi_j}, \qquad w = (w_1, \ldots, w_p), \qquad \hat w = (\hat w_1, \ldots, \hat w_p), \qquad N_j = n - I_0(\pi_j). \qquad (4.9)$$
We shall see that (2.10) holds with $\bar k_j^{1\cdots r} = 0$ for $j > r \ge 2$, so that $\hat w$ is a standard estimate. Also,
$$\kappa(\hat w_1, \ldots, \hat w_r) = (N_1 \cdots N_r)^{-1} \sum_{t_1=1}^{N_1} \cdots \sum_{t_r=1}^{N_r} K_{\pi_1 \cdots \pi_r}^{t_1 \cdots t_r}, \qquad (4.10)$$
$$\text{where } K_{\pi_1 \cdots \pi_r}^{t_1 \cdots t_r} = \kappa(X_{t_1}(\pi_1), \ldots, X_{t_r}(\pi_r)) \quad\text{and } X_t(i_1 \cdots i_r) = X_{t+I_1} \cdots X_{t+I_r}. \qquad (4.11)$$
Example 4.1. 
If $\pi_1 = (0)$ and $\pi_2 = (0, a)$, then $I_0(\pi_1) = 0$, $I_0(\pi_2) = a$, $N_1 = n$, $N_2 = n - a$, $X_t(\pi_1) = X_t$, $X_t(\pi_2) = X_t X_{t+a}$, and $K_{\pi_1 \pi_2}^{t_1 t_2} = \kappa(X_{t_1}, X_{t_2} X_{t_2+a})$. So
$$\kappa_2(\hat w_1) = n^{-2} \sum_{t_1, t_2 = 1}^{n} \kappa(X_{t_1}, X_{t_2}), \qquad \kappa_2(\hat w_2) = N_2^{-2} \sum_{t_1, t_2 = 1}^{N_2} \kappa(X_{t_1} X_{t_1+a}, X_{t_2} X_{t_2+a}),$$
$$\kappa(\hat w_1, \hat w_2) = (n N_2)^{-1} \sum_{t_1=1}^{n} \sum_{t_2=1}^{N_2} \kappa(X_{t_1}, X_{t_2} X_{t_2+a}).$$
$K_{\pi_1 \cdots \pi_r}^{t_1 \cdots t_r}$ can be written in terms of the cross-cumulants of $\{X_j\}$ using pp. 254–265 of McCullagh (1987) if $L = L_1 + \cdots + L_r \le 8$, where $L_i$ is the length of the sequence $\pi_i$. See his p. 58 and the appendix below for some examples. So this covers $\kappa_4(\hat M_{i_1 i_2})$, which has $L = 8$, but not $\kappa_3(\hat M_{i_1 i_2 i_3})$, which has $L = 9$. So at present we can obtain the eECF expansions for $\hat M_{i_1 i_2}$ to $O(n^{-3/2})$, but the eECF expansions for $\hat M_{i_1 i_2 i_3}$ only to $O(n^{-1})$.
Under mild conditions, $n^{r-1} \kappa(\hat w_{i_1}, \ldots, \hat w_{i_r})$ is bounded as $n$ increases, as the examples show.
Theorem 4.1. 
Take $r \ge 2$. Let $K(t_1, \ldots, t_r)$ be any symmetric function of integers $t_1, \ldots, t_r$ such that
$$K(t_1, \ldots, t_r) = K(t_1 - T, \ldots, t_r - T) \quad\text{for any integer } T.$$
Take $n \ge 1$ and integers $a_1 < n, \ldots, a_r < n$. Then for $N_j = n - a_j$,
$$\sum_{t_1=1}^{N_1} \cdots \sum_{t_r=1}^{N_r} K(t_1, \ldots, t_r) = \sum_{-N_1 < T_j < N_j,\ j=2,\ldots,r} D_{rNT}\, K(0, T_2, \ldots, T_r), \qquad (4.12)$$
$$\text{where } D_{rNT} = \min(N_1, N_2 - T_2, \ldots, N_r - T_r) + m_{rT} = n - \delta_{rT} = N_1 - \delta^*_{rT}, \qquad (4.13)$$
$$m_{rT} = \min(0, T_2, \ldots, T_r), \qquad (4.14)$$
$$\delta_{rT} = \max(a_1, T_2 + a_2, \ldots, T_r + a_r) - m_{rT}, \qquad (4.15)$$
$$\delta^*_{rT} = \delta_{rT} - a_1 = \max(0, T_2 + a_2 - a_1, \ldots, T_r + a_r - a_1) - m_{rT}. \qquad (4.16)$$
So
$$\mathrm{LHS}(4.12) = n\, a_{r,r-1}^K + a_{rr}^K = N_1\, a_{r,r-1}^K + a_{rr}^{*K}, \quad\text{where} \qquad (4.17)$$
$$(a_{r,r-1}^K,\ a_{rr}^K,\ a_{rr}^{*K}) = \sum_{-N_1 < T_j < N_j,\ j=2,\ldots,r} (1,\ -\delta_{rT},\ -\delta^*_{rT})\, K(0, T_2, \ldots, T_r) \qquad (4.18)$$
$$\to\ (a_{r,r-1},\ a_{rr},\ a_{rr}^*) = \sum_{|T_j| < \infty,\ j=2,\ldots,r} (1,\ -\delta_{rT},\ -\delta^*_{rT})\, K(0, T_2, \ldots, T_r) \qquad (4.19)$$
as $n \to \infty$, when finite, and
$$a_{rr}^{*K} = a_{rr}^K + a_1\, a_{r,r-1}^K, \qquad a_{rr}^* = a_{rr} + a_1\, a_{r,r-1},$$
$$a_{21}^K = \sum_{-N_1 < T < N_2} K(0, T) \ \to\ a_{21} = \sum_{T=-\infty}^{\infty} K(0, T), \qquad (4.21)$$
$$a_{32}^K = \sum_{-N_1 < T_j < N_j,\ j=2,3} K(0, T_2, T_3) \ \to\ a_{32} = \sum_{T_2, T_3 = -\infty}^{\infty} K(0, T_2, T_3), \qquad (4.22)$$
$$a_{22}^K = -\sum_{-N_1 < T < N_2} \delta_{2T}\, K(0, T) \ \to\ a_{22} = -\sum_{T=-\infty}^{\infty} \delta_{2T}\, K(0, T), \quad\text{where } \delta_{2T} = \max(a_1, T + a_2) - \min(0, T), \qquad (4.23)$$
$$a_{43}^K = \sum_{-N_1 < T_j < N_j,\ j=2,3,4} K(0, T_2, T_3, T_4) \ \to\ a_{43} = \sum_{T_2, T_3, T_4 = -\infty}^{\infty} K(0, T_2, T_3, T_4). \qquad (4.24)$$
Now suppose that $a_i \equiv a$. Then $N_i \equiv N = n - a$,
$$\delta^*_{rT} = \max(0, T_2, \ldots, T_r) - \min(0, T_2, \ldots, T_r), \qquad \delta^*_{2T} = |T_2|, \qquad (4.25)$$
$$\delta^*_{3T} = T_3\, I(0 \le T_2 < T_3) + (T_3 - T_2)\, I(T_2 \le 0 < T_3) - T_2\, I(T_2 < T_3 \le 0),$$
$$(a_{r,r-1}^K,\ a_{rr}^K,\ a_{rr}^{*K}) = \sum_{|T_j| < N,\ j=2,\ldots,r} (1,\ -\delta_{rT},\ -\delta^*_{rT})\, K(0, T_2, \ldots, T_r). \qquad (4.26)$$
PROOF. Transform from $t_i$ to $T_i = t_i - t_1$ for $i = 2, \ldots, r$. So (4.12) holds with
$$D_{rNT} = \sum_{t_1=1}^{N_1} I(-T_j < t_1 \le N_j - T_j,\ j = 2, \ldots, r) = \min(N_1, N_2 - T_2, \ldots, N_r - T_r) - \max(0, -T_2, \ldots, -T_r). \ \square$$
For $i = r-1, r$ this gives two choices for the $(a_{r,r-1}, a_{rr})$ needed for (2.1)–(2.5): namely $(a_{r,r-1}^K, a_{rr}^K)$ of (4.18), and its limit $(a_{r,r-1}, a_{rr})$ of (4.19). When $a_i \equiv a$, we have another two choices for $a_{rr}$: namely $a_{rr}^{*K}$ of (4.18), and its limit $a_{rr}^*$ of (4.19). These limits can be used when the differences, for example $a_{rr}^K - a_{rr}$, are exponentially small in $n$, that is, of magnitude $O(e^{-n\lambda})$ for some $\lambda > 0$. When $a_i \equiv a$, $\delta^*_{rT}$ of (4.25) is simpler than $\delta_{rT} = a + \delta^*_{rT}$, so $a_{rr}^{*K}$ is simpler than $a_{rr}^K$, and the expansions of Section 2 may be simpler if $n$ is replaced by $N_1$, as in Example 5.2.
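The counting identity underlying (4.12) is easy to verify numerically. A sketch (ours) for $r = 2$ and $a_1 = a_2 = 0$, where $D_{2NT} = n - |T|$, using an illustrative geometric kernel $K(0, T) = 2^{-|T|}$:

```python
def lhs_double_sum(K, n):
    """Direct double sum sum_{t1,t2=1}^{n} K(t1 - t2) for a
    shift-invariant symmetric kernel K(T) = K(0, T)."""
    return sum(K(t1 - t2) for t1 in range(1, n + 1) for t2 in range(1, n + 1))

def rhs_counting(K, n):
    """Theorem 4.1 with r = 2 and a_1 = a_2 = 0: the same sum equals
    sum_{|T| < n} (n - |T|) K(T), i.e. n*a21K + a22K with
    a22K = -sum_{|T| < n} |T| K(T)."""
    return sum((n - abs(T)) * K(T) for T in range(-(n - 1), n))

K = lambda T: 0.5 ** abs(T)   # illustrative stationary kernel
print(abs(lhs_double_sum(K, 10) - rhs_counting(K, 10)) < 1e-12)  # True
```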
Corollary 4.1. 
Take $a_j = I_0(\pi_j)$, $N_j = n - a_j$, $w_j, \hat w_j$ of (4.9), and $D_{rNT}, \delta_{rT}, \delta^*_{rT}$ of (4.13)–(4.16). Then for $K$ of (4.11),
$$\kappa(\hat w_1, \ldots, \hat w_r) = \sum_{j=r-1}^{r} n^{-j}\, k_j^{1\cdots r}, \quad\text{where } k_j^{1\cdots r} = n^r (N_1 \cdots N_r)^{-1} a_{rj}^K, \qquad (4.27)$$
$$(a_{r,r-1}^K,\ a_{rr}^K) = \sum_{-N_1 < T_j < N_j,\ j=2,\ldots,r} (1,\ -\delta_{rT})\, K_{\pi_1 \pi_2 \cdots \pi_r}^{0\, T_2 \cdots T_r}. \qquad (4.28)$$
So $D_{rNT} = n - \delta_{rT} = N_1 - \delta^*_{rT}$, and (4.17)–(4.24) hold with $K(0, T_2, \ldots, T_r)$ replaced by $K_{\pi_1 \pi_2 \cdots \pi_r}^{0\, T_2 \cdots T_r}$.
PROOF. By (4.10), (4.12), and (4.18), for $k_j^{1\cdots r}$ of (4.27),
$$\mathrm{LHS}(4.10) = (N_1 \cdots N_r)^{-1}\, \mathrm{RHS}(4.12) = (N_1 \cdots N_r)^{-1}\big(n\, a_{r,r-1}^K + a_{rr}^K\big) = \sum_{j=r-1}^{r} n^{-j}\, k_j^{1\cdots r}. \ \square$$
For its univariate version, we have the option of expanding in $N^{-1/2}$ rather than in $n^{-1/2}$ as in Section 2.
Corollary 4.2. 
Given a sequence of integers $\pi$, set
$$N = n - I_0(\pi), \qquad \pi^r = (\pi, \ldots, \pi).$$
Then
$$\kappa_r(\hat M_\pi) = N^{1-r}\, a_{r,r-1}^K + N^{-r}\, a_{rr}^K, \quad\text{where } (a_{r,r-1}^K,\ a_{rr}^K) = \sum_{|T_j| < N,\ j=2,\ldots,r} (1,\ -\delta^*_{rT})\, K_{\pi \pi \cdots \pi}^{0\, T_2 \cdots T_r},$$
for $a_{rj}^K$ of (4.28). So the eECF expansions for $Y_n$ of (2.2), that is, (2.3)–(2.5), hold with $(n, Y_n, a_{r,r-1}, a_{rr})$ replaced by $(N, Y_N, a_{r,r-1}^K, a_{rr}^K)$, where
$$Y_N = (N/a_{21}^K)^{1/2}(\hat M_\pi - M_\pi).$$

5. Examples

Example 5.1. 
The eECF expansions to $O(n^{-3/2})$ for the sample mean $\hat\mu$ of (4.7). Take $p = 1$ and $\pi = (0)$, so that $\hat M_0 = \hat\mu = \bar X$. For $r \ge 2$ and $K(\cdot) = k(\cdot)$ of (4.2),
$$\kappa_r(\bar X) = n^{1-r}\, a_{r,r-1}^k + n^{-r}\, a_{rr}^k \ \text{of (4.18) with } N_1 = n, \ = n^{1-r}\, a_{r,r-1} + n^{-r}\, \tilde a_{rr} + O(e^{-n\lambda}) \ \text{of (4.19), where } \lambda > 0, \qquad (5.1)$$
under mild conditions. (2.1)–(2.5) hold for
$$\hat w = \bar X \ \text{with } a_{10} = \mu \ \text{and } a_{1j} = 0 \ \text{for } j \ge 1.$$
So (2.3)–(2.7) hold for $Y_n = (n/a_{21})^{1/2}(\bar X - \mu)$, where for $r \ge 2$,
$$(a_{r,r-1}, a_{rr}) = (a_{r,r-1}^k, a_{rr}^k) \ \text{of (4.18)}, \ \text{or } (a_{r,r-1}, a_{rr}) \ \text{of (4.19)},$$
and $a_{ri} = 0$ for $i > r$. This example was the subject of Withers (2025b); see there for extensions to missing values and weighted means.
Example 5.2. 
The eECF expansions to $O(n^{-3/2})$ for the sample autocovariance, assuming that $\mu = 0$. (This assumption is common in the literature, on the grounds that the series can be adjusted by subtracting the estimated mean. But by Example 5.3, it gives the wrong variance if $\mu \ne 0$!) In this case $p = 1$ and, for $a \ge 0$, $w = M_{0a} = E X_0 X_a$ has unbiased estimate $\hat w = \hat M_{0a}$ of (4.8). So $\pi = (0, a)$, $N = n - a$, and by Corollary 4.2,
$$\kappa_r(\hat M_{0a}) = N^{1-r}\, a_{r,r-1}^K + N^{-r}\, a_{rr}^K \ \to\ N^{1-r}\, a_{r,r-1} + N^{-r}\, a_{rr} \ \text{of (4.18) and (4.19)}, \qquad (5.2)$$
$$\text{where } K(t_1, \ldots, t_r) = \kappa(X_{t_1}(\pi), \ldots, X_{t_r}(\pi)) \ \text{for } X_t(\pi) = X_t X_{t+a}. \qquad (5.3)$$
So (2.3)–(2.5) hold with $(n, Y_n)$ replaced by $(N, Y_N)$, where
$$Y_N = (N/a_{21})^{1/2}(\hat M_{0a} - M_{0a}),$$
for $a_{ri} = a_{ri}^K$ of (4.26), or its limit when the difference is exponentially small. Alternatively, by (4.27),
$$\kappa_r(\hat M_{0a}) = n^{1-r}\, a_{r,r-1}^K + n^{-r}\, a_{rr}^K \approx n^{1-r}\, a_{r,r-1} + n^{-r}\, a_{rr},$$
so that (2.3)–(2.5) hold for $Y_n = (n/a_{21})^{1/2}(\hat M_{0a} - M_{0a})$. We now give the $K(\cdot)$ needed for (4.21)–(4.24), that is, the $K(0, T_2, \ldots, T_r)$ of (5.3) for $2 \le r \le 4$, in terms of the cross-cumulants of the process, $k(\cdot)$ of (4.2). By (A2), since $\mu = 0$, $A_2 = A_4 = 0$, and $K(0, T)$, needed by (4.21) and (4.23) for $a_{21}$ and $a_{22}$, is
$$K(0, T) = \kappa(X_0 X_a, X_T X_{T+a}) = A_1 + A_3, \quad\text{where } A_1 = k(0, a, T, T+a), \quad A_3 = k(0, T)^2 + k(0, T+a)\, k(a, T). \qquad (5.5)$$
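For a Gaussian process the fourth-order cumulant $A_1$ in (5.5) vanishes, leaving $K(0,T) = k(0,T)^2 + k(0,T+a)\,k(0,T-a)$ (using $k(a,T) = k(0,T-a)$ by stationarity), and (4.21) then reduces to Bartlett's classical formula for the asymptotic variance of the sample autocovariance. A truncated-sum sketch (ours), assuming an illustrative autocovariance $k(0,T) = \rho^{|T|}$:

```python
def a21_gaussian(gamma, a, trunc=200):
    """Truncated version of a21 = sum_T K(0,T) of (4.21) in the
    Gaussian case, where the fourth-cumulant term A_1 of (5.5)
    vanishes and K(0,T) = gamma(T)**2 + gamma(T+a)*gamma(T-a)."""
    return sum(gamma(T)**2 + gamma(T + a) * gamma(T - a)
               for T in range(-trunc, trunc + 1))

# assumed (illustrative) autocovariance gamma(T) = rho**|T|
rho = 0.5
gamma = lambda T: rho ** abs(T)
print(a21_gaussian(gamma, a=1))  # close to 31/12 for rho = 0.5
```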
$a_{32}$ of (4.22) needs
$$K(0, T_2, T_3) = \kappa(X_0 X_a, X_{T_2} X_{T_2+a}, X_{T_3} X_{T_3+a}) = \sum_{j=1}^{11} C_j, \qquad (5.6)$$
given by identifying $(0, a, T_2, T_2+a, T_3, T_3+a)$ with $(1, \ldots, 6)$ in (A13). So $C_j = 0$ for $j = 2, 4, 7, 8, 9, 11$, and $C_1 = k(0, a, T_2, T_2+a, T_3, T_3+a)$,
$$C_3 = \sum_{23}^{2}\big[k(0, T_2)\, k(0, T_2, T_3, T_3-a) + k(0, T_2+a)\, k(0, T_2-a, T_3, T_3-a) + k(0, T_2-a)\, k(0, T_2+a, T_3, T_3+a) + k(0, T_2)\, k(0, T_2, T_3, T_3+a) + k(0, T_3-T_2+a)\, k(0, a, T_2+a, T_3)\big] + 2\, k(0, T_3-T_2)\, k(0, a, T_2, T_3), \ \text{by (A14)},$$
$$C_5 = \sum_{23}^{2}\big[k(0, a, T_2)\, k(0, a, T_2-T_3+a) + k(0, a, T_2+a)\, k(0, a, T_2-T_3) + k(0, T_2, T_2+a)\, k(0, T_3, T_3-a)\big], \ \text{by (A15)},$$
$$C_6 = k(0, T_2, T_3)^2 + k(0, T_2+a, T_3+a)\, k(0, T_2-a, T_3-a) + \sum_{23}^{2} k(0, T_2, T_3+a)\, k(0, T_2, T_3-a), \ \text{by (A16)},$$
and, by (A17),
$$C_{10} = k(T_2, T_3) \sum_{23}^{2} k(0, T_2)\big[k(a, T_3) + k(0, T_3+a)\big] + k(0, T_2)\, k(0, T_3) \sum_{23}^{2} k(T_2, T_3+a) + \sum_{23}^{2} k(0, T_2+a)\, k(0, T_3-a)\, k(T_2, T_3+a).$$
This completes the $K(0, T_2, T_3)$ needed for $a_{32} = a_{32}^K$ of (4.22), or its limit. Lastly, $a_{43}$ of (4.24) needs
$$K(0, T_2, T_3, T_4) = \kappa(X_0 X_a, X_{T_2} X_{T_2+a}, X_{T_3} X_{T_3+a}, X_{T_4} X_{T_4+a}). \qquad (5.7)$$
This is given by identifying $(0, a, T_2, T_2+a, T_3, T_3+a, T_4, T_4+a)$ with $(1, \ldots, 8)$ in (A23), starting with $D_1 = \kappa(X_0, X_a, X_{T_2}, X_{T_2+a}, X_{T_3}, X_{T_3+a}, X_{T_4}, X_{T_4+a})$.
Example 5.3. 
The eECF expansions to $O(n^{-3/2})$ for the sample autocovariance, without assuming that $\mu = 0$. In this case we take $p = 2$ in (3.1), $a \ge 0$, and
$$w_1 = \mu, \qquad w_2 = M_{0a}, \qquad \theta = t(w) = \kappa(X_0, X_a) = w_2 - w_1^2.$$
The non-zero derivatives are $t_1 = -2\mu$, $t_2 = 1$, $t_{11} = -2$. For $i = r-1$ and $r$, $k_i^{1^r}$ is the $a_{ri}^k$ of Example 5.1, and $k_i^{2^r}$ is the $a_{ri}^K$ of Example 5.2. So,
$$(k_1^{11}, k_2^{11}, k_2^{111}, k_3^{1111}) = (a_{21}^k, a_{22}^k, a_{32}^k, a_{43}^k) \ \text{of (4.18)}, \qquad (k_1^{22}, k_2^{22}, k_2^{222}, k_3^{2222}) = (a_{21}^K, a_{22}^K, a_{32}^K, a_{43}^K) \ \text{of (5.4)–(5.7)}.$$
By (3.15) and (3.16),
$$a_{21} = 4\mu^2 k_1^{11} - 4\mu k_1^{12} + k_1^{22}, \qquad a_{11} = -k_1^{11}, \qquad (5.8)$$
$$a_{32} = -8\mu^3 k_2^{111} + 12\mu^2 k_2^{112} - 6\mu k_2^{122} + k_2^{222} - 6 s_1^2, \qquad s_j = -2\mu k_1^{1j} + k_1^{2j}. \qquad (5.9)$$
By (4.27) with $r = 2$, $N_1 = n$, $N_2 = N = n - a$,
$$\kappa(\hat\mu, \hat M_{0a}) = (nN)^{-1}\big(n\, a_{21}^K + a_{22}^K\big) = \sum_{j=1}^{2} n^{-j}\, k_j^{12}, \qquad (5.10)$$
where
$$k_j^{12} = (n/N)\, a_{2j}^K, \qquad a_{21}^K = \sum_{T=1-n}^{N-1} K(0, T), \qquad a_{22}^K = -\sum_{T=1-n}^{N-1} \delta_{2T}\, K(0, T), \qquad \delta_{2T} = \max(0, T+a) - \min(0, T),$$
and
$$K(0, T) = \kappa(X_0, X_T X_{T+a}) = k(0, T, T+a) + \mu\big[k(0, T+a) + k(0, T)\big],$$
by (A1). This gives the $a_{21}$ of (5.8). By (3.15), $a_{11} = -k_1^{11}$.
$a_{32}$ of (5.9) needs $k_2^{112}$ and $k_2^{122}$.
By (4.27) with $r = 3$, $N_1 = N_2 = n$, $N_3 = N = n - a$,
$$\kappa(\hat\mu, \hat\mu, \hat M_{0a}) = (n^2 N)^{-1}\big(n\, a_{32}^{K_1} + a_{33}^{K_1}\big) = \sum_{j=2}^{3} n^{-j}\, k_j^{112}, \qquad (5.12)$$
where $k_j^{112} = (n/N)\, a_{3j}^{K_1}$, $\delta_{3T} = \max(0, T_2, T_3+a) - m_{3T}$, and, by (A3),
$$K_1(0, T_2, T_3) = \kappa(X_0, X_{T_2}, X_{T_3} X_{T_3+a}) = k(0, T_2, T_3, T_3+a) + \mu\, k(0, T_2, T_3+a) + \mu\, k(0, T_2, T_3) + k(0, T_3)\, k(T_2, T_3+a) + k(0, T_3+a)\, k(T_2, T_3).$$
By (4.27) with $r = 3$, $N_1 = n$, $N_2 = N_3 = N = n - a$,
$$\kappa(\hat\mu, \hat M_{0a}, \hat M_{0a}) = (nN^2)^{-1}\big(n\, a_{32}^{K_2} + a_{33}^{K_2}\big) = \sum_{j=2}^{3} n^{-j}\, k_j^{122}, \qquad (5.13)$$
where $k_j^{122} = (n/N)^2\, a_{3j}^{K_2}$ and $\delta_{3T} = \max(0, T_2+a, T_3+a) - \min(0, T_2, T_3)$, and, by (A6),
$$K_2(0, T_2, T_3) = \kappa(X_0, X_{T_2} X_{T_2+a}, X_{T_3} X_{T_3+a}) = \sum_{j=1}^{6} B_j, \quad\text{where } B_1 = k(T_2, T_2+a, T_3, T_3+a, 0),$$
$$B_2/\mu = k(0, T_2, T_2+a, T_3) + k(0, T_2, T_2+a, T_3+a) + k(0, T_2, T_3, T_3+a) + k(0, T_2+a, T_3, T_3+a),$$
$$B_3 = k(T_2, T_2+a, T_3)\, k(0, T_3+a) + k(T_2, T_2+a, T_3+a)\, k(0, T_3) + k(T_2, T_3, T_3+a)\, k(0, T_2+a) + k(T_2+a, T_3, T_3+a)\, k(0, T_2),$$
$$B_4 + B_5 = k(0, T_2, T_3)\big(k(T_2+a, T_3+a) + \mu^2\big) + k(0, T_2, T_3+a)\big(k(T_2+a, T_3) + \mu^2\big) + k(0, T_2+a, T_3)\big(k(T_2, T_3+a) + \mu^2\big) + k(0, T_2+a, T_3+a)\big(k(T_2, T_3) + \mu^2\big),$$
$$B_6/\mu = k(T_2, T_3)\big[k(0, T_2+a) + k(0, T_3+a)\big] + k(T_2, T_3+a)\big[k(0, T_2+a) + k(0, T_3)\big] + k(0, T_2)\big[k(T_2+a, T_3) + k(T_2, T_3)\big] + k(T_2+a, T_3)\, k(0, T_3+a) + k(T_2, T_3)\, k(0, T_3).$$
This completes a 32 of (5.9), giving h 1 , h ¯ 1 , f 1 , g 1 of (2.3)–(2.5).
Next we give $a_{22}$. By (3.17), since all third derivatives of $t$ vanish,
$$a_{22} = \sum_{j=1,2,3,6} a_{22j}, \quad\text{where } a_{221} = 4\mu^2 k_2^{11} - 4\mu k_2^{12} + k_2^{22}, \quad a_{222} = 4\mu k_2^{111}, \quad a_{223} = -2 k_2^{112}, \quad a_{226} = 2 (k_1^{11})^2.$$
Finally we give $a_{43}$. By (3.18), $a_{43}$ is the sum of
$$a_{431} = 16\mu^4 k_3^{1111} - 32\mu^3 k_3^{1112} + 24\mu^2 k_3^{1122} - 8\mu k_3^{1222} + k_3^{2222},$$
$$a_{432} = -24 s_1 \big(4\mu^2 k_2^{111} - 4\mu k_2^{112} + k_2^{122}\big), \qquad s_j = -2\mu k_1^{j1} + k_1^{j2}, \qquad u_1 = -2 s_1, \qquad u_2 = 0,$$
$$a_{433} = 0, \qquad a_{434} = 12\, u_1^2 k_1^{11} = 48\, s_1^2 k_1^{11}.$$
So we need the $k_3^{ijkl}$. Set
$$K_{31} = \kappa(\hat\mu, \hat\mu, \hat\mu, \hat M_{0a}), \qquad K_{22} = \kappa(\hat\mu, \hat\mu, \hat M_{0a}, \hat M_{0a}), \qquad K_{13} = \kappa(\hat\mu, \hat M_{0a}, \hat M_{0a}, \hat M_{0a}).$$
By (4.27) with $r = 4$, $N_1 = N_2 = N_3 = n$, $N_4 = N = n - a$,
$$K_{31} = (n^3 N)^{-1}\big(n\, a_{43}^K + a_{44}^K\big) = \sum_{j=3}^{4} n^{-j}\, k_j^{1112}, \quad\text{where } k_j^{1112} = (n/N)\, a_{4j}^K, \quad \delta_{4T} = \max(0, T_2, T_3, T_4+a) - m_{4T}$$
for $m_{4T}$ of (4.14). By (A10),
$$K(0, T_2, T_3, T_4) = \kappa(X_0, X_{T_2}, X_{T_3}, X_{T_4} X_{T_4+a}) = \sum_{j=1}^{3} E_j, \quad\text{where}$$
$$E_1 = k(0, T_2, T_3, T_4, T_4+a), \qquad E_2/\mu = k(0, T_2, T_3, T_4+a) + k(0, T_2, T_3, T_4),$$
$$E_3 = k(0, T_4)\, k(T_2, T_3, T_4+a) + k(T_2, T_4)\, k(0, T_3, T_4+a) + k(T_3, T_4)\, k(0, T_2, T_4+a) + k(0, T_4+a)\, k(T_2, T_3, T_4) + k(T_2, T_4+a)\, k(0, T_3, T_4) + k(T_3, T_4+a)\, k(0, T_2, T_4).$$
By (4.27) with $r = 4$, $N_1 = N_2 = n$, $N_3 = N_4 = N = n - a$,
$$K_{22} = (nN)^{-2}\big(n\, a_{43}^K + a_{44}^K\big) = \sum_{j=3}^{4} n^{-j}\, k_j^{1122}, \quad\text{where } k_j^{1122} = (n/N)^2\, a_{4j}^K, \quad \delta_{4T} = \max(0, T_2, T_3+a, T_4+a) - m_{4T},$$
and by (A20),
$$K(0, T_2, T_3, T_4) = \sum_{j=1}^{10} F_j, \quad\text{where, for example, } F_1 = k(0, T_2, T_3, T_3+a, T_4, T_4+a),$$
$$F_2/\mu = \sum_{34}^{2}\big[k(0, T_2, T_3+a, T_4, T_4+a) + k(0, T_2, T_3, T_4, T_4+a)\big].$$
By (4.27) with $r = 4$, $N_1 = n$, $N_2 = N_3 = N_4 = N = n - a$,
$$K_{13} = (n N^3)^{-1}\big(n\, a_{43}^K + a_{44}^K\big) = \sum_{j=3}^{4} n^{-j}\, k_j^{1222}, \quad\text{where } k_j^{1222} = (n/N)^3\, a_{4j}^K, \quad \delta_{4T} = \max(0, T_2+a, T_3+a, T_4+a) - m_{4T},$$
and by (A21),
$$K(0, T_2, T_3, T_4) = \sum_{j=1}^{9} G_j, \quad\text{where, for example, } G_1 = k(0, T_2, T_2+a, T_3, T_3+a, T_4, T_4+a),$$
$$G_2/\mu = \sum_{234}^{3}\big[k(0, T_2)\, k(T_2+a, T_3, T_3+a, T_4, T_4+a) + k(0, T_2+a)\, k(T_2, T_3, T_3+a, T_4, T_4+a)\big].$$
This completes $a_{43}$, giving $h_2, \bar h_2, f_2, g_2$ of (2.3)–(2.5).
Example 5.4. 
The eECF expansions to $O(n^{-1})$ for the $a$th sample autocorrelation, assuming that $\mu = 0$.
Take $w_1 = M_{00}$ and $w_2 = M_{0a}$, so $a_1 = 0$, $a_2 = a$. Then $\theta = t(w) = w_2/w_1$ is the $a$th autocorrelation, and $\hat\theta = \hat M_{0a}/\hat M_{00}$ is the $a$th sample autocorrelation. So $\hat M_{00} = \overline{X^2}$, $p = 2$, $a_{21}, a_{11}, a_{32}$ are given by (3.15) and (3.16), and
$$t_1 = -w_2/w_1^2, \qquad t_2 = 1/w_1, \qquad t_{11} = 2 w_2/w_1^3, \qquad t_{12} = -1/w_1^2, \qquad t_{22} = 0.$$
So,
$$a_{21} = \big(\theta^2 k_1^{11} - 2\theta k_1^{12} + k_1^{22}\big)\, w_1^{-2}, \qquad a_{11} = \big(\theta k_1^{11} - k_1^{12}\big)\, w_1^{-2},$$
$$a_{32} = \big(-\theta^3 k_2^{111} + k_2^{222} + 3\theta^2 k_2^{112} - 3\theta k_2^{122} + 6 w_2 s_1^2 - 6 w_1 s_1 s_2\big)\, w_1^{-3},$$
$$s_1 = \big(-\theta k_1^{11} + k_1^{12}\big)\, w_1^{-1}, \qquad s_2 = \big(-\theta k_1^{21} + k_1^{22}\big)\, w_1^{-1}.$$
These need the $k_1^{ij}$ and $k_2^{ijk}$. Here $k_i^{2^r}$ is the $a_{ri}^K$ of Example 5.2. So,
$$k_1^{22} = \sum_{|T| < N} K(0, T) \ \text{of (5.5)}, \qquad k_2^{222} = \sum_{|T_i| < N,\ i=2,3} K(0, T_2, T_3) \ \text{of (5.6)}.$$
$k_i^{1^r}$ is $k_i^{2^r}$ at $a = 0$. So,
$$k_1^{11} = \sum_{|T| < N} K(0, T), \quad\text{where } K(0, T) = k(0, 0, T, T) + 2 k(0, T)^2,$$
$$k_2^{111} = \sum_{|T_i| < N,\ i=2,3} K(0, T_2, T_3), \quad\text{where } K(0, T_2, T_3) = \sum_{j=1,3,5,6,10} C_j,$$
$$C_1 = k(0, 0, T_2, T_2, T_3, T_3), \qquad C_3 = 4 \sum_{23}^{2}\big[k(0, T_2)\, k(0, T_2, T_3, T_3) + k(0, T_3 - T_2)\, k(0, 0, T_2, T_3)\big],$$
$$C_5 = \sum_{23}^{2}\big[2 k(0, 0, T_2)\, k(0, 0, T_2 - T_3) + k(0, T_2, T_2)\, k(0, T_3, T_3)\big], \qquad C_6 = 4 k(0, T_2, T_3)^2, \qquad C_{10} = 8\, k(T_2, T_3)\, k(0, T_2)\, k(0, T_3).$$
This completes $K(0,T_2,T_3)$ needed for $a_{32} = a_{32}^K$ or its limit. We now give $k_1^{12}$, $k_2^{112}$, $k_2^{122}$. By (4.27) with $r = 2$, $N_1 = n$, $N_2 = N = n - a$,
$\kappa(\hat M_{00}, \hat M_{0a}) = (nN)^{-1}(n a_{21}^K + a_{22}^K) = \sum_{j=1}^{2} n^{-j}k_j^{12}$ where $k_j^{12} = (n/N)a_{2j}^K$, and $K(0,T) = \sum_{j=1}^{4} A_j$ by (A2), where $A_1 = k(0,0,T,T{+}a)$, $A_2 = A_4 = 0$, $A_3 = 2k(0,T)k(0,T{+}a)$.
By (4.27) with $r = 3$, $N_1 = N_2 = n$, $N_3 = N = n - a$,
$\kappa(\hat M_{00}, \hat M_{00}, \hat M_{0a}) = (n^2N)^{-1}(n a_{32}^K + a_{33}^K) = \sum_{j=2}^{3} n^{-j}k_j^{112}$ where $k_j^{112} = (n/N)a_{3j}^K$, and $K(0,T_2,T_3) = \kappa(X_0^2, X_{T_2}^2, X_{T_3}X_{T_3+a}) = \sum_{j=1}^{11} C_j$
of Example 5.5. By (4.27) with $r = 3$, $N_1 = n$, $N_2 = N_3 = N = n - a$,
$\kappa(\hat M_{00}, \hat M_{0a}, \hat M_{0a}) = (nN^2)^{-1}(n a_{32}^K + a_{33}^K) = \sum_{j=2}^{3} n^{-j}k_j^{122}$ where $k_j^{122} = (n/N)^2 a_{3j}^K$, and $K(0,T_2,T_3) = \kappa(X_0^2, X_{T_2}X_{T_2+a}, X_{T_3}X_{T_3+a}) = \sum_{j=1}^{11} C_j$ of (A13)
with $(123456)$ identified with $(0, 0, T_2, T_2{+}a, T_3, T_3{+}a)$. Similarly $a_{22}$ and $a_{43}$, needed for the eECF expansions to $O(n^{-3/2})$, can be obtained from (3.17) and (3.18).
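As a purely numerical companion to Example 5.4, the point estimate itself is simple to compute. The sketch below is our illustration (the function name and code are not from the paper): it forms $\hat M_{00}$ and $\hat M_{0a}$ as window averages and returns their ratio, assuming $\mu = 0$ so that raw moments can be used.

```python
def sample_autocorr0(x, a):
    """ath sample autocorrelation of x, assuming the process mean is 0.

    Uses the raw moment estimates of Example 5.4 (notation ours):
    M00_hat = n^{-1} sum_t X_t^2,  M0a_hat = N^{-1} sum_t X_t X_{t+a}
    with N = n - a, and returns the ratio M0a_hat / M00_hat.
    """
    n = len(x)
    if not 0 < a < n:
        raise ValueError("need 0 < a < n")
    m00 = sum(v * v for v in x) / n                        # all n terms
    m0a = sum(x[t] * x[t + a] for t in range(n - a)) / (n - a)  # N terms
    return m0a / m00
```

For $x = (1,2,3,4)$ and $a = 1$ this returns $(20/3)/7.5 = 8/9$.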
Example 5.5. 
The eECF expansions to $O(n^{-1})$ for the $a$th sample autocorrelation, without assuming that $\mu = 0$. Take
$w_1 = \mu$, $w_2 = M_{00}$, $w_3 = M_{0a}$, $\hat w_1 = \bar X$, $\hat w_2 = \overline{X^2}$, $\hat w_3 = \hat M_{0a}$ of (4.8), $D = \mathrm{var}(X_0) = w_2 - w_1^2$, $E = \mathrm{covar}(X_0, X_a) = w_3 - w_1^2$, $\theta = t(w) = \mathrm{covar}(X_0, X_a)/\mathrm{var}(X_0) = E/D$.
So p = 3 and a 21 , a 11 , a 32 are given by (3.19) with
$t_1 = 2\mu(\theta-1)/D$, $t_2 = -\theta D^{-1}$, $t_3 = D^{-1}$, $t_{11} = 2(\theta-1)(w_2 + 3w_1^2)D^{-2}$, $t_{12} = 2w_1(1-2\theta)D^{-2}$, $t_{13} = 2w_1 D^{-2}$, $t_{22} = 2\theta D^{-2}$, $t_{23} = -D^{-2}$, $t_{33} = 0$.
$k_1^{11}$ and $k_2^{111}$ are $a_{21}^K$ and $a_{32}^K$ of (5.1). $k_1^{33}$ and $k_2^{333}$ are $a_{21}^K$ and $a_{32}^K$ of (5.2). $k_1^{22}$ and $k_2^{222}$ are just $k_1^{33}$ and $k_2^{333}$ with $a = 0$. $k_1^{13}$ is $k_1^{12}$ of (5.10), and $k_1^{12}$ is $k_1^{13}$ with $a = 0$. $k_2^{113}$ is $k_2^{112}$ of (5.12), and $k_2^{112}$ is $k_2^{113}$ with $a = 0$. $k_2^{133}$ is $k_2^{122}$ of (5.13), and $k_2^{122}$ is $k_2^{133}$ with $a = 0$. $k_1^{23}$ is $k_1^{0a}$ of Example 4.6 of Withers and Nadarajah (2012). (This corrects the derivatives of $t$ given in Example 4.5 there.) $a_{32}$ also needs $k_2^{123}$, $k_2^{223}$, $k_2^{233}$. Set $N = n - a$. By (4.27) with $r = 3$, $N_1 = N_2 = n$, $N_3 = N = n - a$,
$\kappa(\hat w_1, \hat w_2, \hat w_3) = (n^2N)^{-1}(n a_{32}^K + a_{33}^K) = \sum_{j=2}^{3} n^{-j}k_j^{123}$ where $k_j^{123} = (n/N)a_{3j}^K$, and $K(0,T_2,T_3) = \kappa(X_0, X_{T_2}^2, X_{T_3}X_{T_3+a})$.
Identifying $(12, 34, 5)$ of (A6) with $(T_2, T_2, T_3, T_3{+}a, 0)$ gives
$K(0,T_2,T_3) = \sum_{j=1}^{6} B_j$ where $B_1 = k(0,T_2,T_2,T_3,T_3{+}a)$, $B_2 = \mu[k(0,T_2,T_2,T_3) + k(0,T_2,T_2,T_3{+}a) + 2k(0,T_2,T_3,T_3{+}a)]$, $B_3 = k(T_2,T_2,T_3)k(0,T_3{+}a) + k(T_2,T_2,T_3{+}a)k(0,T_3) + 2k(T_2,T_3,T_3{+}a)k(0,T_2)$, $B_4 = 2k(T_2,T_3)k(0,T_2,T_3{+}a) + 2k(T_2,T_3{+}a)k(0,T_2,T_3)$, $B_5 = 2\mu^2[k(0,T_2,T_3) + k(0,T_2,T_3{+}a)]$, $B_6 = 2\mu k(0,T_2)[k(T_2,T_3) + k(T_2,T_3{+}a)] + \mu k(0,T_3)[k(0,0) + k(T_2,T_3{+}a)] + \mu k(0,T_3{+}a)[k(0,0) + k(T_2,T_3)]$.
This gives $k_2^{123}$. Next we give $k_2^{223}$. By (4.27) with $r = 3$, $N_1 = N_2 = n$, $N_3 = N = n - a$,
$\kappa(\hat w_2, \hat w_2, \hat w_3) = (n^2N)^{-1}(n a_{32}^K + a_{33}^K) = \sum_{j=2}^{3} n^{-j}k_j^{223}$ where $k_j^{223} = (n/N)a_{3j}^K$, and $K(0,T_2,T_3) = \kappa(X_0^2, X_{T_2}^2, X_{T_3}X_{T_3+a})$.
Identifying $(12, 34, 56)$ of (A13) with $(0, 0, T_2, T_2, T_3, T_3{+}a)$ gives
$K(0,T_2,T_3) = \kappa(X_0^2, X_{T_2}^2, X_{T_3}X_{T_3+a}) = \sum_{j=1}^{11} C_j$, $C_1 = k(0,0,T_2,T_2,T_3,T_3{+}a)$, $C_2 = \mu[k(0,0,T_2,T_2,T_3) + k(0,0,T_2,T_2,T_3{+}a) + 2k(0,0,T_2,T_3,T_3{+}a) + 2k(0,T_2,T_2,T_3,T_3{+}a)]$, $C_3 = 4k(0,T_2)k(0,T_2,T_3,T_3{+}a) + 2k(0,T_3)k(0,T_2,T_2,T_3{+}a) + 2k(0,T_3{+}a)k(0,T_2,T_2,T_3) + 2k(T_2,T_3)k(0,0,T_2,T_3{+}a) + 2k(T_2,T_3{+}a)k(0,0,T_2,T_3)$,
$C_4 = \mu^2[4k(0,T_2,T_3,T_3{+}a) + 2k(0,T_2,T_2,T_3{+}a) + 2k(0,T_2,T_2,T_3) + 2k(0,0,T_2,T_3{+}a) + 2k(0,0,T_2,T_3)]$, $C_5 = 2k(0,0,T_2)k(T_2,T_3,T_3{+}a) + k(0,0,T_3)k(T_2,T_2,T_3{+}a) + k(0,0,T_3{+}a)k(T_2,T_2,T_3) + 2k(0,T_2,T_2)k(0,T_3,T_3{+}a)$, $C_6 = 4k(0,T_2,T_3)k(0,T_2,T_3{+}a)$, $C_7/\mu = 2k(T_2,T_3)k(0,0,T_2) + 2k(T_2,T_3{+}a)k(0,0,T_2) + 2k(0,T_3)k(0,T_2,T_2) + 2k(0,T_3{+}a)k(0,T_2,T_2) + 2k(T_2,T_3{+}a)k(0,0,T_3) + 2k(T_2,T_3)k(0,0,T_3{+}a) + 2k(0,T_3)k(T_2,T_2,T_3{+}a) + 2k(0,T_3{+}a)k(T_2,T_2,T_3) + 4k(0,T_2)k(0,T_3,T_3{+}a) + 4k(0,T_2)k(T_2,T_3,T_3{+}a)$, $C_8/(4\mu) = k(0,T_2,T_3)[k(0,T_2) + k(0,T_3{+}a) + k(T_2,T_3{+}a)] + k(0,T_2,T_3{+}a)[k(0,T_2) + k(0,T_3) + k(T_2,T_3)]$, $C_9 = 4\mu^3[k(0,T_2,T_3) + k(0,T_2,T_3{+}a)]$, $C_{10} = 4k(0,T_2)[k(0,T_3)k(T_2,T_3{+}a) + k(0,T_3{+}a)k(T_2,T_3)]$, $C_{11}/(4\mu^2) = k(0,T_2)[k(0,T_3) + k(0,T_3{+}a) + k(T_2,T_3) + k(T_2,T_3{+}a)] + k(0,T_3)k(T_2,T_3{+}a) + k(0,T_3{+}a)k(T_2,T_3)$.
Finally $a_{32}$ needs $k_2^{233}$. By (4.27) with $r = 3$, $N_1 = n$, $N_2 = N_3 = N = n - a$,
$\kappa(\hat w_2, \hat w_3, \hat w_3) = (nN^2)^{-1}(n a_{32}^K + a_{33}^K) = \sum_{j=2}^{3} n^{-j}k_j^{233}$ where $k_j^{233} = (n/N)^2 a_{3j}^K$, and $K(0,T_2,T_3) = \kappa(X_0^2, X_{T_2}X_{T_2+a}, X_{T_3}X_{T_3+a})$.
Identifying $(12, 34, 56)$ of (A13) with $(0, 0, T_2, T_2{+}a, T_3, T_3{+}a)$ gives
$K(0,T_2,T_3) = \kappa(X_0^2, X_{T_2}X_{T_2+a}, X_{T_3}X_{T_3+a}) = \sum_{j=1}^{11} C_j$, $C_1 = k(0,0,T_2,T_2{+}a,T_3,T_3{+}a)$, $C_2 = \mu[k(0,0,T_2,T_2{+}a,T_3) + k(0,0,T_2,T_2{+}a,T_3{+}a) + k(0,0,T_2,T_3,T_3{+}a) + k(0,0,T_2{+}a,T_3,T_3{+}a) + 2k(0,T_2,T_2{+}a,T_3,T_3{+}a)]$, $C_3 = 2k(0,T_2)k(0,T_2{+}a,T_3,T_3{+}a) + 2k(0,T_2{+}a)k(0,T_2,T_3,T_3{+}a) + 2k(0,T_3)k(0,T_2,T_2{+}a,T_3{+}a) + 2k(0,T_3{+}a)k(0,T_2,T_2{+}a,T_3) + k(T_2,T_3)k(0,0,T_2{+}a,T_3{+}a) + k(T_2,T_3{+}a)k(0,0,T_2{+}a,T_3) + k(T_2{+}a,T_3)k(0,0,T_2,T_3{+}a) + k(T_2{+}a,T_3{+}a)k(0,0,T_2,T_3)$,
$C_4/\mu^2 = 2k(0,T_2{+}a,T_3,T_3{+}a) + 2k(0,T_2,T_3,T_3{+}a) + 2k(0,T_2,T_2{+}a,T_3{+}a) + 2k(0,T_2,T_2{+}a,T_3) + k(0,0,T_2{+}a,T_3{+}a) + k(0,0,T_2{+}a,T_3) + k(0,0,T_2,T_3{+}a) + k(0,0,T_2,T_3)$,
and similarly for C 5 , , C 11 .
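For Example 5.5, the corresponding plug-in estimate no longer assumes $\mu = 0$: it centres by $\hat w_1 = \bar X$. A minimal sketch (ours — it computes only the point estimate $\hat\theta = \hat E/\hat D$, not the expansion coefficients):

```python
def sample_autocorr(x, a):
    """Plug-in ath sample autocorrelation without assuming mu = 0.

    Follows the parametrisation of Example 5.5: w1 = mu, w2 = M00,
    w3 = M0a, D = w2 - w1^2, E = w3 - w1^2, theta = E / D.
    """
    n = len(x)
    w1 = sum(x) / n                                   # X bar
    w2 = sum(v * v for v in x) / n                    # mean of squares
    w3 = sum(x[t] * x[t + a] for t in range(n - a)) / (n - a)
    D = w2 - w1 * w1                                  # variance estimate
    E = w3 - w1 * w1                                  # covariance estimate
    return E / D
```

For $x = (1,2,3,4)$ and $a = 1$ this gives $(5/12)/(5/4) = 1/3$, noticeably different from the uncentred value $8/9$.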
Example 5.6. 
Fix $a_1 \ge 0$ and $a_2 \ge 0$. For $i = 1, 2$, set $N_i = n - a_i > 0$. We give $a_{21}$, needed for the eECF expansions for $\hat\theta = t(\hat w)$, when
$\theta = t(w) = \kappa(X_0X_{a_1}, X_0X_{a_2}) = w_3 - w_1w_2$, $w_1 = M_{0a_1}$, $w_2 = M_{0a_2}$, $w_3 = M_{00a_1a_2} = E\,X_0^2X_{a_1}X_{a_2}$. So $\hat w_3 = N_3^{-1}\sum_{j=1}^{N_3} X_j^2X_{j+a_1}X_{j+a_2}$, where $N_3 = \min(N_1, N_2)$.
So p = 3 , and a 21 , a 11 , a 32 are given by (3.19) with non-zero derivatives
$t_1 = -w_2$, $t_2 = -w_1$, $t_3 = 1$, $t_{12} = -1$.
The eECF expansion for $\hat\theta = t(\hat w)$ to $O(n^{-1/2})$ is given by $a_{21}$. By (3.19), $a_{21}$ needs $k_1^{ij}$. For $i = 1, 2$, $(k_1^{ii}, k_2^{iii}) = (a_{21}^K, a_{32}^K)$ of (5.4) with $a = a_i$. By (4.27) with $r = 2$,
$\kappa(\hat w_1, \hat w_2) = (N_1N_2)^{-1}(n a_{21}^K + a_{22}^K) = \sum_{j=1}^{2} n^{-j}k_j^{12}$ where $k_j^{12} = (n^2/N_1N_2)a_{2j}^K$, and $K(0,T) = \kappa(X_0X_{a_1}, X_TX_{T+a_2}) = \sum_{j=1}^{4} A_j$ by (A2), where $A_1 = k(0,a_1,T,T{+}a_2)$, $A_2/\mu = k(a_1,T,T{+}a_2) + k(0,T,T{+}a_2) + k(0,a_1,T{+}a_2) + k(0,a_1,T)$, $A_3 = k(0,T)k(a_1,T{+}a_2) + k(0,T{+}a_2)k(a_1,T)$, $A_4/\mu^2 = k(a_1,T{+}a_2) + k(a_1,T) + k(0,T{+}a_2) + k(0,T)$.
By (4.27), for i = 1 , 2 ,
$\kappa(\hat w_i, \hat w_3) = (N_iN_3)^{-1}(n a_{21}^K + a_{22}^K) = \sum_{j=1}^{2} n^{-j}k_j^{i3}$ where $k_j^{i3} = (n^2/N_iN_3)a_{2j}^K$, and $K(0,T) = \kappa(X_0X_{a_i}, X_T^2X_{T+a_1}X_{T+a_2}) = \sum_{j=1}^{19} A_j$ by (A11), where for example
$A_1 = k(0,a_i,T,T,T{+}a_1,T{+}a_2)$, $A_2/\mu = k(0,T,T,T{+}a_1,T{+}a_2) + k(a_i,T,T,T{+}a_1,T{+}a_2)$, $A_3/\mu = k(0,a_i,T,T,T{+}a_1) + k(0,a_i,T,T,T{+}a_2) + 2k(0,a_i,T,T{+}a_1,T{+}a_2)$.
By (4.27),
$\kappa_2(\hat w_3) = N_3^{-2}(n a_{21}^K + a_{22}^K) = \sum_{j=1}^{2} n^{-j}k_j^{33}$ where $k_j^{33} = (n/N_3)^2 a_{2j}^K$, and by (A22), $K(0,T) = \kappa(X_0^2X_{a_1}X_{a_2}, X_T^2X_{T+a_1}X_{T+a_2}) = \sum_{j=1}^{16} H_j$ where, for example, $H_1 = k(0,0,a_1,a_2,T,T,T{+}a_1,T{+}a_2)$.
This gives $k_1^{33}$; $a_{21}$ is now given by (3.19). However, $a_{32}$ needs the joint cumulants of length 9, which are not given in McCullagh (1987).
Example 5.7. 
Fix $a_1$, $a_2$. Let us find $a_{21}$ for $w = M_{0a_1a_2}$, $\hat w = \hat M_{0a_1a_2}$ of (4.6):
$\hat w = N^{-1}\sum_{t=1}^{N} X_{t+I_1}\cdots X_{t+I_r}$ where $N = n - I_0$,
and $I_0 = \max(0,a_1,a_2) - \min(0,a_1,a_2)$. By Corollary 4.2, we can take
$a_{21}^K = (n/N)^2\sum\{K^*(0,T): |T| < N\}$ where $K^*(0,T) = \kappa(X_0X_{a_1}X_{a_2},\ X_TX_{T+a_1}X_{T+a_2}) = \sum_{i=1}^{15} A_i$
is given by identifying $0, a_1, a_2, T, T{+}a_1, T{+}a_2$ with $1, \dots, 6$ in (A12). But it has 178 terms in all, so software is needed.
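The count of 178 terms can be verified mechanically: a set partition of $\{1,\dots,6\}$ contributes to $\kappa(X_0X_{a_1}X_{a_2}, X_TX_{T+a_1}X_{T+a_2})$ exactly when it connects the two index blocks $\{1,2,3\}$ and $\{4,5,6\}$, i.e. when some block meets both sides; that leaves $203 - 5\times5 = 178$ of the Bell(6) $= 203$ partitions. The short enumeration below is our illustration:

```python
def set_partitions(elems):
    """Generate all set partitions of the list elems (Bell(n) of them)."""
    if not elems:
        yield []
        return
    head, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):          # put head into an existing block
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part               # or start a new block

left, right = {1, 2, 3}, {4, 5, 6}
parts = list(set_partitions([1, 2, 3, 4, 5, 6]))
# with only two index blocks, a partition connects them iff
# at least one of its blocks meets both sides
connecting = [p for p in parts
              if any(set(b) & left and set(b) & right for b in p)]
print(len(parts), len(connecting))   # prints: 203 178
```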
Example 5.8. 
Take p = 5 ,
$w_1 = \mu$, $w_2 = M_{0a_1}$, $w_3 = M_{0a_2}$, $w_4 = M_{a_1a_2}$, $w_5 = M_{0a_1a_2}$, $\theta = t(w) = \kappa(X_0, X_{a_1}, X_{a_2}) = w_5 - w_1(w_2 + w_3 + w_4) + 2w_1^3$. So $t_1 = 6w_1^2 - w_2 - w_3 - w_4$, $t_2 = t_3 = t_4 = -w_1$, $t_5 = 1$.
$a_{21}$ is given by (3.9) in terms of $k_1^{ij}$, $i, j = 1, \dots, 5$. By Example 5.1, $k_1^{11} = \sum\{k(0,T): |T| < n\}$. For $a \ge 0$, set $k_1^{22}(a) = a_{21}$ of (5.8). Then $k_1^{22} = k_1^{22}(|a_1|)$, $k_1^{33} = k_1^{22}(|a_2|)$, $k_1^{44} = k_1^{22}(|a_1 - a_2|)$, and $k_1^{55} = a_{21}$ of Example 5.7. Set $k_1^{12}(a) = (n/N)\sum\{K(0,T): -n < T < N\}$ for $N = n - |a|$ and $K(0,T)$ of (5.11). Then $k_1^{12} = k_1^{12}(|a_1|)$, $k_1^{13} = k_1^{12}(|a_2|)$, $k_1^{14} = k_1^{12}(|a_1 - a_2|)$. By (4.27), for $N$ of (5.15), $k_1^{15} = (n/N)\sum\{K(0,T): -n < T < N\}$, where now $K(0,T) = \kappa(X_0, X_TX_{T+a_1}X_{T+a_2}) = M_{0,T,T+a_1,T+a_2} - \mu M_{0a_1a_2}$.
$k_1^{23}$ is $k_1^{12}$ of Example 5.6. $k_1^{24}$ needs $\kappa(\hat M_{0a_1}, \hat M_{a_1a_2})$, a special case of
$\kappa(\hat M_{0a_1}, \hat M_{a_2a_3}) = \sum_{j=1}^{2} n^{-j}k_j^{24}$, where $k_j^{24} = (n^2/N_2N_4)a_{2j}^K$, $N_2 = n - |a_1|$, $N_4 = n - |a_2 - a_3|$,
$K(0,T) = \kappa(X_0X_{a_1}, X_{T+a_2}X_{T+a_3}) = \sum_{j=1}^{4} A_j$
by (A2), given by identifying $(0, a_1, T{+}a_2, T{+}a_3)$ with $(1, 2, 3, 4)$. By (4.27),
$\kappa(\hat\mu, \hat M_{0a_1a_2}) = (n/N)\sum_{-n<T<N} K^*(0,T)$ where $N = n - \max(0,a_1,a_2) + \min(0,a_1,a_2)$ and $K^*(0,T) = \kappa(X_0, X_TX_{T+a_1}X_{T+a_2})$,
given by identifying $0, T, T{+}a_1, T{+}a_2$ with $1, 2, 3, 4$ in (A4).
Next we obtain $k_1^{34}$, $k_1^{35}$, $k_1^{45}$. $k_1^{34}$ is a special case of (5.16). $k_1^{35}$ is covered by $k_1^{25}$ of (5.18). $k_1^{45}$ is a special case of the lead coefficient in $\kappa(\hat M_{a_1a_2}, \hat M_{a_3a_4a_5})$. In this case, for $N_1 = n - \max(a_1,a_2) + \min(a_1,a_2)$ and $N_2 = n - \max(a_3,a_4,a_5) + \min(a_3,a_4,a_5)$,
$k_1^{45} = (n^2/N_1N_2)\sum_{-N_1<T<N_2} K^*(0,T)$ where $K^*(0,T) = \kappa(X_{a_1}X_{a_2}, X_{T+a_3}X_{T+a_4}X_{T+a_5})$,
is given by identifying $a_1, a_2, T{+}a_3, T{+}a_4, T{+}a_5$ with $1, \dots, 5$ in (A5). This completes the terms needed for $a_{21}$.
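The point estimate of Example 5.8 is again easy to compute directly. The sketch below is ours (the helper names are ours, and we use $\hat M_{a_1a_2} = \hat M_{0,|a_2-a_1|}$ by stationarity):

```python
def m_hat(x, a):
    """Unbiased estimate of M_{0a} = E X_0 X_a over N = n - a windows."""
    n = len(x)
    return sum(x[t] * x[t + a] for t in range(n - a)) / (n - a)

def third_cumulant_hat(x, a1, a2):
    """Plug-in estimate of kappa(X_0, X_{a1}, X_{a2}) for 0 <= a1, a2,
    via theta = w5 - w1*(w2 + w3 + w4) + 2*w1**3 of Example 5.8."""
    n = len(x)
    w1 = sum(x) / n                         # mu hat
    w2 = m_hat(x, a1)                       # M_{0 a1}
    w3 = m_hat(x, a2)                       # M_{0 a2}
    w4 = m_hat(x, abs(a2 - a1))             # M_{a1 a2} by stationarity
    N = n - max(a1, a2)                     # windows for the triple product
    w5 = sum(x[t] * x[t + a1] * x[t + a2] for t in range(N)) / N
    return w5 - w1 * (w2 + w3 + w4) + 2 * w1 ** 3
```

For $x = (1,2,3,4,5)$, $a_1 = 1$, $a_2 = 2$ this gives $30 - 3(10 + 26/3 + 10) + 54 = -2$.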
Note how Example 5.5 built on Example 5.1, Example 5.6 built on Example 5.5, and Example 5.8 built on Example 5.7. So the above results form a catalogue upon which other examples can build.
Example 5.1 gave $a_{21} = k_1^{11}$, $a_{32} = k_2^{111}$, $a_{22} = k_2^{11}$, $a_{43} = k_3^{1111}$ for $\hat\mu$.
Example 5.3 gave $a_{21} = k_1^{11}$, $a_{32} = k_2^{111}$ for $\hat M_{0a}$.
Example 5.5 gave $k_1^{ij}$, $k_2^{ijk}$ for $\hat w_1 = \bar X$, $\hat w_2 = \overline{X^2}$, $\hat w_3 = \hat M_{0a}$.
Example 5.6 gave $k_1^{ij}$ for $\hat w_1 = \hat M_{0a_1}$, $\hat w_2 = \hat M_{0a_2}$, $\hat w_3 = \hat M_{00a_1a_2}$.
Example 5.7 gave $a_{21} = k_1^{11}$ for $\hat w_1 = \hat M_{0a_1a_2}$.
Example 5.8 gave $a_{21} = k_1^{11}$ for $\kappa(X_0, X_{a_1}, X_{a_2})$.

6. Multivariate Stationary Processes

Suppose that $\dots, X_{-1}, X_0, X_1, \dots$ lie in $R^p$ and are stationary with finite moments. For $j = 1, \dots, p$, denote the $j$th component of $X_i$ by $X_i^j$, and the cross-moments and cross-cumulants by
$\mu = E X_0$, $\mu^j = E X_0^j$, $M_{i_1\cdots i_r}^{j_1\cdots j_r} = E\, X_{i_1}^{j_1}\cdots X_{i_r}^{j_r}$, $\mu_{i_1\cdots i_r}^{j_1\cdots j_r} = E\,(X_{i_1}^{j_1} - \mu^{j_1})\cdots(X_{i_r}^{j_r} - \mu^{j_r})$, $k_{j_1\cdots j_r}^{i_1\cdots i_r} = \kappa(X_{i_1}^{j_1}, \dots, X_{i_r}^{j_r})$.
Given a sequence of integers $i_1, \dots, i_r$, define $i_0$, $I_k$ as in (4.3). Then (4.4) becomes
$M_{i_1\cdots i_r}^{j_1\cdots j_r} = M_{I_1\cdots I_r}^{j_1\cdots j_r}$, $\mu_{i_1\cdots i_r}^{j_1\cdots j_r} = \mu_{I_1\cdots I_r}^{j_1\cdots j_r}$, $k_{j_1\cdots j_r}^{i_1\cdots i_r} = k_{j_1\cdots j_r}^{I_1\cdots I_r}$.
However, in general $k_{j_1 j_2}^{0,I} \ne k_{j_1 j_2}^{0,-I}$. An unbiased estimate of $M_{i_1\cdots i_r}^{j_1\cdots j_r}$ is
$\hat M_{i_1\cdots i_r}^{j_1\cdots j_r} = N^{-1}\sum_{t=1}^{N} X_{t+I_1}^{j_1}\cdots X_{t+I_r}^{j_r}$ for $N = n - I_0 > 0$.
We can abbreviate two sequences of integers of the same length $r$ as $\pi = (i_1\cdots i_r)$, $\tau = (j_1\cdots j_r)$. Given $p \ge 1$ and finite sequences of integers $\pi_1, \dots, \pi_p$ and $\tau_1, \dots, \tau_p$ not depending on $n$, with $\tau_i$ of the same length as $\pi_i$,
set $w_a = M_{\pi_a}^{\tau_a}$, $\hat w_a = \hat M_{\pi_a}^{\tau_a}$, $w = (w_1, \dots, w_p)$, $\hat w = (\hat w_1, \dots, \hat w_p)$. Then $\kappa(\hat w_{a_1}, \dots, \hat w_{a_r}) = \sum_{j=r-1}^{r} n^{-j} k_j^{a_1\cdots a_r}$ for $a_1, \dots, a_r \in \{1, \dots, p\}$.
Let $\theta = t(w): R^p \to R^q$ be a function with finite derivatives (3.2) at $w$. Then $\hat\theta = t(\hat w)$ satisfies (3.3), and $Y_n = n^{1/2}(\hat w - w)$ and $Z_n = n^{1/2}(\hat\theta - \theta)$ have multivariate Edgeworth expansions. The results of Section 5 apply with $K(t_1 \cdots t_r)$ replaced by its extension $K_{j_1\cdots j_r}^{t_1\cdots t_r}$.
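The estimator $\hat M_{i_1\cdots i_r}^{j_1\cdots j_r}$ can be transcribed directly. The sketch below is our illustration (0-based component indices and the shift of the lags by their minimum are our conventions):

```python
def cross_moment_hat(X, lags, comps):
    """Estimate M^{j1...jr}_{i1...ir} = E X_{i1}^{j1} ... X_{ir}^{jr}
    for a stationary sequence X[0..n-1] of p-vectors.

    lags = (i_1, ..., i_r), comps = (j_1, ..., j_r), comps 0-based.
    """
    n = len(X)
    lo = min(lags)
    I = [i - lo for i in lags]        # shifted lags, min(I) = 0
    I0 = max(I)
    N = n - I0                        # number of available windows
    if N <= 0:
        raise ValueError("sample too short for these lags")
    total = 0.0
    for t in range(N):
        prod = 1.0
        for k, j in zip(I, comps):
            prod *= X[t + k][j]
        total += prod
    return total / N
```

For instance, with $X = ((1,2),(3,4),(5,6))$, lags $(0,1)$ and components $(1,2)$ give the average of $X_t^1 X_{t+1}^2$ over $N = 2$ windows, namely $(1\cdot4 + 3\cdot6)/2 = 11$.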

7. Discussion

For the mean of a random sample, the coefficients of the Edgeworth expansions are functions of the cumulants. However, when the observations are dependent, the usual unbiased estimate of $\mathrm{var}(X_0)$ fails, so we use empirical estimates. The role of the empirical distribution is eclipsed by having to deal with the cross-cumulants.
Winterbottom (1979, 1980, 1984) showed that Cornish-Fisher expansions gave substantial improvements to the Central Limit Theorem, even for lattice estimates like the binomial and the Poisson. Phillips (1987) gave an extension to nonstationary processes. Bose (1988) obtained an Edgeworth correction by bootstrap in autoregressions. Loh (1996) gave an Edgeworth expansion for U-statistics with dependent observations. Kitamura (1997) used an Edgeworth expansion to study empirical likelihood methods for dependent processes. Lieberman et al. (2001) gave an Edgeworth expansion for the sample autocorrelation function under long range dependence. For a review of the book by Taniguchi and Kakizawa on Edgeworth and saddlepoint expansions for time series, see Lieberman (2002). Andrews and Lieberman (2005) gave an Edgeworth expansion for the maximum likelihood estimator for stationary long-memory Gaussian time series. Lieberman et al. (2003) gave expansions for the maximum likelihood estimator for a stationary Gaussian process.
However, until now there has been no non-parametric theory for the distribution of the general standard estimate based on a sample from a stationary process. Consequently, econometrics, the foremost user of time series, has been dominated by software constructing estimates for parametric autoregressive (AR), moving average (MA), ARMA, and integrated ARMA processes. But parametric models suffer from a severe drawback: if the model is wrong, the results will generally not even give 1st order (Central Limit Theorem) accuracy. This is where a non-parametric method is far superior. Software is still needed when the required $a_{ri}$ of (2.1) have a large number of terms.
Future directions.
1. These results can easily be extended to both weighted estimates, and estimates based on missing data, as done in Withers (2025b).
2. Lahiri (2003) gave a 21 and a Central Limit Theorem for weighted sums of a spatial process. It should not be too hard to extend this to give the other leading a r j .
3. Software to implement the results of Section 3 for arbitrary t ( w ) would also be very useful.
4. There is a need to extend the results of McCullagh (1987), and to spell out his succinct notation as done in the appendix below. As noted, his results cover a 43 for κ 4 ( M ^ i 1 i 2 ) but not a 32 for κ 3 ( M ^ i 1 i 2 i 3 ) .
5. To extend these results to confidence intervals for a general function of cross-cumulants, say $t(w)$, would be a huge advance. The 1st step is to extend them to Edgeworth expansions for its Studentized form, $\hat\theta = (t(\hat w) - t(w))/\hat a_{21}^{1/2}$, where $\hat a_{21} = \sum\{\hat K(0,T): |T| < l_n\}$ is a consistent estimate of $a_{21}$ of (4.21), and one can take $l_n = K n^{1/2}$ for some $K > 0$. Lahiri (2010) gave Edgeworth expansions for the case $t(w) = \mu$, so that $K(0,T) = \kappa(X_0, X_T)$ and $\hat K(0,T) = \hat M_{0T}$ of Example 5.3. The expansions are a superposition of three distinct series: one in powers of $n^{-1/2}$, one in powers of $(n/l_n)^{-1/2}$ (resulting from the standard error of the Studentizing factor), and one in powers of the bias of $\hat a_{21}$. But this bias is typically exponentially small, taking us to the choice $l_n = K n^{1/2}$.
6. It may be useful to replace our empirical estimate of the cross-cumulant by estimates with lower bias. While a jack-knife or bootstrap could be used, analytic results are more easily obtained by adjusting it with an estimate of its bias.
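To illustrate point 5 in the simplest case $t(w) = \mu$: the consistent estimate $\hat a_{21}$ is a truncated sum of empirical autocovariances with truncation point $l_n = Kn^{1/2}$. The sketch below is ours; the centring at $\bar X$ and the divisor $n$ in the autocovariances are our (standard but not unique) choices:

```python
def a21_hat(x, K=1.0):
    """Truncated-kernel estimate of a21 = sum_T kappa(X_0, X_T) for the
    sample mean: sum the empirical autocovariances over |T| < l_n,
    with truncation point l_n = K * n**0.5 as suggested in the text."""
    n = len(x)
    xbar = sum(x) / n
    d = [v - xbar for v in x]

    def gamma(T):                      # empirical autocovariance at lag T >= 0
        return sum(d[t] * d[t + T] for t in range(n - T)) / n

    l = int(K * n ** 0.5)
    # gamma(-T) = gamma(T), so the two-sided sum over |T| < l is
    # gamma(0) + 2 * sum_{0 < T < l} gamma(T)
    return gamma(0) + 2.0 * sum(gamma(T) for T in range(1, max(l, 1)))
```

For $x = (1,2,3,4)$ and $K = 1$, $l_n = 2$ and the estimate is $\hat\gamma(0) + 2\hat\gamma(1) = 1.25 + 0.625 = 1.875$.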

Appendix A. Some results of McCullagh (1987)

Here we spell out some results from pp. 254–265 of McCullagh (1987). His results are not specified clearly enough for direct application. For example, the 12 terms that we have written out in full for $C_3$ below are merely denoted $1235|46\,[4][3]$. For applications like ours, all such terms need to be made explicit. McCullagh and Wilks (1988) give the software used to obtain his results. It should be possible to extend this using Matlab in a way that can easily be applied to examples like ours. We have not been able to access McCullagh and Wilks (1985). Set
$\kappa_{1,2,\dots,r} = \kappa(X_1, X_2, \dots, X_r)$, $\quad \kappa_{12,345,\dots} = \kappa(X_1X_2,\ X_3X_4X_5,\ \dots)$,
and so on. McCullagh denotes these by $1|2|\cdots|r$ and $12|345|\cdots$. His formulas give the cross-cumulants like $\kappa_{12,345,\dots}$ in terms of the joint cumulants $\kappa_{1,\dots,r}$.
By $12|3$, p. 254: $\kappa_{1,23} = \kappa_{1,2,3} + \kappa_{1,3}\kappa_2 + \kappa_{1,2}\kappa_3$. By $12|34$, p. 254: $\kappa_{12,34} = \kappa_{1,2,3,4} + 4\,[\kappa_1\kappa_{2,3,4}] + 2\,[\kappa_{1,3}\kappa_{2,4}] + 4\,[\kappa_1\kappa_3\kappa_{2,4}] = 1234 + 4\,[1.234] + 2\,[13.24] + 4\,[1.3.24] = \sum_{j=1}^{4} A_j$ say.
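Identities like $\kappa_{1,23} = \kappa_{1,2,3} + \kappa_{1,3}\kappa_2 + \kappa_{1,2}\kappa_3$ can be checked in exact arithmetic on a small discrete distribution; the three-point joint distribution for $(X_1, X_2, X_3)$ below is an arbitrary test case of ours:

```python
from fractions import Fraction as F

# an arbitrary 3-point joint distribution for (X1, X2, X3)
pts = [((1, 2, 3), F(1, 2)), ((2, 1, 1), F(1, 4)), ((0, 3, 2), F(1, 4))]

def E(f):
    """Exact expectation of f(x1, x2, x3)."""
    return sum(p * f(*x) for x, p in pts)

m1, m2, m3 = E(lambda a, b, c: a), E(lambda a, b, c: b), E(lambda a, b, c: c)
# left side: kappa(X1, X2*X3) = cov(X1, X2*X3)
lhs = E(lambda a, b, c: a * b * c) - m1 * E(lambda a, b, c: b * c)
# right side: kappa(X1,X2,X3) + kappa(X1,X3)*kappa(X2) + kappa(X1,X2)*kappa(X3)
k123 = (E(lambda a, b, c: a * b * c)
        - m1 * E(lambda a, b, c: b * c)
        - m2 * E(lambda a, b, c: a * c)
        - m3 * E(lambda a, b, c: a * b)
        + 2 * m1 * m2 * m3)
k13 = E(lambda a, b, c: a * c) - m1 * m3
k12 = E(lambda a, b, c: a * b) - m1 * m2
rhs = k123 + k13 * m2 + k12 * m3
print(lhs, rhs)   # prints: -5/4 -5/4
```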
By $12|3|4$, p. 254: $\kappa_{12,3,4} = \kappa_{1,2,3,4} + 2\,[\kappa_2\kappa_{1,3,4}] + 2\,[\kappa_{1,3}\kappa_{2,4}]$.
By $123|4$, p. 254: $\kappa_{123,4} = \sum_{i=1}^{4} A_i$ where $A_1 = \kappa_{1,2,3,4}$, $A_2 = 3\,[\kappa_{1,2,4}\kappa_3]$, $A_3 = 3\,[\kappa_{1,2}\kappa_{3,4}]$, $A_4 = 3\,[\kappa_{1,4}\kappa_2\kappa_3]$.
By $123|45$, p. 255: $\kappa_{123,45} = \sum_{i=1}^{10} A_i$ where $A_1 = \kappa_{1,\dots,5}$, $A_2 = 2\,[\kappa_{1,2,3,4}\kappa_5]$, $A_3 = 3\,[\kappa_{1,2,4,5}\kappa_3]$, $A_4 = 6\,[\kappa_{1,2,4}\kappa_{3,5}]$, $A_5 = 3\,[\kappa_{1,4,5}\kappa_{2,3}]$, $A_6 = 6\,[\kappa_{1,2,4}\kappa_3\kappa_5]$, $A_7 = 3\,[\kappa_{1,4,5}\kappa_2\kappa_3]$, $A_8 = 6\,[\kappa_{1,2}\kappa_{3,4}\kappa_5]$, $A_9 = 6\,[\kappa_{1,4}\kappa_{2,5}\kappa_3]$, $A_{10} = 6\,[\kappa_{1,4}\kappa_2\kappa_3\kappa_5]$.
By p. 255: $\kappa_{12,34,5} = 12345 + [4] + [4] + [4] + [4] + [8] = \sum_{j=1}^{6} B_j$, where, writing $1235.4$ for $\kappa_{1,2,3,5}\kappa_4$ and so on, $B_1 = 12345 = \kappa_{1,2,3,4,5}$, $B_2 = 1235.4 + 1245.3 + 1345.2 + 2345.1$, $B_3 = 123.45 + 124.35 + 134.25 + 234.15$, $B_4 = 13.245 + 14.235 + 23.145 + 24.135$, $B_5 = 1.3.245 + 1.4.235 + 2.3.145 + 2.4.135$, $B_6 = 15(23.4 + 24.3) + 25(13.4 + 14.3) + 35(14.2 + 24.1) + 45(13.2 + 23.1)$.
By p. 255: $\kappa_{1,2,3,45} = \sum_{j=1}^{3} E_j$ where $E_1 = 12345$, $E_2 = 4.1235 + 5.1234$, $E_3 = 41.235 + 42.135 + 43.125 + 51.234 + 52.134 + 53.124$. By $1234|56$, p. 256: $\kappa_{1234,56} = \sum_{j=1}^{19} A_j$ where, for example, $A_1 = 123456$, $A_2 = 12345.6 + 12346.5$, $A_3 = [4]\,12356.4 = 1.23456 + 2.13456 + 3.12456 + 4.12356$.
By $123|456$, p. 255: $\kappa_{123,456} = \sum_{i=1}^{15} A_i$ where $A_1 = \kappa_{1,\dots,6}$, $A_2 = 6\,[\kappa_{1,2,3,4,5}\kappa_6]$, $A_3 = 6\,[\kappa_{1,2,3,4}\kappa_{5,6}]$, $A_4 = 9\,[\kappa_{1,2,4,5}\kappa_{3,6}]$, and so on.
By p. 256: $\kappa_{12,34,56} = \sum_{j=1}^{11} C_j$, where $C_1 = 123456 = \kappa_{1,2,3,4,5,6}$, $C_2 = [6]\,12345.6$, $C_3 = [12]\,1235.46 = 13.2456 + 14.2356 + 15.2346 + 16.2345 + 23.1456 + 24.1356 + 25.1346 + 26.1345 + 35.1246 + 36.1245 + 45.1236 + 46.1235$, $C_4 = [12]\,1235.4.6 = 1.3.2456 + 1.4.2356 + 1.5.2346 + 1.6.2345 + 2.3.1456 + 2.4.1356 + 2.5.1346 + 2.6.1345 + 3.5.1246 + 3.6.1245 + 4.5.1236 + 4.6.1235$, $C_5 = [6]\,123.456 = 123.456 + 124.356 + 125.346 + 126.345 + 341.256 + 342.156$, $C_6 = [4]\,135.246 = 135.246 + 136.245 + 145.236 + 146.235$, $C_7 = [24]\,123.45.6 = 6.(54.123 + 53.124 + 52.134 + 51.234) + 5.(64.123 + 63.124 + 62.134 + 61.234) + 4.(36.125 + 35.126 + 32.156 + 31.256) + 3.(46.125 + 45.126 + 42.156 + 41.256) + 2.(16.345 + 15.346 + 14.356 + 13.456) + 1.(26.345 + 25.346 + 24.356 + 23.456)$, $C_8 = [24]\,135.46.2 = 135.(2.46 + 4.26 + 6.24) + 136.(2.45 + 4.25 + 5.24) + 145.(2.36 + 3.26 + 6.23) + 146.(2.35 + 3.25 + 5.23) + 235.(1.46 + 4.16 + 6.14) + 236.(1.45 + 4.15 + 5.14) + 245.(1.36 + 3.16 + 6.13) + 246.(1.35 + 3.15 + 5.13)$, $C_9 = [8]\,135.2.4.6 = 135.2.4.6 + 136.2.4.5 + 145.2.3.6 + 146.2.3.5 + 235.1.4.6 + 236.1.4.5 + 245.1.3.6 + 246.1.3.5$, $C_{10} = [8]\,13.25.46 = 13.(25.46 + 26.45) + 14.(25.36 + 26.35) + 15.(23.46 + 24.36) + 16.(23.45 + 24.35)$, $C_{11} = [24]\,13.25.4.6 = 13.(25.4.6 + 26.4.5) + 14.(25.3.6 + 26.3.5) + 15.(23.4.6 + 24.3.6) + 16.(23.4.5 + 24.3.5) + 31.(45.2.6 + 46.2.5) + 32.(45.1.6 + 46.1.5) + 35.(41.2.6 + 42.1.6) + 36.(41.2.5 + 42.1.5) + 51.(63.2.4 + 64.2.3) + 52.(63.1.4 + 64.1.3) + 53.(61.2.4 + 62.1.4) + 54.(61.2.3 + 62.1.3)$.
By p. 256: $\kappa_{1,2,34,56} = \sum_{j=1}^{10} F_j$ where, for example, $F_1 = 123456$, $F_2 = [4] = 3.12456 + 4.12356 + 5.12346 + 6.12345$.
By p. 258: $\kappa_{1,23,45,67} = \sum_{j=1}^{9} G_j$ where, for example, $G_1 = 1234567$, $G_2 = [6] = 12.34567 + 13.24567 + 14.23567 + 15.23467 + 16.23457 + 17.23456$.
By p. 259: $\kappa_{1234,5678} = \sum_{j=1}^{16} H_j$ where, for example, $H_1 = 12345678$, $H_2 = [12]\,123456.78$.
By p. 263: $\kappa_{12,34,56,78} = \sum_{j=1}^{12} D_j$ where $D_1 = 12345678$, $D_2 = [24]\,123457.68$, $D_3 = [24]\,12345.678$, $D_4 = [32]\,13578.246$, $D_5 = [24]\,1235.4678$, $D_6 = [8]\,1357.2468$, $D_7 = [72]\,1235.47.68$, $D_8 = [48]\,1357.24.68$, $D_9 = [72]\,123.457.68$, $D_{10} = [48]\,134.567.28$, $D_{11} = [96]\,137.258.46$, $D_{12} = [48]\,13.25.47.68$.

References

1. Andrews, D.W.K.; Lieberman, O. Valid Edgeworth expansions for the Whittle maximum likelihood estimator for stationary long-memory Gaussian time series. Econometric Theory 2005, 21, 710–734.
2. Bose, A. Edgeworth correction by bootstrap in autoregressions. Annals of Statistics 1988, 16, 1709–1722.
3. Cornish, E.A.; Fisher, R.A. Moments and cumulants in the specification of distributions. Rev. de l'Inst. Int. de Statist. 1937, 5, 307–322. Reproduced in the collected papers of R.A. Fisher, 4.
4. Fisher, R.A.; Cornish, E.A. The percentile points of distributions having known cumulants. Technometrics 1960, 2, 209–225.
5. Kitamura, Y. Empirical likelihood methods with weakly dependent processes. Annals of Statistics 1997, 25, 2084–2102.
6. Lahiri, S.N. Central limit theorems for weighted sums of a spatial process under a class of stochastic and fixed designs. Sankhyā: The Indian Journal of Statistics 2003, 65, 356–388.
7. Lahiri, S.N. Edgeworth expansions for Studentized statistics under weak dependence. Annals of Statistics 2010, 38, 388–434.
8. Lieberman, O. Review of Asymptotic Theory of Statistical Inference for Time Series, by M. Taniguchi and Y. Kakizawa. Econometric Theory 2002, 18, 993–999.
9. Lieberman, O.; Rousseau, J.; Zucker, D.M. Valid Edgeworth expansion for the sample autocorrelation function under long range dependence. Econometric Theory 2001, 17, 257–275.
10. Lieberman, O.; Rousseau, J.; Zucker, D.M. Valid asymptotic expansions for the maximum likelihood estimator of the parameter of a stationary, Gaussian, strongly dependent process. Annals of Statistics 2003, 31, 586–612.
11. Loh, W.-L. An Edgeworth expansion for U-statistics with weakly dependent observations. Statistica Sinica 1996, 6, 171–186.
12. McCullagh, P.; Wilks, A.R. Extended tables of complementary set partitions. AT&T Bell Labs Tech. Memorandum 11214-850328-8, 1985.
13. McCullagh, P. Tensor Methods in Statistics. Chapman and Hall: London, 1987.
14. McCullagh, P.; Wilks, A.R. Complementary set partitions. Proc. Royal Soc. London, Series A 1988, 415, 347–362.
15. Phillips, P.C.B. Asymptotic expansions in nonstationary vector autoregressions. Econometric Theory 1987, 3, 45–68.
16. Stuart, A.; Ord, K. Kendall's Advanced Theory of Statistics, 1, 5th edition. Griffin: London, 1987.
17. Winterbottom, A. Cornish-Fisher expansions for confidence limits. J. R. Statist. Soc. B 1979, 41, 69–531.
18. Winterbottom, A. Asymptotic expansions to improve large sample confidence intervals for system reliability. Biometrika 1980, 67, 351–357.
19. Winterbottom, A. The interval estimation of system reliability from component test data. Operations Research 1984, 32, 628–640.
20. Withers, C.S. The distribution and quantiles of a function of parameter estimates. Ann. Inst. Statist. Math. A 1982, 34, 55–68. A correction to $a_{22}$ is given in Appendix A of Withers (2024).
21. Withers, C.S. Expansions for the distribution and quantiles of a regular functional of the empirical distribution with applications to nonparametric confidence intervals. Annals of Statistics 1983, 11, 577–587.
22. Withers, C.S. Asymptotic expansions for distributions and quantiles with power series cumulants. J. Royal Statist. Soc. B 1984, 46, 389–396. Corrigendum 1986, 48, 256.
23. Withers, C.S. 5th-order multivariate Edgeworth expansions for parametric estimates. Mathematics 2024, 12, 905. https://www.mdpi.com/2227-7390/12/6/905/pdf.
24. Withers, C.S. Edgeworth coefficients for standard multivariate estimates. Axioms 2025a. https://www.mdpi.com/2075-1680/14/8/632.
25. Withers, C.S. The distribution and quantiles of the sample mean from a stationary process. Axioms 2025b. https://www.mdpi.com/2075-1680/14/6/406.
26. Withers, C.S.; Nadarajah, S. Expansions for log densities of asymptotically normal estimates. Statistical Papers 2010a, 51, 247–257.
27. Withers, C.S.; Nadarajah, S. Tilted Edgeworth expansions for asymptotically normal vectors. Ann. Inst. Statist. Math. 2010b, 62, 1113–1142.
28. Withers, C.S.; Nadarajah, S. The joint cumulants of a linear process. International Journal of Agric. and Statist. Sciences 2010c, 6, 343–347.
29. Withers, C.S.; Nadarajah, S. Cornish-Fisher expansions for sample autocovariances and other functions of sample moments of linear processes. Brazilian Journal of Probability and Statistics 2012, 26, 149–166.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permit the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.