Statistics and Probability Letters 79 (2009) 105–111
Contents lists available at ScienceDirect
Statistics and Probability Letters
journal homepage: www.elsevier.com/locate/stapro
Limiting behaviour of moving average processes under ϕ-mixing assumption
Pingyan Chen (a), Tien-Chung Hu (b,∗), Andrei Volodin (c)
(a) Department of Mathematics, Jinan University, Guangzhou, 510630, PR China
(b) Department of Mathematics, National Tsing Hua University, Hsinchu 300, Taiwan, ROC
(c) Department of Mathematics and Statistics, University of Regina, Regina, Saskatchewan S4S 0A2, Canada
Article info
Article history:
Received 20 October 2006
Received in revised form 11 June 2008
Accepted 23 July 2008
Available online 5 August 2008
MSC: 60F15
Abstract
Let {Yi, −∞ < i < ∞} be a doubly infinite sequence of identically distributed ϕ-mixing random variables and {ai, −∞ < i < ∞} an absolutely summable sequence of real numbers. In this paper we prove the complete convergence and the Marcinkiewicz–Zygmund strong law of large numbers for the partial sums of moving average processes {Xn = ∑_{i=−∞}^∞ ai Y_{i+n}, n ≥ 1} based on the sequence {Yi, −∞ < i < ∞} of ϕ-mixing random variables, improving the result of [Zhang, L., 1996. Complete convergence of moving average processes under dependence assumptions. Statist. Probab. Lett. 30, 165–170].
© 2008 Elsevier B.V. All rights reserved.
1. Introduction and formulation of the main results
Let {Yi, −∞ < i < +∞} be a doubly infinite sequence of identically distributed random variables and {ai, −∞ < i < +∞} be an absolutely summable sequence of real numbers. Let

Xn = ∑_{i=−∞}^∞ ai Y_{i+n},  n ≥ 1,

be the moving average process based on the sequence {Yi, −∞ < i < +∞}. As usual, we denote by Sn = ∑_{k=1}^n Xk, n ≥ 1, the sequence of partial sums.
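To fix ideas, the process above can be sketched numerically. The following is a minimal simulation, not part of the paper: the doubly infinite sum is truncated at |i| ≤ M, the coefficients ai = 2^{−|i|} are one absolutely summable choice, and iid standard normal Yj stand in for the identically distributed ϕ-mixing sequence; all of these choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 20                       # truncation level of the doubly infinite sum (assumption)
N = 1000                     # number of terms X_1, ..., X_N
offsets = np.arange(-M, M + 1)
a = 0.5 ** np.abs(offsets)   # an absolutely summable coefficient sequence

# Y_j for j = 1 - M, ..., N + M (iid standard normal as a stand-in
# for the identically distributed phi-mixing sequence).
y = rng.standard_normal(N + 2 * M)

def X(n):
    # X_n = sum_{i=-M}^{M} a_i Y_{i+n}; index j = i + n maps to y[j + M - 1]
    return np.sum(a * y[offsets + n + M - 1])

x = np.array([X(n) for n in range(1, N + 1)])
S = np.cumsum(x)             # partial sums S_n = X_1 + ... + X_n
print(S[-1])
```

The truncation level M only matters for the illustration; the paper works with the full doubly infinite sum under absolute summability.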
Under the assumption that {Yi , −∞ < i < +∞} is a sequence of independent identically distributed random variables,
many limiting results have been obtained for the moving average process {Xn , n ≥ 1}. For example, Ibragimov (1962)
established the central limit theorem, Burton and Dehling (1990) obtained a large deviation principle, and Li et al. (1992)
obtained the complete convergence result for {Xn , n ≥ 1}.
Of course, even if {Yi, −∞ < i < +∞} is a sequence of independent identically distributed random variables, the moving average random variables {Xn, n ≥ 1} are dependent. This kind of dependence is called weak dependence. The partial sums of the weakly dependent random variables {Xn, n ≥ 1} exhibit limiting behaviour similar to that of partial sums of independent identically distributed random variables.
For example, we present some of the previous results connected with complete convergence. The following was proved in Hsu and Robbins (1947).
Theorem A. Suppose {Xn, n ≥ 1} is a sequence of independent identically distributed random variables. If EX1 = 0 and E|X1|^2 < ∞, then ∑_{n=1}^∞ P{|Sn| ≥ ε n} < ∞ for all ε > 0.
The above result was extended to moving average processes by Li et al. (1992).
∗ Corresponding author. Tel.: +886 3 5742642; fax: +886 3 5723888.
E-mail address: tchu@math.nthu.edu.tw (T.-C. Hu).
0167-7152/$ – see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.spl.2008.07.026
Theorem B. Suppose {Xn, n ≥ 1} is the moving average process based on a sequence {Yi, −∞ < i < ∞} of independent identically distributed random variables with EY1 = 0 and E|Y1|^2 < ∞. Then ∑_{n=1}^∞ P{|Sn| ≥ ε n} < ∞ for all ε > 0.
Very few results for a moving average process based on a dependent sequence are known. In this paper, we provide two
results on the limiting behaviour of a moving average process based on a ϕ -mixing sequence.
Let {Yi, −∞ < i < ∞} be a sequence of random variables defined on a probability space (Ω, F, P) and denote the σ-algebras F_n^m = σ(Yi, n ≤ i ≤ m), −∞ ≤ n ≤ m ≤ +∞.
Recall that a sequence of random variables {Yi, −∞ < i < ∞} is called ϕ-mixing if the mixing coefficient

ϕ(m) = sup_{k≥1} sup{|P{B|A} − P{B}| : A ∈ F_{−∞}^k, P{A} ≠ 0, B ∈ F_{k+m}^∞} → 0  as m → ∞.
Recall that a function h is said to be slowly varying at infinity if it is real valued, positive and measurable on [0, ∞), and if for each λ > 0

lim_{x→∞} h(λx)/h(x) = 1.
We refer to Seneta (1976) for other equivalent definitions and for a detailed and comprehensive study of properties of slowly
varying functions.
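As a quick numerical illustration (not part of the paper), one can check the defining limit for the canonical example h(x) = log x, for which h(λx)/h(x) = 1 + log λ / log x:

```python
import math

# h(x) = log x is a standard example of a function slowly varying at infinity:
# h(lambda * x) / h(x) = 1 + log(lambda) / log(x) -> 1 as x -> infinity.
def ratio(h, lam, x):
    return h(lam * x) / h(x)

h = math.log
ratios = [ratio(h, 5.0, x) for x in (1e2, 1e4, 1e8, 1e16)]
print(ratios)  # each ratio is closer to 1 than the previous one
```

By contrast, a regularly varying function such as h(x) = x fails the test, since then h(λx)/h(x) = λ for every x.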
In the following, we frequently use the following properties of slowly varying functions (cf. Seneta (1976)). If h is a function slowly varying at infinity, then for any 0 ≤ a ≤ b ≤ ∞ and s ≠ −1

∫_a^b x^s h(x) dx ≤ C x^{s+1} h(x) |_a^b,

where C does not depend on a and b, and for any λ > 0

max_{a≤x≤λa} h(x) ≤ C(λ) h(λa).

Of course, these two inequalities take place only if the right hand sides make sense.
The following result on partial sums of ϕ-mixing random variables was proved in Shao (1988), Remarks 3.2 and 3.3.
Theorem C. Let h be a function slowly varying at infinity, 1 ≤ p < 2, r ≥ 1, and {Xi, i ≥ 1} be a sequence of identically distributed ϕ-mixing random variables with EX1 = 0 and E|X1|^{rp} h(|X1|^p) < ∞.
(i) If r > 1, then

∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |Sk| ≥ ε n^{1/p}} < ∞,  for all ε > 0,

and

∑_{n=1}^∞ n^{r−2} h(n) P{sup_{k≥n} |Sk|/k^{1/p} ≥ ε} < ∞,  for all ε > 0.

(ii) If r = 1 and ∑_{m=1}^∞ ϕ^{1/2}(2^m) < ∞, then

∑_{n=1}^∞ (h(n)/n) P{max_{1≤k≤n} |Sk| ≥ ε n^{1/p}} < ∞,  for all ε > 0.
For moving average processes, Zhang (1996) obtained the following result.
Theorem D. Let h be a function slowly varying at infinity, 1 ≤ p < 2, and r ≥ 1. Suppose that {Xn, n ≥ 1} is a moving average process based on a sequence {Yi, −∞ < i < ∞} of identically distributed ϕ-mixing random variables with ∑_{m=1}^∞ ϕ^{1/2}(m) < ∞. If EY1 = 0 and E|Y1|^{rp} h(|Y1|^p) < ∞, then

∑_{n=1}^∞ n^{r−2} h(n) P{|Sn| ≥ ε n^{1/p}} < ∞,  for all ε > 0.
Keeping in mind the above mentioned analogy between the ‘‘usual’’ limiting behaviour of random variables and the limiting behaviour of the moving average process (cf. Theorems A and B), we note that there is a substantial gap between Theorems C and D. Firstly, when r > 1, Theorem C provides the result without any mixing rate, and even when r = 1, Theorem C requires a weaker condition on the mixing rate than Theorem D. Secondly, Theorem D does not discuss the complete convergence for the maximums and supremums of the partial sums as is done in Theorem C. Note that by the method of Zhang (1996) it is impossible to eliminate these differences. The main goal of the present investigation is to obtain results similar to Theorem C, but for moving average processes and using methods different from those in Zhang (1996).
Now we state the main results. Theorems 1 and 2 improve Theorem D and extend Theorem C to the case of moving average processes. The proofs will be detailed in the next section.
Theorem 1. Let h be a function slowly varying at infinity, 1 ≤ p < 2, and r > 1. Suppose that {Xn, n ≥ 1} is a moving average process based on a sequence {Yi, −∞ < i < ∞} of identically distributed ϕ-mixing random variables. If EY1 = 0 and E|Y1|^{rp} h(|Y1|^p) < ∞, then
(i) ∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |Sk| ≥ ε n^{1/p}} < ∞, for all ε > 0, and
(ii) ∑_{n=1}^∞ n^{r−2} h(n) P{sup_{k≥n} |Sk|/k^{1/p} ≥ ε} < ∞, for all ε > 0.
The second theorem treats the case r = 1.
Theorem 2. Let h be a function slowly varying at infinity and 1 ≤ p < 2. Assume that ∑_{i=−∞}^∞ |ai|^θ < ∞, where θ belongs to (0, 1) if p = 1 and θ = 1 if 1 < p < 2. Suppose that {Xn, n ≥ 1} is a moving average process based on a sequence {Yi, −∞ < i < ∞} of identically distributed ϕ-mixing random variables with ∑_{m=1}^∞ ϕ^{1/2}(2^m) < ∞. If EY1 = 0 and E|Y1|^p h(|Y1|^p) < ∞, then

∑_{n=1}^∞ (h(n)/n) P{max_{1≤k≤n} |Sk| ≥ ε n^{1/p}} < ∞,  for all ε > 0.

In particular, the assumptions EY1 = 0 and E|Y1|^p < ∞ imply the following Marcinkiewicz–Zygmund strong law of large numbers:
Sn/n^{1/p} → 0 almost surely as n → ∞.
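The Marcinkiewicz–Zygmund conclusion can be illustrated by simulation. The sketch below is not from the paper and makes several simplifying assumptions: iid normal innovations serve as a trivially ϕ-mixing sequence (ϕ ≡ 0), the coefficient window is truncated at |i| ≤ M with ai = 2^{−|i|} (so θ = 1 works for the chosen p = 1.5), and only one sample path is inspected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not from the paper): p in (1, 2), truncated
# coefficient window a_i = 2^{-|i|} for |i| <= M, iid normal Y_j.
p = 1.5
M = 10
a = 0.5 ** np.abs(np.arange(-M, M + 1))  # absolutely summable coefficients

N = 200_000
y = rng.standard_normal(N + 2 * M)       # EY = 0 and E|Y|^p < infinity

# a is symmetric, so convolution equals the moving average X_n = sum_i a_i Y_{i+n}
x = np.convolve(y, a, mode="valid")[:N]
S = np.cumsum(x)
n = np.arange(1, N + 1)
normalized = np.abs(S) / n ** (1.0 / p)  # should drift toward 0 along the path
print(normalized[99], normalized[-1])
```

Since the partial sums grow like √n while the normalization is n^{2/3}, the normalized path decays roughly like n^{−1/6} on this example, which is slow; a long horizon is needed to see the trend.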
2. A few technical lemmas
The following five lemmas will be useful. The first two can be found in Shao (1988), Lemma 3.1 and Corollary 2.1; hence we omit their proofs. For the first two lemmas we assume that {Yn, n ≥ 1} is a ϕ-mixing sequence and Sk(n) = ∑_{i=k+1}^{k+n} Yi, n ≥ 1, k ≥ 0.
Lemma 1. Let EYi = 0, EYi^2 < ∞ for all i ≥ 1. Then for all n ≥ 1 and k ≥ 0 we have

E Sk^2(n) ≤ 8000 n exp{6 ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i)} max_{k+1≤i≤k+n} EYi^2.
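In the special case of an iid sequence the mixing coefficient vanishes, so the exponential factor in Lemma 1 equals 1 and the bound reads E S^2(n) ≤ 8000 n EY^2. The following Monte Carlo sanity check of that special case is an illustration, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# iid case: phi(m) = 0, so Lemma 1 reduces to E S^2(n) <= 8000 * n * E Y^2.
n, reps = 1_000, 2_000
y = rng.standard_normal((reps, n))                # EY = 0, EY^2 = 1
s = y.sum(axis=1)                                 # reps independent copies of S(n)
empirical_second_moment = float(np.mean(s ** 2))  # close to n * EY^2 = 1000
bound = 8000 * n * 1.0
print(empirical_second_moment, bound)
```

For iid summands E S^2(n) is exactly n EY^2, so the constant 8000 is far from sharp here; its role in Lemma 1 is to absorb the general ϕ-mixing dependence.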
Lemma 2. Suppose that there exists an array {C_{k,n}, k ≥ 0, n ≥ 1} of positive numbers such that max_{1≤i≤n} E Sk^2(i) ≤ C_{k,n} for every k ≥ 0, n ≥ 1. Then for any q ≥ 2, there exists C = C(q, ϕ(·)) such that for any k ≥ 0, n ≥ 1

E max_{1≤i≤n} |Sk(i)|^q ≤ C (C_{k,n}^{q/2} + E max_{k<i≤k+n} |Yi|^q).
The next two lemmas seem to be known (cf., for example, the proof of Theorem G in Chen et al. (2006)), but we include their short and simple proofs for the interested reader. Here we let h be a function slowly varying at infinity.
Lemma 3. If r > 1 and 1 ≤ p < 2, then for any ε > 0

∑_{n=1}^∞ n^{r−2} h(n) P{sup_{k≥n} k^{−1/p} |Sk| ≥ ε} ≤ C ∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |Sk| ≥ (ε/2^{1/p}) n^{1/p}}.
Proof. We have the following estimations:

∑_{n=1}^∞ n^{r−2} h(n) P{sup_{k≥n} |Sk|/k^{1/p} > ε}
= ∑_{m=1}^∞ ∑_{n=2^{m−1}}^{2^m −1} n^{r−2} h(n) P{sup_{k≥n} |Sk|/k^{1/p} > ε}
≤ C ∑_{m=1}^∞ 2^{m(r−1)} h(2^m) P{sup_{k≥2^{m−1}} |Sk|/k^{1/p} > ε}
= C ∑_{m=1}^∞ 2^{m(r−1)} h(2^m) P{sup_{l≥m} max_{2^{l−1}<k≤2^l} |Sk|/k^{1/p} > ε}
≤ C ∑_{m=1}^∞ 2^{m(r−1)} h(2^m) ∑_{l=m}^∞ P{max_{1≤k≤2^l} |Sk| > ε 2^{(l−1)/p}}
= C ∑_{l=1}^∞ P{max_{1≤k≤2^l} |Sk| > ε 2^{(l−1)/p}} ∑_{m=1}^l 2^{m(r−1)} h(2^m)
≤ C ∑_{l=1}^∞ 2^{l(r−1)} h(2^l) P{max_{1≤k≤2^l} |Sk| > ε 2^{(l−1)/p}}
≤ C ∑_{l=1}^∞ ∑_{n=2^{l−1}}^{2^l −1} n^{r−2} h(n) P{max_{1≤k≤n} |Sk| > (ε/2^{1/p}) n^{1/p}}
≤ C ∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |Sk| > (ε/2^{1/p}) n^{1/p}}. □

Lemma 4. Let Y be a random variable with E|Y|^{rp} h(|Y|^p) < ∞, where r ≥ 1 and p ≥ 1. If q > rp, then

∑_{n=1}^∞ n^{r−1−q/p} h(n) E|Y|^q I{|Y| ≤ n^{1/p}} ≤ C E|Y|^{rp} h(|Y|^p).
Proof. Since r − q/p < 0, we have that

∑_{n=1}^∞ n^{r−1−q/p} h(n) E|Y|^q I{|Y| ≤ n^{1/p}}
= ∑_{n=1}^∞ n^{r−1−q/p} h(n) ∑_{m=1}^n E|Y|^q I{m − 1 < |Y|^p ≤ m}
= ∑_{m=1}^∞ E|Y|^q I{m − 1 < |Y|^p ≤ m} ∑_{n=m}^∞ n^{r−1−q/p} h(n)
≤ C ∑_{m=1}^∞ m^{r−q/p} h(m) E|Y|^q I{m − 1 < |Y|^p ≤ m}
≤ C ∑_{m=1}^∞ E m^{r−q/p} h(m) |Y|^q I{m − 1 < |Y|^p ≤ m}
≤ C ∑_{m=1}^∞ E (|Y|^p)^{r−q/p} h(|Y|^p) |Y|^q I{m − 1 < |Y|^p ≤ m}
≤ C ∑_{m=1}^∞ E |Y|^{rp} h(|Y|^p) I{m − 1 < |Y|^p ≤ m}
≤ C E|Y|^{rp} h(|Y|^p). □

The last lemma presents a technical fact that is important in the proofs of Theorems 1 and 2.
Lemma 5. Let h be a function slowly varying at infinity and p ≥ 1. Suppose that {Xn, n ≥ 1} is a moving average process based on a sequence {Yi, −∞ < i < ∞} of mean zero identically distributed random variables such that E|Y1|^p < ∞. For any ε > 0 denote

I := ∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Yj I{|Yj| > n^{1/p}}| ≥ ε n^{1/p}/2}

and

J := ∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Ynj| ≥ ε n^{1/p}/4},

where

Ynj = Yj I{|Yj| ≤ n^{1/p}} − E Yj I{|Yj| ≤ n^{1/p}}.

If I < ∞ and J < ∞, then

∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |Sk| ≥ ε n^{1/p}} ≤ I + J < ∞.
Proof. Note that

∑_{k=1}^n Xk = ∑_{k=1}^n ∑_{i=−∞}^∞ ai Y_{i+k} = ∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+n} Yj

and, since ∑_{i=−∞}^∞ |ai| < ∞ and EYj = 0,

n^{−1/p} |E ∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+n} Yj I{|Yj| ≤ n^{1/p}}| = n^{−1/p} |E ∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+n} Yj I{|Yj| > n^{1/p}}|
≤ n^{−1/p} ∑_{i=−∞}^∞ |ai| ∑_{j=i+1}^{i+n} E|Yj| I{|Yj| > n^{1/p}}
= n^{1−1/p} (∑_{i=−∞}^∞ |ai|) E|Y1| I{|Y1| > n^{1/p}}
≤ C E (n^{1/p})^{p−1} |Y1| I{|Y1| > n^{1/p}}
≤ C E|Y1|^p I{|Y1| > n^{1/p}} → 0,  as n → ∞.

Hence for n large enough we have

n^{−1/p} |E ∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+n} Yj I{|Yj| ≤ n^{1/p}}| < ε/4.

Then

∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |Sk| ≥ ε n^{1/p}}
≤ C ∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Yj I{|Yj| > n^{1/p}}| ≥ ε n^{1/p}/2}
+ C ∑_{n=1}^∞ n^{r−2} h(n) P{max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Ynj| ≥ ε n^{1/p}/4}
= I + J. □

3. Proof of main results
With all the prerequisites in place, we can now prove the main results of the paper. We start with Theorem 1.
Proof. According to Lemma 3 it is enough to show that (i) holds. According to Lemma 5 it is enough to prove that I < ∞ and J < ∞.
For I, by the Markov inequality we have

I ≤ C ∑_{n=1}^∞ n^{r−2} h(n) n^{−1/p} E max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Yj I{|Yj| > n^{1/p}}|
≤ C ∑_{n=1}^∞ n^{r−1−1/p} h(n) E|Y1| I{|Y1| > n^{1/p}}
= C ∑_{n=1}^∞ n^{r−1−1/p} h(n) ∑_{m=n}^∞ E|Y1| I{m < |Y1|^p ≤ m + 1}
= C ∑_{m=1}^∞ E|Y1| I{m < |Y1|^p ≤ m + 1} ∑_{n=1}^m n^{r−1−1/p} h(n)
≤ C ∑_{m=1}^∞ m^{r−1/p} h(m) E|Y1| I{m < |Y1|^p ≤ m + 1}
≤ C E|Y1|^{rp} h(|Y1|^p) < ∞.
For J, by the Markov and Hölder inequalities and Lemmas 1 and 2, we have that for any q ≥ 2

J ≤ C ∑_{n=1}^∞ n^{r−2} h(n) n^{−q/p} E max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Ynj|^q
≤ C ∑_{n=1}^∞ n^{r−2} h(n) n^{−q/p} E (∑_{i=−∞}^∞ |ai| max_{1≤k≤n} |∑_{j=i+1}^{i+k} Ynj|)^q
≤ C ∑_{n=1}^∞ n^{r−2−q/p} h(n) (∑_{i=−∞}^∞ |ai|)^{q−1} ∑_{i=−∞}^∞ |ai| E max_{1≤k≤n} |∑_{j=i+1}^{i+k} Ynj|^q
≤ C ∑_{n=1}^∞ n^{r−2−q/p} h(n) (n exp{6 ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i)} E|Y1|^2 I{|Y1| ≤ n^{1/p}})^{q/2}
+ C ∑_{n=1}^∞ n^{r−1−q/p} h(n) E|Y1|^q I{|Y1| ≤ n^{1/p}}
=: J1 + J2.

Note that ϕ(m) → 0 as m → ∞, hence ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i) = o(log n). Furthermore, exp{A ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i)} = o(n^t) for any A > 0 and t > 0.

We consider two separate cases. If rp < 2, take q = 2. Note that in this case r − (2r/p) < 0. Take t > 0 small enough such that r − (2r/p) + t < 0. We have

J1 = C ∑_{n=1}^∞ n^{r−2−2/p} h(n) (n exp{6 ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i)}) E|Y1|^2 I{|Y1| ≤ n^{1/p}}
= C ∑_{n=1}^∞ n^{r−(2/p)−1} h(n) exp{6 ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i)} E|Y1|^2 I{|Y1| ≤ n^{1/p}}
≤ C ∑_{n=1}^∞ n^{r−2/p+t−1} h(n) E|Y1|^{rp} |Y1|^{2−rp} I{|Y1| ≤ n^{1/p}}
≤ C ∑_{n=1}^∞ n^{r−(2r/p)+t−1} h(n) E|Y1|^{rp} < ∞.

If rp ≥ 2, take q > 2p(r − 1)/(2 − p). We have that r − (q/p) + (q/2) < 1. Next, take t > 0 small enough such that r − (q/p) + (q/2) + t < 1. Note that in this case E|Y1|^2 < ∞. We have

J1 = C ∑_{n=1}^∞ n^{r−2−q/p} h(n) (n exp{6 ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i)})^{q/2} (E|Y1|^2 I{|Y1| ≤ n^{1/p}})^{q/2}
≤ C ∑_{n=1}^∞ n^{r−(q/p)+(q/2)+t−2} h(n) < ∞.

By Lemma 4 we have that J2 < ∞. □
Next, we prove Theorem 2.
Proof. By Lemma 5 we only need to show that I < ∞ and J < ∞ with r = 1.
For I, by the Markov and Cr-inequalities (note that θ ≤ 1),

I ≤ C ∑_{n=1}^∞ n^{−1} h(n) n^{−θ/p} E max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Yj I{|Yj| > n^{1/p}}|^θ
≤ C ∑_{n=1}^∞ n^{−θ/p} h(n) E|Y1|^θ I{|Y1| > n^{1/p}}
= C ∑_{n=1}^∞ n^{−θ/p} h(n) ∑_{m=n}^∞ E|Y1|^θ I{m < |Y1|^p ≤ m + 1}
= C ∑_{m=1}^∞ E|Y1|^θ I{m < |Y1|^p ≤ m + 1} ∑_{n=1}^m n^{−θ/p} h(n)
≤ C ∑_{m=1}^∞ m^{1−θ/p} h(m) E|Y1|^θ I{m < |Y1|^p ≤ m + 1}
≤ C E|Y1|^p h(|Y1|^p) < ∞.
For J, by the Markov and Hölder inequalities and Lemma 1,

J ≤ C ∑_{n=1}^∞ n^{−1} h(n) n^{−2/p} E max_{1≤k≤n} |∑_{i=−∞}^∞ ai ∑_{j=i+1}^{i+k} Ynj|^2
≤ C ∑_{n=1}^∞ n^{−1−2/p} h(n) (∑_{i=−∞}^∞ |ai|) ∑_{i=−∞}^∞ |ai| E max_{1≤k≤n} |∑_{j=i+1}^{i+k} Ynj|^2
≤ C ∑_{n=1}^∞ n^{−1−2/p} h(n) (n exp{6 ∑_{i=1}^{[log n]} ϕ^{1/2}(2^i)}) E|Y1|^2 I{|Y1| ≤ n^{1/p}}
≤ C ∑_{n=1}^∞ n^{−2/p} h(n) E|Y1|^2 I{|Y1| ≤ n^{1/p}} < ∞,

where the exponential factor is bounded because ∑_{m=1}^∞ ϕ^{1/2}(2^m) < ∞. The last inequality holds by Lemma 4.
Now we will show the almost sure convergence. By the first part of Theorem 2, EY1 = 0 and E|Y1|^p < ∞ imply

∑_{n=1}^∞ n^{−1} P{max_{1≤k≤n} |Sk| ≥ ε n^{1/p}} < ∞,  for all ε > 0.

Hence

∞ > ∑_{n=1}^∞ n^{−1} P{max_{1≤m≤n} |Sm| > ε n^{1/p}}
= ∑_{k=1}^∞ ∑_{n=2^{k−1}}^{2^k} n^{−1} P{max_{1≤m≤n} |Sm| > ε n^{1/p}}
≥ (1/2) ∑_{k=1}^∞ P{max_{1≤m≤2^{k−1}} |Sm| > ε 2^{k/p}}.

By the Borel–Cantelli lemma,

2^{−k/p} max_{1≤m≤2^k} |Sm| → 0  almost surely,

which implies that Sn/n^{1/p} → 0 almost surely. □
Acknowledgements
The research of P. Chen has been supported by the National Natural Science Foundation of China. The research of T.-C. Hu
has been partially supported by the National Science Council of R.O.C. The research of A. Volodin has been partially supported
by the National Science and Engineering Research Council of Canada.
References
Burton, R.M., Dehling, H., 1990. Large deviations for some weakly dependent random processes. Statist. Probab. Lett. 9, 397–401.
Chen, P., Hu, T.-C., Volodin, A., 2006. A note on the rate of complete convergence for maximums of partial sums for moving average processes in Rademacher type Banach spaces. Lobachevskii J. Math. 21, 45–55.
Hsu, P.L., Robbins, H., 1947. Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25–31.
Ibragimov, I.A., 1962. Some limit theorems for stationary processes. Theory Probab. Appl. 7, 349–382.
Li, D., Rao, M.B., Wang, X.C., 1992. Complete convergence of moving average processes. Statist. Probab. Lett. 14, 111–114.
Seneta, E., 1976. Regularly Varying Functions. In: Lecture Notes in Math., vol. 508. Springer, Berlin.
Shao, Q.M., 1988. A moment inequality and its application. Acta Math. Sinica 31, 736–747 (in Chinese).
Zhang, L., 1996. Complete convergence of moving average processes under dependence assumptions. Statist. Probab. Lett. 30, 165–170.