
International Scholarly Research Network
ISRN Probability and Statistics
Volume 2012, Article ID 192427, 10 pages
doi:10.5402/2012/192427

Research Article
A Note on the Central Limit Theorems for Dependent Random Variables

Yilun Shang

Institute for Cyber Security, University of Texas at San Antonio, San Antonio, TX 78249, USA

Correspondence should be addressed to Yilun Shang, [email protected]

Received 25 June 2012; Accepted 13 September 2012

Academic Editors: D. Fiems, A. Hutt, and M. Montero

Copyright © 2012 Yilun Shang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The classical central limit theorem is considered the heart of probability and statistics theory. Our interest in this paper is central limit theorems for functions of random variables under mixing conditions. We impose mixing conditions on the differences between the joint cumulative distribution functions and the product of the marginal cumulative distribution functions. By using characteristic functions, we obtain several limit theorems extending previous results.

1. Introduction

The central limit theorem is one of the most remarkable results of the theory of probability [1] and is critical to understanding inferential statistics and hypothesis testing [2, 3]. The assumption of independence for a sequence of observations X1, X2, . . . is often a technical convenience. Real data frequently exhibit some dependence, and at the least some correlation at small lags. One extensively investigated kind of dependence is the m-dependence case (see, e.g., [4–8]), in which random variables are regarded as independent as long as they are more than m steps apart. More general measures of dependence are called mixing conditions, which are derived from estimates of the difference between characteristic functions of averages of dependent and of independent random variables. These conditions have some appealing physical interpretations. Various mixing conditions have been proposed by researchers; see, e.g., [9–14], to name just a few.

Our main interest in this note is the central limit theorem for dependent classes of random variables. Following the work [12], instead of estimating the difference between the characteristic functions of the sum of dependent random variables and those of the sum of independent random variables of the same distributions, we compute the exact value of this difference. Our results weaken the conditions imposed on the random variable sequences in


Theorems 1 and 2 of [12] and can be used to describe systems which are globally determined but locally random. It is noteworthy that the work [12] has been extended in another direction, where the sum of a random number of random variables is examined [15].

The rest of the paper is organized as follows. In Section 2, we present our central limit theorems, and in Section 3, we give the proofs.

2. Main Results

Before proceeding, we introduce some notation. For $a, b \in \mathbb{R}$, denote by $a \vee b$ and $a \wedge b$ the maximum and minimum of the two, respectively. Let $X_1, X_2, \ldots$ and $Z$ be random variables. The image of $Z$ is denoted by $I(Z) \subseteq \mathbb{R}$. We write $X_n \xrightarrow{D} Z$ to mean that $X_n$ converges in distribution to $Z$ as $n \to \infty$. Denote by $N(0,1)$ the standard normal variable. Let $E(Z)$ (or simply $EZ$) and $\mathrm{Var}(Z)$ represent the mean and variance of $Z$, respectively.

Theorem 2.1. Let $\{X_i\}_{i\ge 1}$ be a sequence of identically distributed random variables and $\{f_i\}_{i\ge 1}$ a sequence of measurable functions $f_i : I(X_1) \to \mathbb{R}$ satisfying, for all $x \in I(X_1)$, either

\[ 0 \le \alpha_1 x^{\gamma_1} + \beta_1 \le f_i(x) \le \alpha_2 x^{\gamma_2} + \beta_2, \tag{2.1} \]

or

\[ \alpha_1 x^{\gamma_1} + \beta_1 \le f_i(x) \le \alpha_2 x^{\gamma_2} + \beta_2 \le 0, \tag{2.2} \]

for some $\gamma_i > 0$ and $\alpha_i, \beta_i \in \mathbb{R}$, $i = 1, 2$. Assume that $\alpha_1 E X_1^{\gamma_1} + \beta_1 \ge 0$, $\alpha_2 E X_1^{\gamma_2} + \beta_2 \le 0$, and $E|X_1|^{(\gamma_1 \vee \gamma_2)(2+\varepsilon)} < +\infty$ for some $\varepsilon > 0$. Let $0 < \varepsilon_1 < \varepsilon/(2(1+\varepsilon))$ and $s_k^2 = \sum_{i=1}^{k} \mathrm{Var}(f_i(X_i))$. If, for sufficiently large $k$,

\[ \sup\Biggl\{ \Biggl| P\Biggl(\bigcap_{i=1}^{j} \{f_{v_i}(X_{v_i}) \le x_{v_i}\}\Biggr) - \prod_{i=1}^{j} P\bigl(f_{v_i}(X_{v_i}) \le x_{v_i}\bigr) \Biggr| : (x_{v_1}, \ldots, x_{v_j}) \in \mathbb{R}^j \Biggr\} \le \bigl(1 - k^{-\varepsilon_1}\bigr)^{k - k^{\varepsilon_1} - j}, \tag{2.3} \]

where $v_1, \ldots, v_j$ is any choice of indices such that $k^{\varepsilon_1} < v_1 < \cdots < v_j \le k$, then

\[ \frac{1}{s_k} \sum_{i=1}^{k} \bigl(f_i(X_i) - E f_i(X_i)\bigr) \xrightarrow{D} N(0,1) \tag{2.4} \]

as $k \to \infty$.
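As a numerical illustration of the conclusion (2.4), the sketch below simulates the special case $f_i(x) = x$ with independent $X_i$, for which the left-hand side of (2.3) is identically zero and the condition holds trivially. The Uniform(−1, 1) distribution and the sample sizes are illustrative choices of ours, not taken from the paper.

```python
import random
import statistics

# Special case of Theorem 2.1: f_i(x) = x with independent X_i, so the
# left-hand side of condition (2.3) vanishes.  We standardize the partial sums
# by s_k = sqrt(k * Var(X_1)) and check that the resulting sample of
# standardized sums looks standard normal.
random.seed(0)

k = 400          # number of summands per sum
trials = 2000    # number of independent standardized sums
var_x = 1.0 / 3.0            # variance of Uniform(-1, 1)
s_k = (k * var_x) ** 0.5     # s_k^2 = sum_i Var(f_i(X_i)) = k * Var(X_1)

samples = []
for _ in range(trials):
    total = sum(random.uniform(-1.0, 1.0) for _ in range(k))
    samples.append(total / s_k)   # (1/s_k) * sum_i (f_i(X_i) - E f_i(X_i))

mean = statistics.fmean(samples)
std = statistics.pstdev(samples)
print(mean, std)  # should be near 0 and 1, respectively
```

With the fixed seed this run is deterministic; the sample mean and standard deviation land close to the $N(0,1)$ values 0 and 1.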

Corollary 2.2. Let $\{X_i\}_{i\ge 1}$ be a sequence of identically distributed random variables and $\{f_i\}_{i\ge 1}$ a sequence of measurable functions $f_i : I(X_1) \to \mathbb{R}$ satisfying, for all $x \in I(X_1)$, expression (2.1) or (2.2). Assume that $E|X_1|^n < +\infty$ for all $n \in \mathbb{N}$. Suppose there exists $\delta$, $0 < \delta < 1$, such that for sufficiently large $k$ the inequality

\[ \sup\Biggl\{ \Biggl| P\Biggl(\bigcap_{i=1}^{j} \{f_{v_i}(X_{v_i}) \le x_{v_i}\}\Biggr) - \prod_{i=1}^{j} P\bigl(f_{v_i}(X_{v_i}) \le x_{v_i}\bigr) \Biggr| : (x_{v_1}, \ldots, x_{v_j}) \in \mathbb{R}^j \Biggr\} \le \bigl(1 - k^{-\delta/2}\bigr)^{k - k^{\delta/2} - j} \tag{2.5} \]

holds, where $v_1, \ldots, v_j$ is any choice of indices such that $k^{\delta/2} < v_1 < \cdots < v_j \le k$, then

\[ \frac{1}{s_k} \sum_{i=1}^{k} \bigl(f_i(X_i) - E f_i(X_i)\bigr) \xrightarrow{D} N(0,1) \tag{2.6} \]

as $k \to \infty$.

Generally, $\{f_i\}_{i\ge 1}$ is a sequence of nonlinearly bounded functions. If we set $\alpha_1 = \alpha_2 = \gamma_1 = \gamma_2 = 1$ and $\beta_1 = \beta_2 = 0$, then $f_i(x) = x$ for all $i$, which is the special case considered in [12] (Theorem 3). It is worth noting that the conditions of Theorem 2.1 involve the expression $X_1^{\gamma_1}$, where $\gamma_1$ is a real number. Naturally, we preclude the situation where the random variable $X_1$ takes negative values and $\gamma_1$ is not an integer, since such a power is undefined.

Fix $k$ and let $j$ be relatively small compared to $k$; then the right-hand side of (2.3) is close to zero. If instead $j$ is close to $k - k^{\varepsilon_1}$, then the right-hand side of (2.3) is close to 1. Hence, the dependence condition (2.3) allows stronger dependence within a larger class of random variables and demands more independence within a smaller class. This structure can be used to describe systems which are globally determined but locally random.
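This behavior of the right-hand side of (2.3), $b(k, j) = (1 - k^{-\varepsilon_1})^{k - k^{\varepsilon_1} - j}$, can be checked numerically; the values of $k$ and $\varepsilon_1$ below are illustrative choices of ours.

```python
# Right-hand side of (2.3): b(k, j) = (1 - k**(-eps1)) ** (k - k**eps1 - j).
# For fixed k it is essentially zero when j is small and close to 1 when j
# approaches k - k**eps1.
k = 10_000
eps1 = 0.2
m = k ** eps1                  # k^eps1; (2.3) concerns indices v_i > k^eps1

def bound(j):
    return (1.0 - k ** (-eps1)) ** (k - m - j)

small_j = bound(2)             # j much smaller than k - k^eps1
large_j = bound(int(k - m))    # j close to k - k^eps1
print(small_j, large_j)
```

For these parameters the small-$j$ value underflows to essentially zero, while the large-$j$ value is close to 1.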

Theorem 2.1 and Corollary 2.2 deal with dependence conditions on an "event-to-event" basis, while the following analogous results treat dependence in an "average" sense.

Theorem 2.3. Let $\{X_i\}_{i\ge 1}$ be a sequence of identically distributed random variables and $\{f_i\}_{i\ge 1}$ a sequence of measurable functions $f_i : I(X_1) \to \mathbb{R}$ satisfying, for all $x \in I(X_1)$, either

\[ 0 \le \alpha_1 x^{\gamma_1} + \beta_1 \le f_i(x) \le \alpha_2 x^{\gamma_2} + \beta_2, \tag{2.7} \]

or

\[ \alpha_1 x^{\gamma_1} + \beta_1 \le f_i(x) \le \alpha_2 x^{\gamma_2} + \beta_2 \le 0, \tag{2.8} \]

for some $\gamma_i > 0$ and $\alpha_i, \beta_i \in \mathbb{R}$, $i = 1, 2$. Assume that $\alpha_1 E X_1^{\gamma_1} + \beta_1 \ge 0$, $\alpha_2 E X_1^{\gamma_2} + \beta_2 \le 0$, and $E|X_1|^{(\gamma_1 \vee \gamma_2)(2+\varepsilon)} < +\infty$ for some $\varepsilon > 0$. Let $0 < \varepsilon_1 < \varepsilon/(2(1+\varepsilon))$ and $s_k^2 = \sum_{i=1}^{k} \mathrm{Var}(f_i(X_i))$. If, for sufficiently large $k$,

\[ \int_{\mathbb{R}^j} \Biggl| P\Biggl(\bigcap_{i=1}^{j} \{f_{v_i}(X_{v_i}) \le x_{v_i}\}\Biggr) - \prod_{i=1}^{j} P\bigl(f_{v_i}(X_{v_i}) \le x_{v_i}\bigr) \Biggr| \, dx_{v_1} \cdots dx_{v_j} \le \bigl(1 - k^{-\varepsilon_1}\bigr)^{k - k^{\varepsilon_1} - j}, \tag{2.9} \]

where $v_1, \ldots, v_j$ is any choice of indices such that $k^{\varepsilon_1} < v_1 < \cdots < v_j \le k$, then

\[ \frac{1}{s_k} \sum_{i=1}^{k} \bigl(f_i(X_i) - E f_i(X_i)\bigr) \xrightarrow{D} N(0,1) \tag{2.10} \]

as $k \to \infty$.

Corollary 2.4. Let $\{X_i\}_{i\ge 1}$ be a sequence of identically distributed random variables and $\{f_i\}_{i\ge 1}$ a sequence of measurable functions $f_i : I(X_1) \to \mathbb{R}$ satisfying, for all $x \in I(X_1)$, expression (2.7) or (2.8). Assume that $E|X_1|^n < +\infty$ for all $n \in \mathbb{N}$. Suppose there exists $\delta$, $0 < \delta < 1$, such that for sufficiently large $k$ the inequality

\[ \int_{\mathbb{R}^j} \Biggl| P\Biggl(\bigcap_{i=1}^{j} \{f_{v_i}(X_{v_i}) \le x_{v_i}\}\Biggr) - \prod_{i=1}^{j} P\bigl(f_{v_i}(X_{v_i}) \le x_{v_i}\bigr) \Biggr| \, dx_{v_1} \cdots dx_{v_j} \le \bigl(1 - k^{-\delta/2}\bigr)^{k - k^{\delta/2} - j} \tag{2.11} \]

holds, where $v_1, \ldots, v_j$ is any choice of indices such that $k^{\delta/2} < v_1 < \cdots < v_j \le k$, then

\[ \frac{1}{s_k} \sum_{i=1}^{k} \bigl(f_i(X_i) - E f_i(X_i)\bigr) \xrightarrow{D} N(0,1) \tag{2.12} \]

as $k \to \infty$.

3. Proofs

We will prove Theorems 2.1 and 2.3 in this section through several lemmas; the proofs of the corollaries then follow directly.

For convenience, denote by $\bigl(\prod_{i=1}^{k} a_i\bigr) / \bigl(a_{v_1} \cdots a_{v_j}\bigr)$, $1 \le v_1 < \cdots < v_j \le k$, the product of all the $a_i$ except $a_{v_1}, \ldots, a_{v_j}$. The following lemma, regarding the difference between characteristic functions of sums of dependent and of independent random variables, is stated in [12] without proof.


Lemma 3.1. Let $X_1, X_2, \ldots, X_k$ be random variables satisfying $0 \le X_i \le M$, $1 \le i \le k$. Let $g_1, \ldots, g_k : [0, M] \to \mathbb{C}$ be absolutely continuous integrable functions. Then one has the identity

\[ E\Biggl(\prod_{i=1}^{k} g_i(X_i)\Biggr) - \prod_{i=1}^{k} E g_i(X_i) = \sum_{j=2}^{k} \sum_{1 \le v_1 < \cdots < v_j \le k} (-1)^j \frac{\prod_{i=1}^{k} g_i(M)}{g_{v_1}(M) \cdots g_{v_j}(M)} \int_0^M \cdots \int_0^M \prod_{i=1}^{j} g'_{v_i}(x_{v_i}) \times \Biggl( P\Biggl(\bigcap_{i=1}^{j} \{X_{v_i} \le x_{v_i}\}\Biggr) - \prod_{i=1}^{j} P(X_{v_i} \le x_{v_i}) \Biggr) dx_{v_1} \cdots dx_{v_j}. \tag{3.1} \]

Proof. Let $F_i(x) = P(X_i \le x)$ for $i \ge 1$. For $k = 1$, integration by parts gives

\[ E\bigl(g_1(X_1)\bigr) = \int_0^M g_1(x)\, dF_1(x) = g_1(M) - \int_0^M F_1(x)\, g_1'(x)\, dx. \tag{3.2} \]

Let $F(x_1, x_2) = P(X_1 \le x_1, X_2 \le x_2)$. Then, for $k = 2$, we derive from (3.2) that

\[ E\bigl(g_1(X_1) g_2(X_2)\bigr) - E g_1(X_1)\, E g_2(X_2) = \int_0^M\!\!\int_0^M g_1'(x_1)\, g_2'(x_2)\, F(x_1, x_2)\, dx_1\, dx_2 - \int_0^M g_1'(x_1)\, F_1(x_1)\, dx_1 \int_0^M g_2'(x_2)\, F_2(x_2)\, dx_2 = \int_0^M\!\!\int_0^M g_1'(x_1)\, g_2'(x_2) \bigl( F(x_1, x_2) - F_1(x_1) F_2(x_2) \bigr)\, dx_1\, dx_2. \tag{3.3} \]

The general result then follows by induction.
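The $k = 2$ identity (3.3) can be verified numerically for a concrete dependent pair. Below we take $X_1 = X_2 = U$ with $U$ uniform on $[0, M]$, so $F(x_1, x_2) = \min(x_1, x_2)/M$; the test functions $g_1, g_2$ are arbitrary smooth choices of ours, and the integrals are approximated by the midpoint rule.

```python
import math

# Numerical check of identity (3.3) for the fully dependent pair X1 = X2 = U,
# U uniform on [0, M]: F(x1, x2) = min(x1, x2)/M and F1(x) = F2(x) = x/M.
M = 1.0
n = 400                      # midpoint-rule resolution
h = M / n
xs = [(i + 0.5) * h for i in range(n)]

g1 = lambda x: x * x         # arbitrary smooth test functions
g2 = math.exp
dg1 = lambda x: 2.0 * x      # g1'
dg2 = math.exp               # g2'

# Left-hand side: E[g1(U) g2(U)] - E[g1(U)] E[g2(U)]  (midpoint rule in u).
E12 = sum(g1(u) * g2(u) for u in xs) * h / M
E1 = sum(g1(u) for u in xs) * h / M
E2 = sum(g2(u) for u in xs) * h / M
lhs = E12 - E1 * E2

# Right-hand side: double integral of g1'(x1) g2'(x2) (F(x1,x2) - F1(x1) F2(x2)).
rhs = 0.0
for x1 in xs:
    for x2 in xs:
        cov_df = min(x1, x2) / M - (x1 / M) * (x2 / M)
        rhs += dg1(x1) * dg2(x2) * cov_df
rhs *= h * h

print(lhs, rhs)  # the two sides agree up to discretization error
```

Both sides approximate $\mathrm{Cov}(g_1(U), g_2(U)) = (e - 2) - \tfrac{1}{3}(e - 1) \approx 0.1455$.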

The following lemma relates to the Lindeberg condition.

Lemma 3.2. Let $\{X_i\}_{i\ge 1}$ be a sequence of identically distributed random variables and let $f_i : I(X_1) \to \mathbb{R}$, $i \ge 1$, satisfy, for all $x \in I(X_1)$, expression (2.1) or (2.2). Assume that $\alpha_1 E X_1^{\gamma_1} + \beta_1 \ge 0$, $\alpha_2 E X_1^{\gamma_2} + \beta_2 \le 0$, and $E|X_1|^{(\gamma_1 \vee \gamma_2)(2+\varepsilon)} < +\infty$ for some $\varepsilon > 0$. Let $0 < \varepsilon_1 < \varepsilon/(2(1+\varepsilon))$. Then there exists a sequence of reals $\{\varepsilon_k\}_{k\ge 1}$ such that

\[ \varepsilon_k \to 0, \qquad \varepsilon_k s_k \to +\infty \quad \text{as } k \to \infty, \tag{3.4} \]

\[ \frac{1}{s_k} \sum_{i=1}^{k} \int_{\{|f_i(X_i)| > \varepsilon_k s_k\}} \bigl|f_i(X_i)\bigr|\, dP \to 0 \quad \text{as } k \to \infty, \tag{3.5} \]

\[ \frac{1}{s_k^2} \sum_{i=1}^{k} \int_{\{|f_i(X_i)| > \varepsilon_k s_k\}} f_i^2(X_i)\, dP \to 0 \quad \text{as } k \to \infty. \tag{3.6} \]

Proof. Without loss of generality, we can assume that the $f_i(X_i)$ are centered at 0, that is, $E f_i(X_i) = 0$. Indeed, let $f_i' := f_i - E f_i(X_1)$, so that $E f_i'(X_i) = 0$. By the assumptions (2.1) and (2.2), we have

\[ -\infty < \alpha_1 E X_1^{\gamma_1} + \beta_1 \le E f_i(X_1) \le \alpha_2 E X_1^{\gamma_2} + \beta_2 < +\infty. \tag{3.7} \]

Therefore, $f_i' : I(X_1) \to \mathbb{R}$ satisfies

\[ \alpha_1 x^{\gamma_1} + \beta_1 - \bigl(\alpha_2 E X_1^{\gamma_2} + \beta_2\bigr) \le f_i'(x) \le \alpha_2 x^{\gamma_2} + \beta_2 - \bigl(\alpha_1 E X_1^{\gamma_1} + \beta_1\bigr). \tag{3.8} \]

For $x \in I(X_1)$, if (2.1) holds, then $\alpha_1 x^{\gamma_1} + \beta_1 - (\alpha_2 E X_1^{\gamma_2} + \beta_2) \ge 0$; if (2.2) holds, then $\alpha_2 x^{\gamma_2} + \beta_2 - (\alpha_1 E X_1^{\gamma_1} + \beta_1) \le 0$. Furthermore, by our assumptions, we have $\alpha_1 E X_1^{\gamma_1} + \beta_1 - (\alpha_2 E X_1^{\gamma_2} + \beta_2) \ge 0$ and $\alpha_2 E X_1^{\gamma_2} + \beta_2 - (\alpha_1 E X_1^{\gamma_1} + \beta_1) \le 0$. Hence, we may assume that $E f_i(X_i) = 0$ without loss of generality.

Choose $\varepsilon_2$ so that $\varepsilon_1 < \varepsilon_2 < \varepsilon/(2(1+\varepsilon))$, and define $\varepsilon_k = k^{-\varepsilon_2}$ for $k \ge 3$. We will show that the constructed sequence $\{\varepsilon_k\}_{k\ge 1}$ is what we want.

Since $s_k^2 = \sum_{i=1}^{k} \mathrm{Var}(f_i(X_i))$ and the $X_i$ are identically distributed, by virtue of the bounds in (2.1) and (2.2) we obtain

\[ k C_1^2 \le s_k^2 \le k C_2^2, \tag{3.9} \]

where $C_1^2 = E\bigl( \bigl(\alpha_1^2 X_1^{2\gamma_1} + 2\alpha_1\beta_1 X_1^{\gamma_1} + \beta_1^2\bigr) \wedge \bigl(\alpha_2^2 X_1^{2\gamma_2} + 2\alpha_2\beta_2 X_1^{\gamma_2} + \beta_2^2\bigr) \bigr)$ and $C_2^2 = E\bigl( \bigl(\alpha_1^2 X_1^{2\gamma_1} + 2\alpha_1\beta_1 X_1^{\gamma_1} + \beta_1^2\bigr) \vee \bigl(\alpha_2^2 X_1^{2\gamma_2} + 2\alpha_2\beta_2 X_1^{\gamma_2} + \beta_2^2\bigr) \bigr)$.

By our assumptions, we have

\[ \varepsilon_k s_k \ge \frac{\sqrt{k}}{k^{\varepsilon_2}}\, C_1 \to +\infty \tag{3.10} \]

as $k \to \infty$, which verifies (3.4). Let $\Delta(X_1) := \bigl(|\alpha_1||X_1|^{\gamma_1} + |\beta_1|\bigr) \vee \bigl(|\alpha_2||X_1|^{\gamma_2} + |\beta_2|\bigr)$. We have

\[ \frac{1}{s_k} \sum_{i=1}^{k} \int_{\{|f_i(X_i)| > \varepsilon_k s_k\}} \bigl|f_i(X_i)\bigr|\, dP \le \frac{k}{s_k} \int_{\{\Delta(X_1) > \varepsilon_k s_k\}} \Delta(X_1)\, dP \le \frac{k}{s_k} \Biggl(\frac{1}{\varepsilon_k s_k}\Biggr)^{1+\varepsilon} \int_{\{\Delta(X_1) > \varepsilon_k s_k\}} \Delta(X_1)^{2+\varepsilon}\, dP. \tag{3.11} \]

Since $\varepsilon_2 < \varepsilon/(2(1+\varepsilon))$ and $C_1\sqrt{k} \le s_k \le C_2\sqrt{k}$, the factor before the integral in (3.11) tends to zero. Consequently, (3.5) readily holds, since the integral in (3.11) is bounded by our assumptions.

Now we verify (3.6). Note that

\[ \frac{1}{s_k^2} \sum_{i=1}^{k} \int_{\{|f_i(X_i)| > \varepsilon_k s_k\}} f_i^2(X_i)\, dP \le \frac{k}{s_k^2} \int_{\{\Delta(X_1) > \varepsilon_k s_k\}} \Delta(X_1)^2\, dP = O\Biggl( \int_{\{\Delta(X_1) > \varepsilon_k s_k\}} \Delta(X_1)^2\, dP \Biggr). \tag{3.12} \]

By the Markov inequality (see, e.g., [1]),

\[ P\bigl(\Delta(X_1)^2 > \varepsilon_k^2 s_k^2\bigr) \le \frac{E\Delta(X_1)^2}{\varepsilon_k^2 s_k^2} \to 0 \tag{3.13} \]

as $k \to \infty$. We have

\[ E\Delta(X_1)^2 = \int_{\{\Delta(X_1)^2 \le \varepsilon_k^2 s_k^2\}} \Delta(X_1)^2\, dP + \int_{\{\Delta(X_1)^2 > \varepsilon_k^2 s_k^2\}} \Delta(X_1)^2\, dP, \tag{3.14} \]

where the first integral in (3.14) tends to $E\Delta(X_1)^2 < \infty$ by (3.13). Therefore, the second integral in (3.14) tends to zero, which, together with (3.12), finally verifies (3.6).
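Since $s_k$ is of order $\sqrt{k}$ by (3.9) and $\varepsilon_2 < \varepsilon/(2(1+\varepsilon)) < 1/2$, the sequence $\varepsilon_k = k^{-\varepsilon_2}$ indeed satisfies (3.4): $\varepsilon_k \to 0$ while $\varepsilon_k s_k \gtrsim k^{1/2 - \varepsilon_2} \to \infty$. A quick numeric sanity check, with $\varepsilon_2$ and the stand-in constant $C_1 = 1$ as illustrative values:

```python
# Sanity check of (3.4) for eps_k = k**(-eps2) with eps2 < 1/2 and s_k of
# order sqrt(k), cf. (3.9)-(3.10): eps_k shrinks while eps_k * s_k grows.
eps2 = 0.2
C1 = 1.0

def eps_k(k):
    return k ** (-eps2)

def lower_eps_s(k):            # lower bound eps_k * s_k >= C1 * k**(1/2 - eps2)
    return C1 * k ** (0.5 - eps2)

print(eps_k(10 ** 8), lower_eps_s(10 ** 8))
```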

Proof of Theorem 2.1. We assume that $E f_i(X_i) = 0$. For large enough $k$, we have

\[ k^{\varepsilon_1} < \frac{k^{\varepsilon_2}}{\ln(1 + k^{\varepsilon_2})}. \tag{3.15} \]

In what follows, we suppose that $k^{\varepsilon_1}$ is an integer for notational convenience. In the sequel, we take the specific sequence $\{\varepsilon_k\}_{k\ge 1}$ constructed in Lemma 3.2, so that (3.15) amounts to $k^{\varepsilon_1} < \bigl(\varepsilon_k \ln(1 + \varepsilon_k^{-1})\bigr)^{-1}$.
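Inequality (3.15) can be spot-checked numerically: since $\varepsilon_1 < \varepsilon_2$, the power $k^{\varepsilon_2 - \varepsilon_1}$ eventually dominates the logarithm. The exponents below are illustrative choices of ours.

```python
import math

# Spot check of (3.15): k**eps1 < k**eps2 / ln(1 + k**eps2) for large k
# whenever eps1 < eps2, since the logarithm grows slower than any power.
eps1, eps2 = 0.1, 0.2

def holds(k):
    return k ** eps1 < k ** eps2 / math.log(1.0 + k ** eps2)

print([holds(10 ** p) for p in range(4, 16, 2)])
```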


We define a sequence $\{Y_{k,j}\}_{1 \le j \le k - k^{\varepsilon_1}}$ of random variables as follows:

\[ Y_{k,j} = f_{k^{\varepsilon_1}+j}\bigl(X_{k^{\varepsilon_1}+j}\bigr)\, I_{\{|f_{k^{\varepsilon_1}+j}(X_{k^{\varepsilon_1}+j})| \le \varepsilon_k s_k\}} + \varepsilon_k s_k - \varepsilon_k s_k\, I_{\{f_{k^{\varepsilon_1}+j}(X_{k^{\varepsilon_1}+j}) < -\varepsilon_k s_k\}} + \varepsilon_k s_k\, I_{\{f_{k^{\varepsilon_1}+j}(X_{k^{\varepsilon_1}+j}) > \varepsilon_k s_k\}}. \tag{3.16} \]

Hence, we have $0 \le Y_{k,j} \le 2\varepsilon_k s_k$ and $\{Y_{k,j} \le x\} = \{f_{k^{\varepsilon_1}+j}(X_{k^{\varepsilon_1}+j}) \le -\varepsilon_k s_k + x\}$ for $0 \le x \le 2\varepsilon_k s_k$. The inequality (2.3) then implies that, for large enough $k$,

\[ \sup\Biggl\{ \Biggl| P\Biggl(\bigcap_{i=1}^{j} \{Y_{k,v_i} \le x_{v_i}\}\Biggr) - \prod_{i=1}^{j} P(Y_{k,v_i} \le x_{v_i}) \Biggr| : (x_{v_1}, \ldots, x_{v_j}) \in [0, 2\varepsilon_k s_k]^j \Biggr\} \le \bigl(1 - k^{-\varepsilon_1}\bigr)^{k - k^{\varepsilon_1} - j}. \tag{3.17} \]

For $k \ge 1$, denote $c_k^2 = \sum_{j=1}^{k - k^{\varepsilon_1}} \mathrm{Var}(Y_{k,j})$. By Lemma 3.2, it follows that $c_k^2 / s_k^2 \to 1$ as $k \to \infty$.

Next, we calculate the difference between the characteristic functions of sums of independent and of dependent random variables. Taking $g_j(\cdot) = \exp(it(\cdot/c_k))$ and $M = 2\varepsilon_k s_k$ in Lemma 3.1, where $i = \sqrt{-1}$, we obtain

\[ \Biggl| E\exp\Biggl( \frac{it}{c_k} \sum_{j=1}^{k-k^{\varepsilon_1}} \bigl(Y_{k,j} - E Y_{k,j}\bigr) \Biggr) - \prod_{j=1}^{k-k^{\varepsilon_1}} E\exp\Biggl( \frac{it}{c_k} \bigl(Y_{k,j} - E Y_{k,j}\bigr) \Biggr) \Biggr| \]
\[ \le \Biggl| \sum_{j=2}^{k-k^{\varepsilon_1}} \sum_{1 \le v_1 < \cdots < v_j \le k-k^{\varepsilon_1}} (-1)^j \exp\Biggl( \frac{2\varepsilon_k s_k}{c_k}\, it\bigl(k - k^{\varepsilon_1} - j\bigr) \Biggr) \int_{[0, 2\varepsilon_k s_k]^j} \Biggl(\frac{it}{c_k}\Biggr)^j \exp\Biggl( \frac{it}{c_k} \bigl(x_{v_1} + \cdots + x_{v_j}\bigr) \Biggr) \Biggl( P\Biggl(\bigcap_{i=1}^{j} \{Y_{k,v_i} \le x_{v_i}\}\Biggr) - \prod_{i=1}^{j} P(Y_{k,v_i} \le x_{v_i}) \Biggr) dx_{v_1} \cdots dx_{v_j} \Biggr| \]
\[ \le c \sum_{j=2}^{k-k^{\varepsilon_1}} \binom{k - k^{\varepsilon_1}}{j} \bigl(1 - k^{-\varepsilon_1}\bigr)^{k - k^{\varepsilon_1} - j} \Biggl| 2t k^{-\varepsilon_2} \frac{s_k}{c_k} \Biggr|^j \le c\Biggl(1 - k^{-\varepsilon_1} + 2|t| k^{-\varepsilon_2} \frac{s_k}{c_k}\Biggr)^{k - k^{\varepsilon_1}}, \tag{3.18} \]

where the second inequality comes from (3.17) and $c$ is a positive constant. Since $\varepsilon_1 < \varepsilon_2$ and $s_k/c_k \to 1$ as $k \to \infty$, (3.18) tends to 0 as $k \to \infty$.

Define a sequence $\{W_j\}_{1 \le j \le k - k^{\varepsilon_1}}$ of independent random variables such that $W_j$ has the same distribution as $Y_{k,j}$. By our construction, for sufficiently large $k$,

\[ \Bigl\{ \bigl|Y_{k,j} - E Y_{k,j}\bigr| = \varepsilon_k c_k \Bigr\} = \Bigl\{ \bigl|f_{k^{\varepsilon_1}+j}\bigl(X_{k^{\varepsilon_1}+j}\bigr) - E f_{k^{\varepsilon_1}+j}\bigl(X_{k^{\varepsilon_1}+j}\bigr)\bigr| = \varepsilon_k s_k \Bigr\}. \tag{3.19} \]


By Lemma 3.2, it follows that $Y_{k,j}$ satisfies the Lindeberg condition. Accordingly, $W_j$ also satisfies the Lindeberg condition. Thus, the central limit theorem holds for the $W_j$, and since (3.18) approaches 0, the central limit theorem also holds for the $Y_{k,j}$, that is,

\[ \frac{1}{c_k} \sum_{j=1}^{k-k^{\varepsilon_1}} \bigl(Y_{k,j} - E Y_{k,j}\bigr) \xrightarrow{D} N(0,1) \tag{3.20} \]

as $k \to \infty$. Taking into account that $s_k/c_k \to 1$ and the definition of $Y_{k,j}$, by (3.18) we know that

\[ \frac{1}{s_k} \sum_{j=k^{\varepsilon_1}+1}^{k} f_j\bigl(X_j\bigr) \xrightarrow{D} N(0,1). \tag{3.21} \]

On the other hand, it is obvious that

\[ \frac{1}{s_k} \sum_{i=1}^{k^{\varepsilon_1}} \int \bigl|f_i(X_i)\bigr|\, dP = \frac{1}{s_k} \sum_{i=1}^{k^{\varepsilon_1}} \int_{\{|f_i(X_i)| \le \varepsilon_k s_k\}} \bigl|f_i(X_i)\bigr|\, dP + \frac{1}{s_k} \sum_{i=1}^{k^{\varepsilon_1}} \int_{\{|f_i(X_i)| > \varepsilon_k s_k\}} \bigl|f_i(X_i)\bigr|\, dP. \tag{3.22} \]

The first quantity on the right-hand side of (3.22) is bounded above by

\[ \frac{1}{s_k}\, \varepsilon_k s_k k^{\varepsilon_1} = \varepsilon_k k^{\varepsilon_1} \le \frac{1}{\ln\bigl(1 + \varepsilon_k^{-1}\bigr)} \to 0 \tag{3.23} \]

as $k \to \infty$, using (3.15). Combining (3.5), (3.22), and (3.23), we obtain

\[ \frac{1}{s_k} \sum_{i=1}^{k^{\varepsilon_1}} f_i(X_i) \xrightarrow{L^1} 0 \tag{3.24} \]

as $k \to \infty$. Therefore, (3.21) and (3.24) imply that

\[ \frac{1}{s_k} \sum_{i=1}^{k} f_i(X_i) \xrightarrow{D} N(0,1), \tag{3.25} \]

which concludes the proof of Theorem 2.1.
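The truncation (3.16) used in the proof is, in effect, a clamp of $f(X)$ to $[-\varepsilon_k s_k, \varepsilon_k s_k]$ followed by a shift of $+\varepsilon_k s_k$, which places the result in $[0, 2\varepsilon_k s_k]$. A minimal sketch (variable names are ours, and the level 2.5 is an arbitrary illustrative value):

```python
# Truncation (3.16): clamp v = f(X) to [-level, level], then shift by +level,
# so the result Y satisfies 0 <= Y <= 2 * level.
def truncate(v, level):
    return max(-level, min(v, level)) + level

level = 2.5   # stands in for eps_k * s_k
values = [-10.0, -2.5, -1.0, 0.0, 1.7, 2.5, 99.0]
ys = [truncate(v, level) for v in values]
print(ys)  # every entry lies in [0, 2 * level] = [0, 5]
```

Values below $-\varepsilon_k s_k$ map to 0, values above $\varepsilon_k s_k$ map to $2\varepsilon_k s_k$, matching the three indicator terms in (3.16).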


Proof of Theorem 2.3. We define $Y_{k,j}$, $c_k$, and $\varepsilon_k$ as in the proof of Theorem 2.1. We obtain

\[ \Biggl| E\exp\Biggl( \frac{it}{c_k} \sum_{j=1}^{k-k^{\varepsilon_1}} \bigl(Y_{k,j} - E Y_{k,j}\bigr) \Biggr) - \prod_{j=1}^{k-k^{\varepsilon_1}} E\exp\Biggl( \frac{it}{c_k} \bigl(Y_{k,j} - E Y_{k,j}\bigr) \Biggr) \Biggr| \le c \sum_{j=2}^{k-k^{\varepsilon_1}} \binom{k - k^{\varepsilon_1}}{j} \bigl(1 - k^{-\varepsilon_1}\bigr)^{k - k^{\varepsilon_1} - j} \Biggl|\frac{t}{c_k}\Biggr|^j \le c\Biggl(1 - k^{-\varepsilon_1} + \Biggl|\frac{t}{c_k}\Biggr|\Biggr)^{k - k^{\varepsilon_1}}, \tag{3.26} \]

where $c$ is a positive constant. Since $s_k = \Theta(\sqrt{k})$ and $s_k/c_k \to 1$ as $k \to \infty$, the expression (3.26) tends to 0 as $k \to \infty$. The remaining part of the proof follows as in the proof of Theorem 2.1.

References

[1] P. Billingsley, Probability and Measure, John Wiley & Sons, New York, NY, USA, 3rd edition, 1995.
[2] A. DasGupta, Asymptotic Theory of Statistics and Probability, Springer, New York, NY, USA, 2008.
[3] T. S. Ferguson, A Course in Large Sample Theory, Chapman & Hall, New York, NY, USA, 1996.
[4] K. N. Berk, "A central limit theorem for m-dependent random variables with unbounded m," Annals of Probability, vol. 1, no. 2, pp. 352–354, 1973.
[5] T. C. Christofides and P. M. Mavrikiou, "Central limit theorem for dependent multidimensionally indexed random variables," Statistics & Probability Letters, vol. 63, no. 1, pp. 67–78, 2003.
[6] P. H. Diananda, "The central limit theorem for m-dependent variables," Proceedings of the Cambridge Philosophical Society, vol. 51, pp. 92–95, 1955.
[7] W. Hoeffding and H. Robbins, "The central limit theorem for dependent random variables," Duke Mathematical Journal, vol. 15, pp. 773–780, 1948.
[8] Y. Shang, "A central limit theorem for randomly indexed m-dependent random variables," Filomat, vol. 26, no. 4, pp. 713–717, 2012.
[9] R. Balan and I.-M. Zamfirescu, "Strong approximation for mixing sequences with infinite variance," Electronic Communications in Probability, vol. 11, pp. 11–23, 2006.
[10] J. Beran, "Statistical methods for data with long-range dependence (with discussion)," Statistical Science, vol. 7, no. 4, pp. 404–427, 1992.
[11] R. C. Bradley, "Basic properties of strong mixing conditions. A survey and some open questions," Probability Surveys, vol. 2, pp. 107–144, 2005.
[12] M. Kaminski, "Central limit theorem for certain classes of dependent random variables," Theory of Probability and its Applications, vol. 51, no. 2, pp. 335–342, 2007.
[13] E. Ould-Saïd and A. Tatachak, "Strong consistency rate for the kernel mode estimator under strong mixing hypothesis and left truncation," Communications in Statistics, vol. 38, no. 8–10, pp. 1154–1169, 2009.
[14] M. Rosenblatt, "A central limit theorem and a strong mixing condition," Proceedings of the National Academy of Sciences of the United States of America, vol. 42, no. 1, pp. 43–47, 1956.
[15] Y. Shang, "Central limit theorem for the sum of a random number of dependent random variables," Asian Journal of Mathematics and Statistics, vol. 4, no. 3, pp. 168–173, 2011.
