Abstract

We obtain the complete convergence for weighted sums of $\rho^{*}$-mixing random variables. Our result extends the result of Peligrad and Gut (1999) on unweighted averages to weighted averages under a mild condition on the weights. Our result also generalizes and sharpens the result of An and Yuan (2008).

1. Introduction

In many stochastic models, the assumption that random variables are independent is not plausible, so it is of interest to extend the concept of independence to weaker dependence structures. One of these dependence structures is $\rho^{*}$-mixing.

Let $\{X_{n},n\ge1\}$ be a sequence of random variables defined on a probability space $(\Omega,\mathcal{F},P)$, and let $\mathcal{F}_{S}=\sigma(X_{i},i\in S)$ denote the $\sigma$-algebra generated by the random variables $\{X_{i},i\in S\}$, $S\subset\mathbb{N}$. For any $S,T\subset\mathbb{N}$, define $\operatorname{dist}(S,T)=\inf\{|s-t|:s\in S,\,t\in T\}$. Given two $\sigma$-algebras $\mathcal{A}$ and $\mathcal{B}$ in $\mathcal{F}$, put

$$\rho(\mathcal{A},\mathcal{B})=\sup\left\{|\operatorname{corr}(X,Y)|:X\in L_{2}(\mathcal{A}),\,Y\in L_{2}(\mathcal{B})\right\},\tag{1.1}$$

where $\operatorname{corr}(X,Y)=(EXY-EXEY)/\sqrt{\operatorname{Var}X\cdot\operatorname{Var}Y}$. Define the $\rho^{*}$-mixing coefficients by

$$\rho^{*}_{n}=\sup\left\{\rho(\mathcal{F}_{S},\mathcal{F}_{T}):\text{finite subsets }S,T\subset\mathbb{N}\text{ with }\operatorname{dist}(S,T)\ge n\right\},\quad n\ge0.\tag{1.2}$$

Obviously, $0\le\rho^{*}_{n+1}\le\rho^{*}_{n}\le\rho^{*}_{0}\le1$. The sequence $\{X_{n},n\ge1\}$ is called $\rho^{*}$-mixing (or $\tilde{\rho}$-mixing) if there exists $k\in\mathbb{N}$ such that $\rho^{*}_{k}<1$. Note that if $\{X_{n},n\ge1\}$ is a sequence of independent random variables, then $\rho^{*}_{n}=0$ for all $n\ge1$.
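As a simple illustration (supplied here, and not used in the sequel), let $\{\varepsilon_{i},i\ge1\}$ be a sequence of independent and identically distributed random variables and let $X_{i}=f(\varepsilon_{i},\varepsilon_{i+1})$ for some measurable function $f$. If $\operatorname{dist}(S,T)\ge2$, then the families $\{(\varepsilon_{s},\varepsilon_{s+1}):s\in S\}$ and $\{(\varepsilon_{t},\varepsilon_{t+1}):t\in T\}$ involve disjoint collections of the $\varepsilon_{i}$, so $\mathcal{F}_{S}$ and $\mathcal{F}_{T}$ are independent and $\rho(\mathcal{F}_{S},\mathcal{F}_{T})=0$. Consequently

$$\rho^{*}_{n}=0\quad\text{for all }n\ge2,$$

and such a $1$-dependent sequence is $\rho^{*}$-mixing with $k=2$, even when $\rho^{*}_{1}>0$.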

A number of limit results for $\rho^{*}$-mixing sequences of random variables have been established by many authors. We refer to Bradley [1] for the central limit theorem, Bryc and Smoleński [2], Peligrad and Gut [3], and Utev and Peligrad [4] for moment inequalities, Gan [5], Kuczmaszewska [6], and Wu and Jiang [7] for almost sure convergence, and An and Yuan [8], Cai [9], Gan [5], Kuczmaszewska [10], Peligrad and Gut [3], and Zhu [11] for complete convergence.

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [12]. A sequence $\{U_{n},n\ge1\}$ of random variables converges completely to the constant $\theta$ if

$$\sum_{n=1}^{\infty}P\left(|U_{n}-\theta|>\varepsilon\right)<\infty\quad\text{for all }\varepsilon>0.\tag{1.3}$$

In view of the Borel-Cantelli lemma, this implies that $U_{n}\to\theta$ almost surely. Therefore, complete convergence is a very important tool in establishing the almost sure convergence of sums of random variables as well as of weighted sums of random variables. Hsu and Robbins [12] proved that the sequence of arithmetic means of independent and identically distributed random variables converges completely to the expected value if the variance of the summands is finite. Erdős [13] proved the converse. The Hsu-Robbins-Erdős result is a fundamental theorem in probability theory and has been generalized and extended in several directions by many authors. One of the most important generalizations is the Baum and Katz [14] strong law of large numbers.
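For completeness, the Borel-Cantelli step reads as follows (a routine verification, supplied here). If (1.3) holds, then for each $\varepsilon>0$ the Borel-Cantelli lemma gives $P(|U_{n}-\theta|>\varepsilon\ \text{infinitely often})=0$, and hence

$$P\left(\bigcup_{m=1}^{\infty}\left\{|U_{n}-\theta|>\frac{1}{m}\ \text{infinitely often}\right\}\right)=0,$$

which is exactly the statement that $U_{n}\to\theta$ almost surely.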

Theorem 1.1 (Baum and Katz [14]). Let $1/2<\alpha\le1$ and $p\alpha>1$. Let $\{X,X_{n},n\ge1\}$ be a sequence of independent and identically distributed random variables with $EX=0$. Then the following statements are equivalent:

(i) $E|X|^{p}<\infty$;

(ii) $\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}X_{i}\right|>\varepsilon n^{\alpha}\right)<\infty$ for all $\varepsilon>0$.
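For orientation (a specialization noted here, not part of the original text), taking $p=2$ and $\alpha=1$ in Theorem 1.1 gives: $EX^{2}<\infty$ if and only if

$$\sum_{n=1}^{\infty}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}X_{i}\right|>\varepsilon n\right)<\infty\quad\text{for all }\varepsilon>0,$$

which contains the Hsu-Robbins-Erdős theorem mentioned above, since $P(|\sum_{i=1}^{n}X_{i}|>\varepsilon n)\le P(\max_{1\le k\le n}|\sum_{i=1}^{k}X_{i}|>\varepsilon n)$.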
Peligrad and Gut [3] extended the result of Baum and Katz [14] to $\rho^{*}$-mixing random variables.

Theorem 1.2 (Peligrad and Gut [3]). Let $1/2<\alpha\le1$ and $p\alpha>1$. Let $\{X,X_{n},n\ge1\}$ be a sequence of identically distributed $\rho^{*}$-mixing random variables with $EX=0$. Then the following statements are equivalent:

(i) $E|X|^{p}<\infty$;

(ii) $\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}X_{i}\right|>\varepsilon n^{\alpha}\right)<\infty$ for all $\varepsilon>0$.
Cai [9] complemented Theorem 1.2 in the boundary case $p\alpha=1$.
Recently, An and Yuan [8] obtained a complete convergence result for weighted sums of identically distributed $\rho^{*}$-mixing random variables.

Theorem 1.3 (An and Yuan [8]). Let $1/2<\alpha\le1$ and $p\alpha>1$. Let $\{X,X_{n},n\ge1\}$ be a sequence of identically distributed $\rho^{*}$-mixing random variables with $EX=0$. Assume that $\{a_{ni},1\le i\le n,n\ge1\}$ is an array of real numbers satisfying

$$\sum_{i=1}^{n}|a_{ni}|^{p}=O(n^{\delta})\quad\text{for some }0<\delta<1,\tag{1.4}$$

$$\sharp\{1\le i\le n:|a_{ni}|^{p}>1/(k+1)\}\ge ne^{-1/k}\quad\text{for all }n\ge1,\ k\ge1.\tag{1.5}$$

Then the following statements are equivalent:

(i) $E|X|^{p}<\infty$;

(ii) $\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon n^{\alpha}\right)<\infty$ for all $\varepsilon>0$.

Note that the result of An and Yuan [8] is not an extension of Peligrad and Gut's [3] result, since condition (1.4) does not hold for the array with $a_{ni}=1$ for $1\le i\le n$ and $n\ge1$. An and Yuan [8] proved the implication (i)$\Rightarrow$(ii) under condition (1.4), and proved the converse under conditions (1.4) and (1.5). However, an array satisfying both (1.4) and (1.5) does not exist. Noting that

$$\sharp\{1\le i\le n:|a_{ni}|^{p}>1/(k+1)\}\le(k+1)\sum_{i=1}^{n}|a_{ni}|^{p},$$

we have by (1.4) and (1.5) that $ne^{-1/k}\le C(k+1)n^{\delta}$ for all $n\ge1$ and $k\ge1$. But this does not hold when $k$ is fixed and $n$ is large enough, since $0<\delta<1$.

In this paper, we obtain a new complete convergence result for weighted sums of identically distributed $\rho^{*}$-mixing random variables. Our result extends the result of Peligrad and Gut [3], and generalizes and sharpens the result of An and Yuan [8].

Throughout this paper, the symbol $C$ denotes a positive constant which is not necessarily the same one in each appearance, $[x]$ denotes the integer part of $x$, $I(A)$ denotes the indicator function of the event $A$, and $\sharp B$ denotes the number of elements of the set $B$.

2. Main Result

To prove our main result, we need the following lemma which is a Rosenthal-type inequality for $\rho^{*}$-mixing random variables.

Lemma 2.1 (Utev and Peligrad [4]). Let $\{X_{n},n\ge1\}$ be a sequence of $\rho^{*}$-mixing random variables with $EX_{n}=0$ and $E|X_{n}|^{s}<\infty$ for some $s\ge2$ and all $n\ge1$. Then there exists a constant $C$ depending only on $s$, $k$, and $\rho^{*}_{k}$ such that

$$E\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_{i}\right|^{s}\right)\le C\left\{\sum_{i=1}^{n}E|X_{i}|^{s}+\left(\sum_{i=1}^{n}EX_{i}^{2}\right)^{s/2}\right\}\tag{2.1}$$

for any $n\ge1$, where $k$ is a positive integer such that $\rho^{*}_{k}<1$.
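As a sanity check (an observation added here), the special case $s=2$ of (2.1) is a Kolmogorov-type maximal inequality,

$$E\left(\max_{1\le j\le n}\left|\sum_{i=1}^{j}X_{i}\right|^{2}\right)\le C\sum_{i=1}^{n}EX_{i}^{2},$$

since the two terms on the right-hand side of (2.1) coincide when $s=2$; this is the form that will be used in the case $p<2$ of the proofs below.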

Now we state the main result of this paper.

Theorem 2.2. Let $1/2<\alpha\le1$ and $p\alpha>1$. Let $\{X,X_{n},n\ge1\}$ be a sequence of identically distributed $\rho^{*}$-mixing random variables with $EX=0$. Assume that $\{a_{ni},1\le i\le n,n\ge1\}$ is an array of real numbers satisfying

$$\sum_{i=1}^{n}|a_{ni}|^{q}=O(n)\quad\text{for some }q>p.\tag{2.2}$$

If $E|X|^{p}<\infty$, then

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon n^{\alpha}\right)<\infty\quad\text{for all }\varepsilon>0.\tag{2.3}$$

Conversely, if (2.3) holds for any array $\{a_{ni},1\le i\le n,n\ge1\}$ satisfying (2.2), then $E|X|^{p}<\infty$.
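As an example of admissible weights (chosen here for illustration only), fix $r>q>p$ and take $a_{ni}=(n/i)^{1/r}$ for $1\le i\le n$. Then

$$\sum_{i=1}^{n}|a_{ni}|^{q}=n^{q/r}\sum_{i=1}^{n}i^{-q/r}\le n^{q/r}\int_{0}^{n}x^{-q/r}\,dx=\frac{r}{r-q}\,n,$$

so (2.2) holds, although $\max_{1\le i\le n}|a_{ni}|=n^{1/r}\to\infty$; thus Theorem 2.2 covers genuinely unbounded weight arrays.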

To prove Theorem 2.2, we first prove the following lemma, which gives the sufficiency part of Theorem 2.2 when the array is bounded.

Lemma 2.3. Let $\{X,X_{n},n\ge1\}$ be a sequence of identically distributed $\rho^{*}$-mixing random variables with $EX=0$ and $E|X|^{p}<\infty$ for some $p$ and $\alpha$ satisfying $1/2<\alpha\le1$ and $p\alpha>1$. Assume that $\{a_{ni},1\le i\le n,n\ge1\}$ is an array of real numbers satisfying $|a_{ni}|\le1$ for $1\le i\le n$ and $n\ge1$. Then (2.3) holds.

Proof. For $1\le i\le n$ and $n\ge1$, define $Y_{ni}=X_{i}I(|X_{i}|\le n^{\alpha})$. Since $EX=0$ and $E|X|^{p}<\infty$, we have that

$$n^{-\alpha}\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}EY_{ni}\right|\le n^{1-\alpha}E|X|I(|X|>n^{\alpha})\le n^{1-p\alpha}E|X|^{p}I(|X|>n^{\alpha})\to0\tag{2.4}$$

as $n\to\infty$, since $p\alpha>1$. Hence for $n$ large enough, we have $\max_{1\le k\le n}|\sum_{i=1}^{k}a_{ni}EY_{ni}|\le\varepsilon n^{\alpha}/2$. It follows that

$$P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon n^{\alpha}\right)\le nP(|X|>n^{\alpha})+P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right|>\varepsilon n^{\alpha}/2\right)\tag{2.5}$$

for all large $n$ (finitely many terms do not affect the convergence of (2.3)). Noting that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\cdot nP(|X|>n^{\alpha})\le CE|X|^{p}<\infty,\tag{2.6}$$

we have only to estimate the second term on the right-hand side of (2.5). Thus, it remains to show that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right|>\varepsilon n^{\alpha}/2\right)<\infty.\tag{2.7}$$
We have by Markov's inequality and Lemma 2.1 that for any $s\ge2$,

$$\begin{aligned}
P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right|>\varepsilon n^{\alpha}/2\right)
&\le Cn^{-s\alpha}E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right|^{s}\right)\\
&\le Cn^{-s\alpha}\left\{\sum_{i=1}^{n}E|a_{ni}Y_{ni}|^{s}+\left(\sum_{i=1}^{n}E(a_{ni}Y_{ni})^{2}\right)^{s/2}\right\}\\
&\le Cn^{-s\alpha}\left\{nE|X|^{s}I(|X|\le n^{\alpha})+\left(nEX^{2}I(|X|\le n^{\alpha})\right)^{s/2}\right\}.
\end{aligned}\tag{2.8}$$

Here Lemma 2.1 applies since $\{a_{ni}(Y_{ni}-EY_{ni}),1\le i\le n\}$ is a mean-zero sequence of measurable functions of the individual $X_{i}$ and hence is still $\rho^{*}$-mixing. In the last inequality, we used the fact that $|a_{ni}|\le1$ for $1\le i\le n$ and $n\ge1$.
If $p\ge2$, then we take $s$ large enough such that $s>\max\{p,\,2(p\alpha-1)/(2\alpha-1)\}$. Since $p\alpha-2-s(\alpha-1/2)<-1$ and $EX^{2}I(|X|\le n^{\alpha})\le EX^{2}<\infty$, we get

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\cdot n^{-s\alpha}\left(nEX^{2}I(|X|\le n^{\alpha})\right)^{s/2}\le C\sum_{n=1}^{\infty}n^{p\alpha-2-s(\alpha-1/2)}<\infty.$$

Since $s>p$, we also get

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\cdot n^{-s\alpha}\cdot nE|X|^{s}I(|X|\le n^{\alpha})\le CE|X|^{p}<\infty,\tag{2.9}$$

so that (2.7) holds for $p\ge2$. If $p<2$, then we take $s=2$. Since $s=2>p$, (2.9) still holds, and the two terms on the right-hand side of (2.8) coincide, and so (2.7) also holds for $p<2$. This completes the proof.
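For completeness, here is one way to check (2.9) (a standard slicing computation, supplied here). Writing $E|X|^{s}I(|X|\le n^{\alpha})=\sum_{m=1}^{n}E|X|^{s}I((m-1)^{\alpha}<|X|\le m^{\alpha})$ and interchanging the order of summation,

$$\sum_{n=1}^{\infty}n^{p\alpha-1-s\alpha}E|X|^{s}I(|X|\le n^{\alpha})=\sum_{m=1}^{\infty}E|X|^{s}I\left((m-1)^{\alpha}<|X|\le m^{\alpha}\right)\sum_{n=m}^{\infty}n^{p\alpha-1-s\alpha}$$

$$\le C\sum_{m=1}^{\infty}m^{(p-s)\alpha}E|X|^{s}I\left((m-1)^{\alpha}<|X|\le m^{\alpha}\right)\le C\sum_{m=1}^{\infty}E|X|^{p}I\left((m-1)^{\alpha}<|X|\le m^{\alpha}\right)\le CE|X|^{p},$$

where $s>p$ guarantees $p\alpha-1-s\alpha<-1$ (so the inner sum is at most $Cm^{(p-s)\alpha}$) and $|X|^{s}\le m^{(s-p)\alpha}|X|^{p}$ on the $m$th slice. The same computation with $s$ replaced by another exponent exceeding $p$ is used again in the proof of Lemma 2.4 below.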

We next prove the sufficiency part of Theorem 2.2 when the array is unbounded.

Lemma 2.4. Let $\{X,X_{n},n\ge1\}$ be a sequence of identically distributed $\rho^{*}$-mixing random variables with $EX=0$ and $E|X|^{p}<\infty$ for some $p$ and $\alpha$ satisfying $1/2<\alpha\le1$ and $p\alpha>1$. Assume that $\{a_{ni},1\le i\le n,n\ge1\}$ is an array of real numbers satisfying $a_{ni}=0$ or $|a_{ni}|>1$ for $1\le i\le n$ and $n\ge1$, and

$$\sum_{i=1}^{n}|a_{ni}|^{q}=O(n)\quad\text{for some }q>p.\tag{2.10}$$

Then (2.3) holds.

Proof. If $p<2$, then we can take $q'$ such that $p<q'<\min\{q,2\}$. Since $a_{ni}=0$ or $|a_{ni}|>1$, we have that $\sum_{i=1}^{n}|a_{ni}|^{q'}\le\sum_{i=1}^{n}|a_{ni}|^{q}=O(n)$. Thus we may assume that (2.10) holds for some $q<2$ when $p<2$.
Let $Y_{ni}=X_{i}I(|a_{ni}X_{i}|\le n^{\alpha})$ for $1\le i\le n$ and $n\ge1$. In view of $EX=0$ and $E|X|^{p}<\infty$, we get

$$n^{-\alpha}\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}EY_{ni}\right|\le n^{-\alpha}\sum_{i=1}^{n}|a_{ni}|E|X|I(|a_{ni}X|>n^{\alpha})\le n^{-p\alpha}E|X|^{p}\sum_{i=1}^{n}|a_{ni}|^{p}\le Cn^{1-p\alpha}E|X|^{p}\to0\tag{2.11}$$

since $p\alpha>1$ and, by (2.10) and the fact that $a_{ni}=0$ or $|a_{ni}|>1$, $\sum_{i=1}^{n}|a_{ni}|^{p}\le\sum_{i=1}^{n}|a_{ni}|^{q}=O(n)$. Hence for $n$ large enough, we have that $\max_{1\le k\le n}|\sum_{i=1}^{k}a_{ni}EY_{ni}|\le\varepsilon n^{\alpha}/2$. It follows that

$$P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon n^{\alpha}\right)\le\sum_{i=1}^{n}P\left(|a_{ni}X_{i}|>n^{\alpha}\right)+P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right|>\varepsilon n^{\alpha}/2\right)\tag{2.12}$$

for all large $n$. For $j\ge1$ and $n\ge1$, let $I_{nj}=\{1\le i\le n:j<|a_{ni}|^{q}\le j+1\}$. Then $\{I_{nj},j\ge1\}$ are disjoint, and $\bigcup_{j\ge1}I_{nj}=\{1\le i\le n:a_{ni}\ne0\}$ for $n\ge1$, since $a_{ni}=0$ or $|a_{ni}|>1$. For convenience of notation, let $A\ge1$ be a constant such that $\sum_{i=1}^{n}|a_{ni}|^{q}\le An$ for all $n\ge1$. Since $a_{ni}=0$ or $|a_{ni}|>1$, and $\sum_{i=1}^{n}|a_{ni}|^{q}\le An$, we have $\sum_{j\ge1}j\,\sharp I_{nj}\le\sum_{i=1}^{n}|a_{ni}|^{q}\le An$. It follows that

$$\sharp\{1\le i\le n:|a_{ni}|>x\}\le\min\{n,\,Anx^{-q}\}\quad\text{for all }x\ge1.\tag{2.13}$$

Since $E|X|^{p}<\infty$ and $q>p$, we obtain from (2.13) (taking $x=n^{\alpha}/|X|$ when $|X|\le n^{\alpha}$, and using the trivial bound $n$ when $|X|>n^{\alpha}$) that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\sum_{i=1}^{n}P\left(|a_{ni}X_{i}|>n^{\alpha}\right)\le A\sum_{n=1}^{\infty}n^{p\alpha-1-q\alpha}E|X|^{q}I(|X|\le n^{\alpha})+\sum_{n=1}^{\infty}n^{p\alpha-1}P(|X|>n^{\alpha}).\tag{2.14}$$

We also obtain

$$\sum_{n=1}^{\infty}n^{p\alpha-1-q\alpha}E|X|^{q}I(|X|\le n^{\alpha})\le CE|X|^{p}<\infty,\qquad\sum_{n=1}^{\infty}n^{p\alpha-1}P(|X|>n^{\alpha})\le CE|X|^{p}<\infty.\tag{2.15}$$

From (2.14) and (2.15), we have $\sum_{n=1}^{\infty}n^{p\alpha-2}\sum_{i=1}^{n}P(|a_{ni}X_{i}|>n^{\alpha})<\infty$. Thus, by (2.12), it remains to show that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right|>\varepsilon n^{\alpha}/2\right)<\infty.\tag{2.16}$$
We have by Markov's inequality and Lemma 2.1 that for any $s\ge2$,

$$P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}(Y_{ni}-EY_{ni})\right|>\varepsilon n^{\alpha}/2\right)\le Cn^{-s\alpha}\left\{\sum_{i=1}^{n}E|a_{ni}Y_{ni}|^{s}+\left(\sum_{i=1}^{n}E(a_{ni}Y_{ni})^{2}\right)^{s/2}\right\}.\tag{2.17}$$

Observe that, by (2.13), $\sum_{i=1}^{n}|a_{ni}|^{s}I(|a_{ni}|\le y)\le CAny^{s-q}$ for $s>q$ and $y\ge1$. So for $s>q$ and $n\ge1$,

$$\sum_{i=1}^{n}E|a_{ni}Y_{ni}|^{s}=E\left[|X|^{s}\sum_{i=1}^{n}|a_{ni}|^{s}I\left(|a_{ni}|\le n^{\alpha}/|X|\right)\right]\le CAn^{1+(s-q)\alpha}E|X|^{q}I(|X|\le n^{\alpha}).\tag{2.18}$$
For the two terms on the right-hand side of (2.17), we proceed with two cases.
(i) If $p\ge2$, then we take $s$ large enough such that $s>\max\{q,\,2(p\alpha-1)/(2\alpha-1)\}$. Then we obtain that

$$\sum_{n=1}^{\infty}n^{p\alpha-2-s\alpha}\left(\sum_{i=1}^{n}E(a_{ni}Y_{ni})^{2}\right)^{s/2}\le\sum_{n=1}^{\infty}n^{p\alpha-2-s\alpha}\left(EX^{2}\sum_{i=1}^{n}a_{ni}^{2}\right)^{s/2}\le\sum_{n=1}^{\infty}n^{p\alpha-2-s\alpha}\left(EX^{2}\sum_{i=1}^{n}|a_{ni}|^{q}\right)^{s/2}\le C\sum_{n=1}^{\infty}n^{p\alpha-2-s(\alpha-1/2)}<\infty.\tag{2.19}$$

The second inequality follows by the fact that $a_{ni}=0$ or $|a_{ni}|>1$, so that $a_{ni}^{2}\le|a_{ni}|^{q}$ (here $q>p\ge2$).
Noting that $s>q$, we also obtain from (2.18) that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\cdot n^{-s\alpha}\sum_{i=1}^{n}E|a_{ni}Y_{ni}|^{s}\le CA\sum_{n=1}^{\infty}n^{p\alpha-1-q\alpha}E|X|^{q}I(|X|\le n^{\alpha}).\tag{2.20}$$

Since $q>p$ and $E|X|^{p}<\infty$, we have that the right-hand side of (2.20) is finite by (2.15). Since (2.19) and (2.20) hold, we also have that (2.16) holds for $p\ge2$ by (2.17). From (2.12) and (2.14)-(2.16), we have (2.3) for $p\ge2$.
(ii) If $p<2$, then we take $s=2$. As noted above, we may assume that (2.10) holds for some $q<2$. Since $s=2>q$, as in the case (i), we have by (2.18) with $s=2$ that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}\cdot n^{-2\alpha}\sum_{i=1}^{n}E(a_{ni}Y_{ni})^{2}\le CA\sum_{n=1}^{\infty}n^{p\alpha-1-q\alpha}E|X|^{q}I(|X|\le n^{\alpha})\le CE|X|^{p}<\infty,$$

and the two terms on the right-hand side of (2.17) coincide when $s=2$, so that (2.16), and hence (2.3), also holds for $p<2$. This completes the proof.
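For completeness, the counting estimate used before (2.18) can be verified by a routine partial integration (supplied here): for $s>q$ and $y\ge1$, writing the sum as an integral against the counting function and applying (2.13),

$$\sum_{i=1}^{n}|a_{ni}|^{s}I(|a_{ni}|\le y)=s\int_{0}^{\infty}x^{s-1}\,\sharp\{1\le i\le n:x<|a_{ni}|\le y\}\,dx\le sn\int_{0}^{1}x^{s-1}\,dx+sAn\int_{1}^{y}x^{s-q-1}\,dx\le\left(1+\frac{s}{s-q}\right)Any^{s-q},$$

where $A\ge1$ and $y\ge1$ were used to absorb the first term.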

We now prove Theorem 2.2 by using Lemmas 2.3 and 2.4.

Proof of Theorem 2.2. Sufficiency. Without loss of generality, we may assume that $\sum_{i=1}^{n}|a_{ni}|^{q}\le n$ for some $q>p$. For $1\le i\le n$ and $n\ge1$, let $b_{ni}=a_{ni}$ if $|a_{ni}|\le1$, $b_{ni}=0$ otherwise, and let $c_{ni}=a_{ni}$ if $|a_{ni}|>1$, $c_{ni}=0$ otherwise. Then $a_{ni}=b_{ni}+c_{ni}$. It follows that

$$P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}a_{ni}X_{i}\right|>\varepsilon n^{\alpha}\right)\le P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}b_{ni}X_{i}\right|>\varepsilon n^{\alpha}/2\right)+P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}c_{ni}X_{i}\right|>\varepsilon n^{\alpha}/2\right).$$

By Lemma 2.3, we have $\sum_{n=1}^{\infty}n^{p\alpha-2}P(\max_{1\le k\le n}|\sum_{i=1}^{k}b_{ni}X_{i}|>\varepsilon n^{\alpha}/2)<\infty$. By Lemma 2.4, we have $\sum_{n=1}^{\infty}n^{p\alpha-2}P(\max_{1\le k\le n}|\sum_{i=1}^{k}c_{ni}X_{i}|>\varepsilon n^{\alpha}/2)<\infty$. Hence (2.3) holds.

Necessity. Choose, for each $n\ge1$, $a_{ni}=1$ for $1\le i\le n$. Then $\{a_{ni}\}$ satisfies (2.2). By (2.3), we obtain that

$$\sum_{n=1}^{\infty}n^{p\alpha-2}P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}X_{i}\right|>\varepsilon n^{\alpha}\right)<\infty,$$

which implies that $\sum_{n=1}^{\infty}n^{p\alpha-2}P(\max_{1\le k\le n}|X_{k}|>2\varepsilon n^{\alpha})<\infty$. Observe that $\max_{1\le k\le n}|X_{k}|\le2\max_{1\le k\le n}|\sum_{i=1}^{k}X_{i}|$. Hence we have that for any $\varepsilon>0$, $n^{p\alpha-1}P(\max_{1\le k\le n}|X_{k}|>2\varepsilon n^{\alpha})\to0$ as $n\to\infty$, and so $nP(|X|>2\varepsilon n^{\alpha})\to0$ as $n\to\infty$. The rest of the proof is the same as that of Peligrad and Gut [3] and is omitted.
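For completeness, the passage from the maximal series to the tail condition can be sketched as follows (a standard condensation argument, supplied here; it is carried out in detail in Peligrad and Gut [3]). For $2^{m}\le n<2^{m+1}$ we have $n^{p\alpha-2}\ge c\,2^{m(p\alpha-2)}$ with a constant $c>0$ depending only on $p\alpha$, and the maximum over $1\le k\le n$ dominates the maximum over $1\le k\le2^{m}$ while $2\varepsilon n^{\alpha}\le2\varepsilon\,2^{(m+1)\alpha}$; hence

$$\sum_{n=2^{m}}^{2^{m+1}-1}n^{p\alpha-2}P\left(\max_{1\le k\le n}|X_{k}|>2\varepsilon n^{\alpha}\right)\ge c\,2^{m(p\alpha-1)}P\left(\max_{1\le k\le2^{m}}|X_{k}|>2^{1+\alpha}\varepsilon\,2^{m\alpha}\right).$$

Since the series converges, the right-hand side tends to $0$ as $m\to\infty$; as $\varepsilon>0$ is arbitrary, $n^{p\alpha-1}P(\max_{1\le k\le n}|X_{k}|>2\varepsilon n^{\alpha})\to0$ along all $n$.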

Remark 2.5. Taking $a_{ni}=1$ for $1\le i\le n$ and $n\ge1$, we can immediately get Theorem 1.2 from Theorem 2.2. If the array $\{a_{ni}\}$ satisfies (1.4), then it satisfies (2.2): taking $q=p/\delta$, so that $q>p$ and $\delta q/p=1$, we have

$$\sum_{i=1}^{n}|a_{ni}|^{q}\le\left(\sum_{i=1}^{n}|a_{ni}|^{p}\right)^{q/p}=O\left(n^{\delta q/p}\right)=O(n).$$

So the implication (i)$\Rightarrow$(ii) of Theorem 1.3 follows from Theorem 2.2. As noted after Theorem 1.3, the implication (ii)$\Rightarrow$(i) of Theorem 1.3 is not true. Therefore, our result extends the result of Peligrad and Gut [3] to a weighted average, and generalizes and sharpens the result of An and Yuan [8].
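As a concrete instance (an illustration added here), the array $a_{ni}=i^{-1/p}$, $1\le i\le n$, satisfies (1.4), since

$$\sum_{i=1}^{n}|a_{ni}|^{p}=\sum_{i=1}^{n}i^{-1}\le1+\log n=O(n^{\delta})\quad\text{for every }0<\delta<1,$$

and hence, by the computation above, it satisfies (2.2) with $q=p/\delta$; on the other hand, the constant array $a_{ni}=1$ satisfies (2.2) but not (1.4).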

Acknowledgments

The author is grateful to the editor Leonid Shaikhet and the referees for the helpful comments and suggestions that considerably improved the presentation of this paper. This work was supported by the Korea Science and Engineering Foundation (KOSEF) Grant funded by the Korea government (MOST) (no. R01-2007-000-20053-0).