On adjusted method of moments estimators on uniform distribution samples


METRON 2012, vol. LXX, n. 1, pp. 27-40

AHMAD REZA SOLTANI – KAMEL ABDOLLAHNEZHAD

On adjusted method of moments estimators on uniform distribution samples

Summary - In this article, we prove that with probability one the k-th order upper random Stieltjes sum defined on a random sample from a distribution supported by a finite interval converges to the corresponding k-th moment of the distribution. When the underlying distribution is uniform U(0, θ), we prove that the adjusted method of moments (AMM) estimator, introduced by Soltani and Homei (2009a), is indeed strongly consistent and asymptotically unbiased for the parameter θ. We demonstrate that the asymptotic distribution of the AMM estimator is not normal, and introduce a pivotal variable for inference on θ using the AMM estimator. The upshot is that the mean square relative error for θ² using the square of the AMM estimator is asymptotically 1/2 of the corresponding term using the MLE.

Key Words - Parametric inference; Adjusted method of moments; Strong consistency; Simulation.

1. Introduction

Let us consider the adjusted method of moments (AMM), given by Soltani and Homei (2009a). Let X_1, ..., X_n be a random sample from a population with an arbitrary distribution function F, determined by certain unknown parameters. Since μ′_k = EX^k = ∫_Ω X^k dP = ∫ x^k dF(x), classical integration theory reveals that μ′_k can be approximated by

$$\tilde{X}^k_F(n) = \sum_{j=1}^{n} X_{(j)}^k \left[ F(X_{(j)}) - F(X_{(j-1)}) \right], \qquad k = 1, 2, \ldots \tag{1.1}$$

where F(X_{(0)}) = 0, F(X_{(n+1)}) = 1, and X_{(1)}, ..., X_{(n)} are the order statistics of X_1, ..., X_n. The expression given by (1.1) is a certain randomly weighted average; see Soltani and Homei (2009b). We refer to the expression given in (1.1) as the upper random Stieltjes sum (URSS). We note that if in (1.1) F is replaced by its empirical distribution function F̂, then F̂(X_{(i)}) − F̂(X_{(i−1)}) = 1/n, and X̃^k_F(n) becomes the k-th sample moment X̄^k = (1/n)∑_{j=1}^{n} X_j^k. Naturally, X̃^k_F(n) gives a more accurate value for E(X^k) than X̄^k.

(This research is supported by the Kuwait University Research Administration under Research Project SS-01-11. Received January 2011 and revised December 2011.)
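As an illustrative sketch (ours, not from the paper), the URSS in (1.1) can be computed numerically and compared with the ordinary sample moment when F is known; the function name `urss` and the uniform example are our own choices for illustration.

```python
import numpy as np

def urss(sample, cdf, k):
    """Upper random Stieltjes sum (1.1):
    sum over j of X_(j)^k * [F(X_(j)) - F(X_(j-1))], with F(X_(0)) = 0."""
    x = np.sort(sample)
    f = cdf(x)
    weights = np.diff(f, prepend=0.0)  # CDF increments F(X_(j)) - F(X_(j-1))
    return np.sum(x**k * weights)

rng = np.random.default_rng(0)
theta, n, k = 1.0, 2000, 2
u = rng.uniform(0, theta, n)
cdf = lambda x: np.clip(x / theta, 0.0, 1.0)  # F(x) = x/theta on [0, theta]

mu_k = theta**k / (k + 1)  # true k-th moment of U(0, theta)
print(urss(u, cdf, k), (u**k).mean(), mu_k)
```

For a uniform sample both estimates are close to the true moment, but the URSS typically lands noticeably closer, in line with the comparison made above.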

AMM is the same as the method of moments (MM), but the k-th sample moments are replaced by the corresponding URSS. The idea is to use X̃^k_F(n) instead of X̄^k in the MM. Thus, if the distribution F has, say, J unknown parameters, then the parameters can be estimated, in the AMM, by solving the following system of nonlinear equations:

$$\mu'_k = \tilde{X}^k_F(n), \qquad k = 1, \ldots, J. \tag{1.2}$$

The resulting equations in the AMM are more complicated than those in the MM. This is not a serious disadvantage when the parameter estimates are obtained numerically. Since, compared to the sample moments, the URSS gives more accurate values for the population moments, the AMM is expected to give more accurate parameter estimates. It is hard in general to derive analytical expressions for AMM estimators, but it is still possible to provide inference and establish statistical properties for them. In this article we apply the AMM to estimate the parameter θ whenever the sample X_1, ..., X_n comes from a uniform distribution on (0, θ). We derive the AMM estimator for θ and prove its consistency. We also make comparisons between the AMM, ML and MM estimators for θ, and highlight the superiority of the AMM estimator. Indeed, we prove that n²E([θ̂²_AMM − θ²]/θ²)² → 4, while n²E([θ̂²_ML − θ²]/θ²)² → 8, as n → ∞.

Through simulation, we show that the limiting distribution of the normalized AMM estimator is not standard normal. We have not been successful in obtaining the asymptotic distribution of the AMM estimator for θ. Nevertheless, we present a pivotal variable for θ that enables us to do inference on θ using the AMM estimator, through simulation. We address this issue in more detail in the conclusion section.

We learned from a referee about the Hosking L-moment estimation procedure (1990), where certain linear combinations of the sample order statistics and their expected values are employed for distribution identification, parameter estimation and hypothesis testing. In the conclusion section we discuss similarities and differences between the Hosking L-moment and AMM procedures.

This article is organized as follows. In Section 2 we prove that the k-th order URSS converges to the corresponding k-th moment of the distribution with probability one (w.p.1). We also establish the strong consistency and asymptotic unbiasedness of the AMM estimator for the parameter θ of the uniform distribution, and investigate the limiting distribution of the AMM estimator in


Section 3. In Section 4 we compare the performance of the AMM estimator with that of the ML and MM estimators. In Section 5, based on the AMM estimator for θ, we construct, through simulation, a confidence interval for θ and also carry out hypothesis tests, then compare the corresponding statistical measures, such as coverage probability, length, size and power. Certain side issues concerning AMM estimators are addressed in the conclusion section.

2. Limiting Behavior of URSS

Let X_1, ..., X_n be i.i.d. random variables with a common distribution F on [a, b], and let X_{(1)}, ..., X_{(n)} be the order statistics corresponding to X_1, ..., X_n. Also let

$$\tilde{X}^k(n+1) = \sum_{j=1}^{n+1} X_{(j)}^k \left( F(X_{(j)}) - F(X_{(j-1)}) \right), \qquad k = 1, 2, \ldots \tag{2.1}$$

and let

$$M_X(n) = \max\{X_{(j)} - X_{(j-1)},\ j = 1, 2, \ldots, n+1\}$$

represent the maximum spacing, where X_{(0)} = a and X_{(n+1)} = b.

Let us show that X̃^k(n+1) → μ′_k, w.p.1, as n → ∞. We present our approach based on the theory of the Stieltjes integral, Rudin (1979). Recall that μ′_k = ∫ x^k dF(x) = E(X_1^k).

Lemma 2.1. Let X̃^k(n+1) be given by (2.1). Assume ∫_a^b x^k dF(x) < ∞, and M_X(n) → 0, with probability one (w.p.1), as n → ∞. Then X̃^k(n+1) → μ′_k, w.p.1, as n → ∞.

Proof. Let A ⊂ Ω, P(A) = 1, be such that for ω ∈ A, M_X(n)(ω) → 0 as n → ∞. Fix ω ∈ A. Since ∫_a^b x^k dF(x) < ∞, for every ε > 0 there is a partition P_n = {x_0, x_1, ..., x_{n+1}} such that

$$\left| \sum_{j=1}^{n+1} t_j^k \left[ F(x_j) - F(x_{j-1}) \right] - \int_a^b x^k \, dF(x) \right| < \varepsilon$$

for every choice of t_1, t_2, ..., t_{n+1} such that t_j ∈ [x_{j−1}, x_j]. This is indeed true for any refinement of P_n.

Let δ = min{x_j − x_{j−1}; j = 1, 2, ..., n+1}. Choose n such that M_X(n)(ω) < δ, and a partition P_n giving the above estimate. Then every subinterval [x_{j−1}, x_j] contains some X_{j_m}(ω) ∈ [x_{j−1}, x_j], because otherwise M_X(n)(ω) > x_j − x_{j−1} > δ. Now let P_{n,m} = {X_{j_m}(ω); j = 1, 2, ..., n+1}; then P_{n,m} is a refinement of P_n. Therefore

$$\left| \sum_{j=1}^{n+1} X_{j_m}^k(\omega) \left[ F(X_{(j_m)}) - F(X_{(j_{m-1})}) \right] - \int_a^b x^k \, dF(x) \right| < \varepsilon.$$

The proof is complete.

Let us assume U_1, ..., U_n are i.i.d. with a common uniform (0, θ) distribution, and let U_{(1)}, ..., U_{(n)} be the order statistics corresponding to U_1, ..., U_n. Also let

$$\tilde{X}^k_U(n) = \sum_{j=1}^{n} U_{(j)}^k \left( F(U_{(j)}) - F(U_{(j-1)}) \right), \tag{2.2}$$

and

$$M_U(n) = \max\{U_{(j)} - U_{(j-1)},\ j = 1, 2, \ldots, n+1\}, \tag{2.3}$$

where U_{(0)} = 0 and U_{(n+1)} = θ.

Our main aim in this section is to show that X̃^k_U(n) → μ′_k, w.p.1, as n → ∞, and also that E X̃^k_U(n) → μ′_k.

Lemma 2.2. Assume M_U(n) is given by (2.3). Then M_U(n) → 0, w.p.1, as n → ∞.

Proof. It was shown that the maximum spacing tends to zero when the sample is taken from the uniform distribution on (0, 1); see Devroye (1981). Then obviously M_U(n) → 0, w.p.1, as n → ∞.

We are in a position to state the main result of this section.

Theorem 2.1. Assume X̃^k_U(n) is given by (2.2). Then

a) X̃^k_U(n) → μ′_k, w.p.1, as n → ∞;

b) X̃^k_U(n) − X̄^k → 0, w.p.1, as n → ∞;

c) E X̃^k_U(n) → μ′_k, as n → ∞.

Proof. It follows from Lemmas 2.1 and 2.2 that X̃^k_U(n+1) → μ′_k, w.p.1, as n → ∞. Since 1 − F(U_{(n)}) → 0 as n → ∞, we obtain that X̃^k_U(n) → μ′_k, w.p.1, as n → ∞. Part (b) is also immediate, as X̃^k_U(n) → μ′_k w.p.1 and X̄^k → μ′_k w.p.1. Part (c) follows from the relation

$$E\tilde{X}^k_U(n) = \left( \frac{1}{k+1} - \frac{\Gamma(n+1)\,\Gamma(k+1)}{\Gamma(n+k+2)} \right)\theta^k = \mu'_k - \frac{\Gamma(n+1)\,\Gamma(k+1)}{\Gamma(n+k+2)}\,\theta^k, \qquad n, k = 1, 2, \ldots, \tag{2.4}$$

and the fact that Γ(n+1)Γ(k+1)/Γ(n+k+2) → 0 as n → ∞.

The term Γ(n+1)Γ(k+1)/Γ(n+k+2) is the rate of convergence in Theorem 2.1(c). In the special case k = 1, this rate reduces to 1/((n+1)(n+2)).
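A quick Monte Carlo check of (2.4) can be sketched as follows (our illustration, not part of the paper): the average of simulated URSS values should match the exact mean (1/(k+1) − Γ(n+1)Γ(k+1)/Γ(n+k+2))θ^k.

```python
import numpy as np
from math import exp, lgamma

rng = np.random.default_rng(1)
theta, n, k, reps = 2.0, 10, 1, 200_000

# Exact mean of the URSS from (2.4), using log-gamma for numerical stability
exact = (1 / (k + 1) - exp(lgamma(n + 1) + lgamma(k + 1) - lgamma(n + k + 2))) * theta**k

u = np.sort(rng.uniform(0, theta, (reps, n)), axis=1)
w = np.diff(u, axis=1, prepend=0.0) / theta   # CDF increments, F(u) = u/theta
xt = np.sum(u**k * w, axis=1)                 # one URSS value per replication
print(xt.mean(), exact)
```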

Amplifications. The following alternative approach, which is more probabilistic and relaxes the restriction that the population distribution have compact support, was suggested by a referee. The approach is outlined below.

(a) For a random sample of size n, X_1, ..., X_n, let F_n be the distribution with mass proportional to F(X_{(i)}) − F(X_{(i−1)}) allocated to X_{(i)}, i = 1, ..., n, with X_{(0)} = −∞ [the proportionality factor has to be 1/F(X_{(n)})].

(b) F_n converges weakly to the population distribution F with probability one, without any restriction on the support of F.

(c) Lemma 2.1 and Theorem 2.1 can be deduced from (b) for F with compact support, and can be generalized by replacing the compact support condition with a requirement of certain finite moments.

We just mention that, because of the proportionality factor, ∫ x dF_n(x) is not quite the random Stieltjes sum with probability one. The main distinction between the Riemann-Stieltjes [R-S] integral and the Lebesgue-Stieltjes [L-S] integral is the partitioning: in the R-S integral the partitioning is on the x axis, while in the L-S integral it is on the y axis. The AMM method follows the R-S integral.

3. Properties of AMM Estimator for θ

Let U_1, ..., U_n be i.i.d. random variables with a common uniform (0, θ) distribution, and let U_{(1)}, ..., U_{(n)} be the order statistics corresponding to U_1, ..., U_n. We prove that the AMM estimator for the parameter θ of the U(0, θ) distribution is strongly consistent and asymptotically unbiased, and we show that the limiting distribution of the normalized AMM estimator is not standard normal.


Now, by using the first moment (k = 1), the AMM equation (1.2) becomes

$$\frac{\theta}{2} = \sum_{j=1}^{n} U_{(j)} \left[ \frac{U_{(j)}}{\theta} - \frac{U_{(j-1)}}{\theta} \right],$$

which can be solved for θ, giving the AMM estimator

$$\hat\theta_{AMM}(n) = \left[ 2\sum_{j=1}^{n} U_{(j)}\,(U_{(j)} - U_{(j-1)}) \right]^{1/2}.$$

In what follows we prove that θ̂_AMM(n) is strongly consistent for θ. We observe that {U_j/θ, j = 1, 2, ..., n} =_d {V_j, j = 1, 2, ..., n}, where the V_j are i.i.d. U(0, 1). Consequently {θ̂_AMM(n)/θ, n = 1, 2, 3, ...} =_d {∨_n, n = 1, 2, ...}, where

$$\vee_n = \left[ 2\sum_{j=1}^{n} V_{(j)}\,(V_{(j)} - V_{(j-1)}) \right]^{1/2}. \tag{3.1}$$

But, as we observed above, the summation term in (3.1) converges to 1/2, w.p.1, as n → ∞. Thus ∨_n → 1, w.p.1, which in turn implies that θ̂_AMM(n)/θ → 1, w.p.1, as n → ∞. Hence θ̂_AMM(n) is strongly consistent for θ. Since θ̂_AMM(n) is bounded in n, it follows that θ̂_AMM(n) is also asymptotically unbiased for θ.

It is a challenging task to derive the limiting distribution of

$$Z[\hat\theta_{AMM}(n)] = \frac{\hat\theta_{AMM}(n) - E(\hat\theta_{AMM}(n))}{\sqrt{\mathrm{var}(\hat\theta_{AMM}(n))}}.$$

Through simulation we show that the limiting distribution of Z[θ̂_AMM(n)] is far from normal. Indeed, based on 10,000 simulated values of θ̂_AMM(n), we calculate the Z[θ̂_AMM(n)] values. We repeat this procedure for n = 50, n = 100 and n = 500. The resulting histograms are given in Figure 1, which indicates that Z[θ̂_AMM(n)] does not follow a standard normal distribution. The Kolmogorov-Smirnov test also rejects the normality assumption. We observe from Figure 1 that the distribution of Z[θ̂_AMM(n)] is supported roughly on [−4, 2] and is skewed to the left. For n = 100, 63% of the Z[θ̂_AMM(n)] values are positive. Also Q1 = −0.3937, Q2 = 0.2882 and Q3 = 0.7028.

We leave it as an open problem to identify the limiting distribution of Z[θ̂_AMM(n)]. However, we can make inference about θ by using Monte Carlo simulation. Details are given in Section 5.


Figure 1. Histogram of Z[θ̂_AMM(n)] for θ = 4; left: n = 50, middle: n = 100, right: n = 500.

4. Mean Square Relative Error

In this section we first derive an expression for the variance of the URSS, and then show that n²E([θ̂²_AMM − θ²]/θ²)² → 4, while n²E([θ̂²_ML − θ²]/θ²)² → 8, as n → ∞. These terms are the mean square relative errors for θ² of the AMM and ML estimators, respectively. The corresponding mean absolute error and mean square error for the AMM estimator are hard to compute analytically, but they are computed numerically through simulation and depicted in Figure 2.


Figure 2. Mean square error for AMM, MLE and MME.


We provide a formula for the variance of X̃_U(n), and then derive the mean and the mean square error for θ̂²_AMM(n) and θ̂²_ML(n). In the following we denote the i-th order statistic of X_1, ..., X_n by X_{i:n} to highlight the sample size. For more on the calculus of order statistics we refer our readers to Arnold, Balakrishnan and Nagaraja (1992).

Theorem 4.1. The variance of X̃_U(n) is given by the following formula:

$$\mathrm{Var}(\tilde{X}_U(n)) = \frac{12}{(n+1)(n+2)}\,\mathrm{Var}(X) + o\!\left( \frac{1}{n^3} \right). \tag{4.1}$$

Proof. It is enough to derive E(X̃_U(n))². We have

$$E(\tilde{X}_U(n))^2 = \sum_{j=1}^{n} E\{U_{j:n}^2 [F(U_{j:n}) - F(U_{j-1:n})]^2\} + 2\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} E\{U_{i:n}[F(U_{i:n}) - F(U_{i-1:n})]\,U_{j:n}[F(U_{j:n}) - F(U_{j-1:n})]\},$$

and E{U²_{j:n}[F(U_{j:n}) − F(U_{j−1:n})]²} is equal to

$$c[n;j] \iint_{x<y} y^2 F^{j-2}(x)\,(F(y) - F(x))^2\,(1 - F(y))^{n-j}\,dF(x)\,dF(y), \tag{4.2}$$

where c[n;j] = n!/((j−2)!(n−j)!). But it is easy to see that E(U^0_{j−1:n+2} U^2_{j+2:n+2}) is given by

$$d[n;j] \iint_{x<y} y^2 F^{j-2}(x)\,(F(y) - F(x))^2\,(1 - F(y))^{n-j}\,dF(x)\,dF(y), \tag{4.3}$$

where d[n;j] = (n+2)!/(2!(j−2)!(n−j)!). Therefore it follows from (4.2) and (4.3) that

$$E\{U_{j:n}^2[F(U_{j:n}) - F(U_{j-1:n})]^2\} = \frac{2}{(n+1)(n+2)}\,E(U_{j+2:n+2}^2).$$

Similarly,

$$E\{U_{i:n}[F(U_{i:n}) - F(U_{i-1:n})]\,U_{j:n}[F(U_{j:n}) - F(U_{j-1:n})]\} = \frac{1}{(n+1)(n+2)}\,E(U_{i+1:n+2}\,U_{j+2:n+2}).$$


Thus

$$E(\tilde{X}_U(n))^2 = \frac{2}{(n+1)(n+2)} \sum_{j=1}^{n} E(U_{j+2:n+2}^2) + \frac{2}{(n+1)(n+2)} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} E(U_{i+1:n+2}\,U_{j+2:n+2}).$$

We note that

$$E(U_{j+2:n+2}^2) = \frac{(j+2)(j+3)}{(n+3)(n+4)}\,\theta^2, \qquad E(U_{i+1:n+2}\,U_{j+2:n+2}) = \frac{(i+1)(j+3)}{(n+3)(n+4)}\,\theta^2;$$

then, after some algebraic work, we have

$$E(\tilde{X}_U(n))^2 = \left( \frac{1}{4} - \frac{6}{(n+1)(n+2)(n+3)(n+4)} \right)\theta^2. \tag{4.4}$$

By using (2.4) and (4.4),

$$\mathrm{Var}(\tilde{X}_U(n)) = \left[ \frac{1}{(n+1)(n+2)} - \frac{6}{(n+1)(n+2)(n+3)(n+4)} - \frac{1}{((n+1)(n+2))^2} \right]\theta^2.$$

Consequently (4.1) follows. The proof of the theorem is complete.
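As a numerical sanity check (ours, not in the paper), the closed-form variance displayed in the proof can be compared with a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 1.0, 8, 400_000

a = (n + 1) * (n + 2)
b = (n + 1) * (n + 2) * (n + 3) * (n + 4)
var_exact = (1 / a - 6 / b - 1 / a**2) * theta**2   # closed form from the proof

u = np.sort(rng.uniform(0, theta, (reps, n)), axis=1)
xt = np.sum(u * np.diff(u, axis=1, prepend=0.0) / theta, axis=1)  # X~_U(n), k = 1
print(xt.var(), var_exact)
```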

Since E(X̃_U(n) − θ/2)² = \frac{12}{(n+1)(n+2)}\mathrm{Var}(X) + o(1/n³), it follows that, for the U(0, θ) distribution, X̃_U(n) possesses a much smaller mean square error (MSE) than X̄ for large n. Naturally, AMM estimators are expected to be better than MM estimators. Also

$$E\left[ \frac{\hat\theta^2_{AMM}(n) - \theta^2}{\theta^2} \right] = -\frac{2}{(n+1)(n+2)}, \qquad E\left[ \frac{\hat\theta^2_{AMM}(n) - \theta^2}{\theta^2} \right]^2 = \frac{4(n+6)}{(n+2)(n+3)(n+4)},$$

and the corresponding terms for the ML estimator are

$$E\left[ \frac{\hat\theta^2_{ML}(n) - \theta^2}{\theta^2} \right] = -\frac{2}{n+2}, \qquad E\left[ \frac{\hat\theta^2_{ML}(n) - \theta^2}{\theta^2} \right]^2 = \frac{8}{(n+2)(n+4)},$$

giving that n²E[(θ̂²_AMM(n) − θ²)/θ²]² → 4, while n²E[(θ̂²_ML(n) − θ²)/θ²]² → 8, as n → ∞.
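These exact expressions can be evaluated directly to watch the two limits emerge (an illustrative computation of ours; the helper names are our own):

```python
# Exact mean square relative errors for theta^2, taken from the displayed formulas
def msre_amm(n):
    return 4 * (n + 6) / ((n + 2) * (n + 3) * (n + 4))

def msre_ml(n):
    return 8 / ((n + 2) * (n + 4))

for n in (10, 100, 1000, 100_000):
    # n^2 * MSRE tends to 4 for AMM and 8 for ML, so the ratio tends to 1/2
    print(n, n**2 * msre_amm(n), n**2 * msre_ml(n))
```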


Concerning the mean absolute error (MAE) and mean square error (MSE), we recall that the AMM, ML and MM estimators for θ are given by

$$\hat\theta_{AMM}(n) = \left[ 2\sum_{j=1}^{n} U_{(j)}\,(U_{(j)} - U_{(j-1)}) \right]^{1/2}, \qquad \hat\theta_{ML}(n) = \max_j\{U_{(j)}\}, \qquad \hat\theta_{MM}(n) = 2\bar{U},$$

respectively, where \bar{U} = (1/n)∑_{j=1}^{n} U_j.

By using simulated samples and 10,000 replications [θ = 4, n = 10, 30, ..., 90], the MAE for θ̂_AMM(n), θ̂_ML(n) and θ̂_MM(n) is computed in Table 1. Also, for [θ = 2, n = 5, 10, 15, ..., 90], the corresponding MSE values are computed and depicted in Figure 2, using 10,000 replications. Interestingly, according to Table 1 and Figure 2, θ̂_AMM(n) shows better performance than ML and MM for every n.

Table 1: MAE values for different estimators and different sample sizes.

n        10        30        50        70        90
AMM      0.20271   0.07285   0.04395   0.03118   0.02412
MLE      0.27184   0.09765   0.05876   0.04179   0.03273
MM       0.44590   0.25264   0.19521   0.16559   0.14801
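A sketch reproducing the flavor of Table 1 (our code; exact MAE values depend on the random seed and will differ slightly from the table):

```python
import numpy as np

def mae_table(n, theta=4.0, reps=20_000, rng=None):
    """Monte Carlo MAE of the AMM, ML and MM estimators for one sample size."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(0, theta, (reps, n))
    s = np.sort(u, axis=1)
    amm = np.sqrt(2.0 * np.sum(s * np.diff(s, axis=1, prepend=0.0), axis=1))
    mle = s[:, -1]                        # sample maximum
    mm = 2.0 * u.mean(axis=1)             # twice the sample mean
    return tuple(float(np.mean(np.abs(e - theta))) for e in (amm, mle, mm))

rng = np.random.default_rng(4)
for n in (10, 30, 90):
    print(n, mae_table(n, rng=rng))       # (MAE_AMM, MAE_MLE, MAE_MM)
```

The ordering MAE_AMM < MAE_MLE < MAE_MM reported in Table 1 shows up for every sample size tried.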

5. Inferences for θ

Recall that U1, . . . ,Un is a random sample from uniform distribution on(0, θ), and U(1), . . . ,U(n) is the order statistic of U1, . . . ,Un. In this sectionwe construct (1-α)% confidence interval for θ and do the classical testinghypotheses on θ , using the AMM estimator for θ . Then resulting confidenceinterval is also compared with the the shortest confidence interval, the AMMtest performance is compared with the uniformly most powerful test (UMP-test) for θ . Recall that the 100(1 − α)% shortest confidence interval for θ is(max{U( j)},max{U( j)}/ n

√α), and the critical regions of UMP-test for classical

hypotheses on θ are derived using the left and right limits of this shortestconfidence interval, Casella and Berger (1990).

The pivotal variable for θ based on θ̂_AMM(n) is given by

$$T = \frac{\left[ \sum_{j=1}^{n} u_{(j)}\,(u_{(j)} - u_{(j-1)}) \right]^{1/2}}{\left[ \sum_{j=1}^{n} V_{(j)}\,(V_{(j)} - V_{(j-1)}) \right]^{1/2}},$$

where u_{(j)} is the observed value of the random variable U_{(j)} and V_{(j)} =_d U_{(j)}/θ, j = 1, ..., n. T is a pivotal variable for θ and can be used to construct a CI and to test hypotheses about θ. The 100(1−α)% CI for θ is (T_{(α/2)}, T_{(1−α/2)}), where T_{(α)} is the α-th quantile of T. We evaluate the coverage probabilities of this CI by simulation, and show that they are close to the nominal level.

The confidence intervals for θ and the p-values for testing, based on T, can be computed by Monte Carlo simulation. The following algorithm, in four steps, leads to the simulated values of the variable T: generate U_j ∼ U(0, θ); generate V_j ∼ U(0, 1), j = 1, ..., n; calculate T; repeat these steps m times.
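The four-step algorithm above can be sketched as follows (our illustration; `amm_ci` is a hypothetical helper, and the factor 2 inside the square roots cancels in the ratio, so T matches the pivot defined earlier):

```python
import numpy as np

def amm_ci(u_obs, alpha=0.05, m=5000, rng=None):
    """Monte Carlo confidence interval for theta from the simulated pivot T."""
    rng = rng or np.random.default_rng()
    s = np.sort(np.asarray(u_obs))
    num = np.sqrt(2.0 * np.sum(s * np.diff(s, prepend=0.0)))           # theta_AMM(n)
    v = np.sort(rng.uniform(0.0, 1.0, (m, len(s))), axis=1)
    den = np.sqrt(2.0 * np.sum(v * np.diff(v, axis=1, prepend=0.0), axis=1))
    t = num / den                                                      # m simulated pivot values
    return float(np.quantile(t, alpha / 2)), float(np.quantile(t, 1 - alpha / 2))

rng = np.random.default_rng(5)
sample = rng.uniform(0, 2.0, 50)   # data with true theta = 2
print(amm_ci(sample, rng=rng))
```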

Let T_{(p)} denote the 100p-th percentile of the values of T. Then (T_{(α/2)}, T_{(1−α/2)}) is a Monte Carlo estimate of the 100(1−α)% confidence interval for θ.

A simulation study is performed to compare the coverage probability and average length using the AMM estimator and the MLE (shortest CI). For this purpose we generate samples of different sizes n from U(0, θ). Using 10,000 simulation iterations, the coverage probability (C) and average length (L) are estimated. We also use the above algorithm, with m = 5000, to obtain the confidence intervals. The numerical results for θ = 2, α = 0.05 are given in Table 2. We observe that:

(i) for the AMM method, as well as for the ML method, the coverage probability is close to the nominal level, 95%, for all sample sizes;

(ii) the average length of the confidence intervals constructed by the AMM method is also as good as that for the ML method.

Table 2: Simulated coverage probability (C) and average length (L) of 95% two-sided confidence intervals for θ = 2.

           AMM                ML
n          C        L         C        L
5          0.9510   1.4357    0.9515   1.3697
10         0.9511   0.6821    0.9486   0.6343
20         0.9545   0.3228    0.9546   0.3079
30         0.9484   0.2297    0.9492   0.2132
50         0.9522   0.1340    0.9531   0.1310
100        0.9523   0.0648    0.9540   0.0630
200        0.9513   0.0312    0.9531   0.0300
500        0.9499   0.0123    0.9502   0.0120
1000       0.9507   0.0061    0.9511   0.0060


In testing the following classical hypotheses on θ:

H0: θ = θ0 vs H1: θ ≠ θ0,
H0: θ ≤ θ0 vs H1: θ > θ0,
H0: θ ≥ θ0 vs H1: θ < θ0,

the p-values of the tests based on the T values are correspondingly given by p = 2 min{P(T ≤ θ0), P(T ≥ θ0)}, p = P(T ≤ θ0) and p = P(T ≥ θ0), where P(T ≤ θ0) and P(T ≥ θ0) are estimated using the simulated T values.

A simulation study is performed to compare the size and power of the test based on the AMM estimator with those of the ML estimator (UMP test). For this purpose we generated samples of different sizes n from U(0, θ). Using 10,000 simulation iterations, the power and the size of the tests are estimated. We also use the algorithm given above, with m = 10,000, to obtain the test p-values. The numerical results for θ = 2, θ0 = 1.5, 1.75, 1.9, 2.0 are collected in Tables 3 and 4. We observe that the size of the test based on the AMM estimator is always less than the nominal level. The powers of the tests based on the AMM and ML estimators generally increase as the sample size n increases. For large n (n > 25), the powers of the two methods are close together.

Table 3: Simulated actual size of the nominal 0.05-level two-sided test for θ = 2.

Test \ n   5        10       20       30       50       100      500
AMM        0.0481   0.0495   0.0494   0.0491   0.0486   0.0499   0.05
ML         0.0460   0.0462   0.0494   0.0490   0.0484   0.0496   0.05

Table 4: Simulated power of the nominal 0.05-level two-sided test for θ = 2.

θ0     Test \ n   5        10       20       30       50       75       100
1.5    AMM        0.7404   0.9347   0.9980   0.9995   –        –        –
       ML         0.7774   0.9475   0.9979   0.9992   –        –        –
1.75   AMM        0.4254   0.6901   0.9197   0.9804   0.9984   –        –
       ML         0.5092   0.7395   0.9230   0.9813   0.9986   –        –
1.9    AMM        0.1483   0.3257   0.5891   0.7712   0.9234   0.9774   0.9920
       ML         0.2637   0.4011   0.6295   0.7863   0.9296   0.9776   0.9919

6. Conclusion

Certain statistical properties and the effectiveness of the AMM estimator for the parameter θ, whenever the population distribution is uniform (0, θ), are presented in this article. There are still certain important issues that require further investigation. Our attempts to derive the asymptotic distribution of the AMM estimator have failed. We were given certain hints by referees, but we still need to learn more. The main barrier is the dependence between the terms in the formulation of the AMM estimator given in (3.1). Thus the classical techniques, such as the continuity theorem, cannot be directly applied. Through simulation we have examined normal, skew-normal, exponential and beta distributions; none passed the Kolmogorov-Smirnov test. Intuitively, the asymptotic distribution is a univariate distribution induced from the Dirichlet distribution, due to the close connection between the Dirichlet distribution and the joint distribution of the order statistics of a sample from a uniform distribution.

Let us also make some remarks concerning possible overlaps of the AMM estimation procedure with the Hosking L-moment procedure, Hosking (1990, 2006). Sample L-moments are linear combinations of the sample order statistics with non-random coefficients. The k-th Hosking L-moment coefficients, for given k = 1, 2, ..., are fixed and do not depend on the population distribution. Thus Hosking sample L-moments are very different from the k-th random Stieltjes sums. We recall that the k-th random Stieltjes sums are linear combinations of the k-th powers of the sample order statistics with random coefficients, determined by the population distribution and the sample order statistics. The Hosking L-moment estimation procedure obtains the parameter estimates by equating the first J sample L-moments to the corresponding population L-moments. In the AMM procedure, in contrast to the L-moment procedure, there is no need for the expectations of the order statistics. Nevertheless, in the method of moments, the Hosking L-moment procedure and the adjusted method of moments alike, parameter estimates are obtained by equating certain sample quantities to the corresponding population quantities.

Acknowledgments

The authors express their sincere appreciation to the referees for valuable comments.

REFERENCES

Arnold, B. C., Balakrishnan, N. and Nagaraja, H. N. (1992) A First Course in Order Statistics, New York, John Wiley & Sons.

Billingsley, P. (1995) Probability and Measure, 3rd ed. New York, Wiley.

Casella, G. and Berger, R. L. (1990) Statistical Inference, Wadsworth, Pacific Grove, CA.

Cramer, H. (1974) Mathematical Methods of Statistics, Princeton University Press.

David, H. A. and Nagaraja, H. N. (2003) Order Statistics, 3rd ed. New Jersey, Wiley.

Devroye, L. (1981) Laws of the iterated logarithm for order statistics of uniform spacings, Ann. Probab., 9, 860-867.

Govindarajulu, Z. (1968) Certain general properties of unbiased estimates of location and scale parameters based on ordered observations, SIAM J. Appl. Math., 16, 533-551.

Hosking, J. R. M. and Wallis, J. R. (1987) Parameter and quantile estimation for the generalized Pareto distribution, Technometrics, 29, 339-349.

Hosking, J. R. M. (1990) L-moments: analysis and estimation of distributions using linear combinations of order statistics, J. Roy. Statist. Soc., Ser. B, 52, 105-124.

Hosking, J. R. M. (2006) On the characterization of distributions by their L-moments, J. Statist. Plann. Infer., 136, 193-198.

Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995) Continuous Univariate Distributions, Vol. 2, 2nd ed. New York, Wiley.

Rudin, W. (1979) Principles of Mathematical Analysis, McGraw-Hill.

Soltani, A. R. and Homei, H. (2009a) A generalization for two-sided power distributions and adjusted method of moments, Statistics, 43, 611-620.

Soltani, A. R. and Homei, H. (2009b) Weighted averages with random proportions that are jointly uniformly distributed over the unit simplex, Statist. Probab. Lett., 79, 1215-1218.

AHMAD REZA SOLTANI
Department of Statistics and Operations Research, Faculty of Science, Kuwait University, P.O. Box 5969, Safat 13060 (Kuwait)
and
Department of Statistics, Faculty of Science, Shiraz University, Shiraz 71454 (Iran)
[email protected]

KAMEL ABDOLLAHNEZHAD
Department of Statistics, Faculty of Science, Shiraz University, Shiraz 71454 (Iran)
and
Department of Statistics, College of Science, Golestan University, Gorgan (Iran)
[email protected]