Estimation of location extremes within general families of scale mixtures


Journal of Statistical Planning and Inference 100 (2002) 197–208

www.elsevier.com/locate/jspi


Mark Carpenter
Medical Statistics Section, University of Alabama at Birmingham, Birmingham, AL 35294, USA

Abstract

In this paper, we study the estimation of the minimum and maximum location parameters, respectively representing the minimum guaranteed lifetime of series and parallel systems of components, within a general class of scale mixtures. The conditional or underlying distribution has only the primary restriction of being a location-scale family with positive support. The mixing distribution is also quite general in that we only assume that it has positive support and finite second moment. For demonstrative purposes several special cases are highlighted, such as the gamma, inverse-Gaussian, and discrete mixture. Various estimators, including bootstrap bias corrected estimators, are compared with respect to both mean-squared-error and Pitman's measure of closeness. © 2002 Published by Elsevier Science B.V.

MSC: primary 62F10; secondary 62N05

Keywords: Common environment; Pitman's measure of closeness; Bootstrap bias corrected estimator; Mean-squared-error; Dependency through mixture

1. Introduction

Suppose that given a scale parameter (hazard rate), the lifetime of a component is distributed as an exponential random variable. If the scale parameter is random, then the actual lifetime distribution is not exponential but is a scale mixture of exponential distributions. Because of the utility of such models, their properties are considered extensively in the literature. McNolty (1964), Harris and Singpurwalla (1968) and McNolty et al. (1980) consider the gamma-exponential model, and Bhattacharya and Kumar (1986) and Whitmore and Lee (1991) derive properties of the inverse-Gaussian exponential mixture. Lindley and Singpurwalla (1986), Nayak (1987) and Whitmore and Lee (1991) consider properties of multivariate exponential mixtures. However, these papers do not sufficiently address the location estimation problem within exponential (or other) mixtures. In this paper, we introduce a location parameter into a general

E-mail address: [email protected] (M. Carpenter).



class of positive support lifetime distributions forming a general location-scale family. Since these distributions have positive support, the location parameter represents the minimum guaranteed lifetime.

The main problem addressed in this paper is the estimation of the minimum and maximum of two location parameters. These location extremes represent the minimum guaranteed lifetime of series and parallel systems of components, respectively. Carpenter and Hebert (1994, 1997) provide similar results for the gamma-exponential and the inverse-Gaussian exponential mixtures. Carpenter and Hebert (1997) extend their results to include a general class of exponential mixtures. The current manuscript extends these results further to general mixtures of general positive support location-scale families and, in addition, provides an estimation technique for when the parameters of the mixing distribution are unknown.

2. General mixture models

Let Z have the standard p.d.f. of the form

f(z) = g(z) I_{(0, ∞)}(z),   (2.1)

where g(·) is a function defined on (0, ∞) such that f(z) is a well-defined density function. Suppose we have two components with conditionally independent lifetimes X_1 and X_2 possessing a common scale Θ = θ, but having different minimum guaranteed lifetimes or locations μ_1 and μ_2. Suppose further that these lifetimes follow distributions with p.d.f.'s

f_{X_i}(x | θ) = θ g(θ(x − μ_i)) I_{(μ_i, ∞)}(x),   i = 1, 2.   (2.2)

The conditional survival function is given by S_X(x | θ) = Ḡ(θ(x − μ)), where Ḡ(u) = ∫_u^∞ g(t) dt. We see that the family of distributions given in (2.2) defines a general location-scale family of lifetime distributions with standard p.d.f. given in (2.1). Assuming Θ is a positive random variable with distribution function M(θ), the mixed or unconditional joint distribution of X_1 and X_2 is given by

f(x_1, x_2) = ∫_0^∞ θ^2 g(θ(x_1 − μ_1)) g(θ(x_2 − μ_2)) dM(θ),   x_i > μ_i,   i = 1, 2.   (2.3)

The density in (2.3) is called a scale mixture of the density given in (2.2). The sampling distribution in (2.2) and M(·) are referred to as the conditional and mixing distributions, respectively. The integral in (2.3) is a Stieltjes integral, thus allowing for both continuous and discrete positive support mixing distributions.

From (2.3), we see that, while conditionally independent, X_1 and X_2 are actually dependent through mixture. As a result, one might be tempted to conclude that (2.3) defines a class of multivariate lifetime distributions. However, as we shall show further in this section, this class is restricted to those of positive correlation.
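As an illustration of the dependence induced through the common random scale (a hypothetical sketch, not part of the original paper), the following Python code draws pairs (X_1, X_2) from the mixture (2.3) with a two-parameter exponential conditional density, g(z) = e^{−z}, and Gamma(α, β) mixing; the sample correlation of the simulated pairs is positive even though the lifetimes are conditionally independent.

import numpy as np

def draw_mixture_pairs(mu1, mu2, alpha, beta, size, seed=None):
    """Draw (X1, X2) from the scale mixture (2.3) with exponential conditional
    density g(z) = exp(-z) and Gamma(alpha, beta) mixing: given theta,
    X_i = mu_i + Z_i / theta with Z_1, Z_2 independent standard exponentials."""
    rng = np.random.default_rng(seed)
    theta = rng.gamma(shape=alpha, scale=1.0 / beta, size=size)  # common random scale
    z1 = rng.exponential(size=size)
    z2 = rng.exponential(size=size)
    return mu1 + z1 / theta, mu2 + z2 / theta

x1, x2 = draw_mixture_pairs(mu1=1.0, mu2=2.0, alpha=3.0, beta=1.0, size=100_000, seed=0)
print(np.corrcoef(x1, x2)[0, 1])  # positive (about 1/3 for these parameter choices)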


The objective in this paper is to develop natural estimators of the location extremes λ_1 = λ_1(μ_1, μ_2) and λ_2 = λ_2(μ_1, μ_2), where λ_1(a, b) = min{a, b} and λ_2(a, b) = max{a, b}, using estimators of the original location parameters μ_1 and μ_2. Whenever possible, competing estimators are compared with respect to mean-squared-error (MSE) and Pitman's measure of closeness. From Keating et al. (1993), we give the following definition:

Definition 2.1. Let θ̂_1 and θ̂_2 be two real-valued estimators of the real parameter θ. Then Pitman's measure of closeness of these two competing estimators is denoted by P(θ̂_2, θ̂_1 | θ) and defined by

P(θ̂_2, θ̂_1 | θ) = Pr(|θ̂_2 − θ| < |θ̂_1 − θ|).   (2.4)

θ̂_2 is said to be Pitman closer to θ than θ̂_1 if P(θ̂_2, θ̂_1) > 1 − P(θ̂_2, θ̂_1) over the whole parameter space.

Before proceeding to the proposed problem, we present interesting properties of the mixture given in (2.3). Since, given θ, (2.2) defines a location-scale family, any random variable X_i from this family can be represented as X_i = μ_i + Θ^{−1} Z_i, where Z_i has density (2.1). From this fact, and since E(X_i) = E_Θ E(X_i | θ), it is fairly easy to show that for any positive integers p and q,

E(X_1^p X_2^q) = E_Θ E(X_1^p X_2^q | θ)
             = E_Θ [E((μ_1 + θ^{−1} Z_1)^p | θ) E((μ_2 + θ^{−1} Z_2)^q | θ)]
             = Σ_{i=0}^{p} Σ_{j=0}^{q} C(p, i) C(q, j) μ_1^{p−i} μ_2^{q−j} E(Z^i) E(Z^j) E(Θ^{−(i+j)}),

where C(k, j) is the number of combinations of k items taken j at a time. Since Var(X_i) = E_Θ(Var(X_i | θ)) + Var(E(X_i | θ)),

Corr(X_1, X_2) = (E(Z))^2 Var(Θ^{−1}) / [E(Z^2) E(Θ^{−2}) − (E(Z))^2 (E(Θ^{−1}))^2].

The right-hand side of the above equation is positive. Moreover, since E(Z^2) > (E(Z))^2, we have

0 < Corr(X_1, X_2) < (E(Z))^2 / E(Z^2) < 1.

Note that these bounds hold regardless of the mixing distribution or even the conditional distribution given in (2.2). Lee and Gross (1991) achieve no tighter bounds for the generalized gamma distribution, a special case of our proposed model. Carpenter and Hebert (1997) show that in the exponential mixture case the correlation is bounded by 1/2. Whitmore and Lee (1991) derive an upper bound of 2/5 for the inverse Gaussian mixture of exponentials.
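The upper bound (E(Z))^2/E(Z^2) depends only on the standardized conditional density g. As a small numerical illustration (a sketch, not from the original paper), for a Weibull conditional with shape b one has E(Z^k) = Γ(1 + k/b), so the bound can be evaluated directly; at b = 1 it reduces to the exponential-mixture bound of 1/2 noted above.

from math import gamma

def weibull_corr_bound(b):
    """Upper bound (E Z)^2 / E(Z^2) for a standard Weibull conditional with
    shape b, using E(Z^k) = Gamma(1 + k/b)."""
    return gamma(1.0 + 1.0 / b) ** 2 / gamma(1.0 + 2.0 / b)

for b in (0.8, 1.0, 2.0):
    print(b, round(weibull_corr_bound(b), 4))
# b = 1.0 (the exponential case) gives 0.5, matching the bound cited above.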


We begin, in Section 3, by deriving estimators of the individual location parameters. In Section 4, these estimators are used as inputs for the natural estimators of the location extremes. For the case when the mixing parameters are unknown, bootstrap bias corrected estimators are proposed and studied in Section 5.

3. Individual component lifetimes

Suppose X_{i1}, X_{i2}, ..., X_{in} are conditionally independent given the common factor Θ = θ, with p.d.f. f_{X_i}(x | θ) given in (2.2). Then the unconditional joint density of this sample becomes

∫_0^∞ θ^n Π_{j=1}^{n} g(θ(x_{ij} − μ_i)) dM(θ),

for x_{i(1)} > μ_i, where X_{i(1)} = min{X_{i1}, ..., X_{in}}, i = 1, 2.

Given Θ = θ, the distribution of X_{i(1)} is easily shown to have the form

f_{X_{i(1)}}(x | θ) = n θ g(θ(x − μ_i)) Ḡ^{n−1}(θ(x − μ_i)),   x > μ_i,   (3.1)

where Ḡ(t) = ∫_t^∞ g(u) du. We see that in general cases of (3.1), such as the generalized gamma, the distribution of X_{i(1)} is not of the same family as the sampling distribution. However, in the Weibull case, for example, the distribution of X_{i(1)} is again Weibull with only a change in scale. It is true, though, that (3.1) defines a location-scale family of distributions. The unconditional density is given by f_{X_{i(1)}}(x) = ∫_0^∞ n θ g(θ(x − μ_i)) Ḡ^{n−1}(θ(x − μ_i)) dM(θ).

From (3.1), we have for any positive integer r

E(X_{i(1)}^r | θ) = E_Z[(μ_i + θ^{−1} Z)^r] = Σ_{j=0}^{r} [r!/(j!(r − j)!)] μ_i^{j} θ^{−(r−j)} E(Z^{r−j}).   (3.2)

Therefore, we see that the bias of μ̂_i = X_{i(1)} is

bias(μ̂_i) = ∫_0^∞ θ^{−1} E(Z) dM(θ) = E(Z) E(Θ^{−1}) = γ(ω),   i = 1, 2,   (3.3)

where ω represents the vector of all unknown parameters from both the sampling and mixing distributions and E(Z) = ∫_0^∞ n u g(u) Ḡ^{n−1}(u) du.

If the conditional distribution is the two-parameter exponential distribution, then X_{i(1)} is the maximum likelihood estimator (MLE) of μ_i, i = 1, 2. In the more general cases, the maximum likelihood estimator is difficult to find, and often either the MLE does not exist or there is no closed form expression (see Lawless, 1982). However, Result 3.1, given below, implies that X_{i(1)} is a strongly consistent estimator of μ_i, i = 1, 2, regardless of the conditional and mixing distributions.
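For concreteness (an illustrative sketch, not from the paper): with a Weibull conditional of shape b the standardized minimum of n observations has E(Z) = n^{−1/b} Γ(1 + 1/b), and with Gamma(α, β) mixing E(Θ^{−1}) = β/(α − 1) for α > 1, so γ(ω) is available in closed form and is seen to shrink as n grows, in line with Result 3.1(ii) below.

from math import gamma

def gamma_omega_weibull_gamma(n, b, alpha, beta):
    """gamma(omega) = E(Z) E(Theta^{-1}) for a Weibull(b) conditional, where the
    standardized minimum has E(Z) = n**(-1/b) * Gamma(1 + 1/b), and Gamma(alpha,
    beta) mixing, for which E(Theta^{-1}) = beta / (alpha - 1) when alpha > 1."""
    return n ** (-1.0 / b) * gamma(1.0 + 1.0 / b) * beta / (alpha - 1.0)

for n in (10, 50, 250):
    print(n, gamma_omega_weibull_gamma(n, b=1.0, alpha=3.0, beta=1.0))  # decreases in n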


Result 3.1. Suppose X_{i(1)} has the conditional density function (3.1) and let μ̂_i = X_{i(1)}, i = 1, 2. Then

(i) μ̂_i converges almost surely to μ_i as n → ∞, i.e., P(lim_{n→∞} μ̂_i = μ_i) = 1;
(ii) μ̂_i is asymptotically unbiased, i.e., γ(ω) → 0 as n → ∞;
(iii) Var(μ̂_i) → 0 as n → ∞.

Proof. (i) This result and its proof can be found in Sen and Singer (1993, Theorem 4.4.1, p. 173). (ii) From the bias expression in (3.3) we see that μ̂_i is asymptotically unbiased if and only if E(Z) → 0 as n → ∞, where E(Z) = ∫_0^∞ n u g(u) Ḡ^{n−1}(u) du. By integration by parts, we have

E(Z) = −u Ḡ^{n}(u) |_0^∞ + ∫_0^∞ Ḡ^{n}(z) dz = ∫_0^∞ Ḡ^{n}(z) dz,

which implies that

lim_{n→∞} E(Z) = lim_{n→∞} ∫_0^∞ Ḡ^{n}(u) du.

Since |Ḡ^{n}(z)| ≤ 1 and lim_{n→∞} Ḡ^{n}(z) = 0 for all z ∈ (0, ∞), then by the Bounded Convergence Theorem (see Royden, 1988) the limit on the right-hand side of the above equation is zero. Therefore, lim_{n→∞} E(Z) = 0. (iii) From (3.2), it is easy to show that

Var(X_{i(1)}) = E(Θ^{−2}) E(Z^2) − (E(Θ^{−1}))^2 (E(Z))^2.

Since we showed that lim_{n→∞} E(Z) = 0, to show that lim_{n→∞} Var(X_{i(1)}) = 0 we need only show that lim_{n→∞} E(Z^2) = 0, where E(Z^2) = ∫_0^∞ n u^2 g(u) Ḡ^{n−1}(u) du. This result follows similarly to the proof of part (ii).

Notice that γ(ω), given in (3.3), is functionally independent of μ_i but is not necessarily functionally independent of the parameters coming from either the sampling or mixing distributions. Following Carpenter and Hebert (1997), if these parameters are known, then a competing estimator of μ_i is the unbiased estimator

μ̂_i^* = X_{i(1)} − γ(ω),   i = 1, 2.   (3.4)

Result 3.2 below demonstrates that μ̂_i is an inadmissible estimator of μ_i, i = 1, 2, by showing that the unbiased estimator in (3.4) dominates μ̂_i = X_{i(1)} in terms of mean-squared-error. This relationship is true for all positive support mixing distributions, including point mass distributions.

Result 3.2. MSE(μ̂_i^*) ≤ MSE(μ̂_i), i = 1, 2, for all μ_i and all positive support mixing distributions M(·).

Proof. From (3.3), MSE(μ̂_i^*) = E(X_{i(1)} − γ(ω) − μ_i)^2 = MSE(μ̂_i) − γ^2(ω) ≤ MSE(μ̂_i).
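As a quick numerical check of Result 3.2 (a sketch with hypothetical parameter choices, not from the paper): in the exponential-gamma case the bias is available in closed form, γ(ω) = E(Z) E(Θ^{−1}) = (1/n)(β/(α − 1)), so μ̂_i = X_{i(1)} and μ̂_i^* = X_{i(1)} − γ(ω) can be compared directly by simulation.

import numpy as np

def mse_comparison(n=20, alpha=3.0, beta=1.0, mu=1.0, reps=200_000, seed=1):
    """Empirical MSEs of mu_hat = X_(1) and mu_hat_star = X_(1) - gamma(omega)
    for an exponential conditional with Gamma(alpha, beta) mixing."""
    rng = np.random.default_rng(seed)
    gamma_omega = (1.0 / n) * beta / (alpha - 1.0)        # E(Z) E(Theta^{-1})
    theta = rng.gamma(alpha, 1.0 / beta, size=reps)       # one common scale per sample
    x_min = mu + rng.exponential(size=(reps, n)).min(axis=1) / theta
    return np.mean((x_min - mu) ** 2), np.mean((x_min - gamma_omega - mu) ** 2)

print(mse_comparison())  # the bias corrected estimator shows the smaller MSE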

Clearly, from Result 3.2, if MSE is the criterion by which the worth of an estimator is to be judged, one should choose μ̂_i^* over μ̂_i. However, this is not necessarily true


if Pitman's measure of closeness is used. Following Definition 2.1, we have

P(μ̂_i^*, μ̂_i) = Pr(|X_{i(1)} − γ(ω) − μ_i| < |X_{i(1)} − μ_i|)
             = Pr(X_{i(1)} > μ_i + γ(ω)/2)
             = ∫_0^∞ Ḡ^{n}(θ γ(ω)/2) dM(θ).

Notice that the right-hand side of the above equation is functionally independent of μ_i, i = 1, 2. To show that one estimator or the other is Pitman closer to μ_i, it must be demonstrated that this function is bounded below or above by 0.5. However, it can be shown that, even in simple cases of (2.2), this relationship depends on the other parameters of the conditional distribution. For example, in the case of a Weibull(b, θ, μ_i) conditional, i.e., f(x | θ) = b θ^b (x − μ_i)^{b−1} exp{−[θ(x − μ_i)]^b}, and P(Θ = θ_0) = 1,

P(μ̂_i^*, μ̂_i) = exp{−[Γ(1 + 1/b)/2]^b} ≥ 1/2,

where Γ(k) = ∫_0^∞ u^{k−1} e^{−u} du, for all b such that Γ(1 + 1/b)/2 ≤ (−ln(0.5))^{1/b}. Otherwise it is less than 1/2. Note that b ≥ 1 satisfies the condition for μ̂_i^* to be Pitman closer to μ_i than μ̂_i, i = 1, 2.
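The Weibull example above is easy to evaluate numerically; the short check below (an illustrative sketch, not from the paper) computes P(μ̂_i^*, μ̂_i) = exp{−[Γ(1 + 1/b)/2]^b} over a grid of shape parameters and flags where it exceeds 1/2.

from math import exp, gamma

def pitman_weibull(b):
    """P(mu_hat_star, mu_hat) = exp(-(Gamma(1 + 1/b)/2)**b) for a Weibull(b)
    conditional with a point-mass mixing distribution."""
    return exp(-(gamma(1.0 + 1.0 / b) / 2.0) ** b)

for b in (0.3, 0.5, 0.8, 1.0, 2.0, 5.0):
    p = pitman_weibull(b)
    print(f"b = {b}: P = {p:.4f}, Pitman closer: {p > 0.5}")
# Consistent with the stated condition Gamma(1 + 1/b)/2 <= (-ln(0.5))**(1/b).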

4. Guaranteed lifetimes of systems

In this section, we develop natural estimators of λ_1(μ_1, μ_2) and λ_2(μ_1, μ_2), the minimum guaranteed lifetimes of series and parallel systems of components, respectively. Suppose that μ̂_i is a "reasonable" estimator of μ_i, such as those given in Section 3. The natural estimators of λ_1 = λ_1(μ_1, μ_2) and λ_2 = λ_2(μ_1, μ_2) are then defined as λ̂_1 = λ_1(μ̂_1, μ̂_2) and λ̂_2 = λ_2(μ̂_1, μ̂_2), where λ_1(a, b) = min{a, b} and λ_2(a, b) = max{a, b}.

The first estimator of λ_i that we consider is

λ̂_i = λ_i(μ̂_1, μ̂_2) = λ_i(X_{1(1)}, X_{2(1)}) = Y_i.   (4.1)

The second estimator is defined as

λ̂_i^* = λ_i(X_{1(1)} − γ(ω), X_{2(1)} − γ(ω)) = λ_i(μ̂_1^*, μ̂_2^*) = Y_i − γ(ω),   (4.2)

where γ(ω) is defined in (3.3). Recall that μ̂_1 and μ̂_2 are strongly consistent estimators of μ_1 and μ_2. Therefore, by Chen and Dudewicz (1973), λ̂_1 and λ̂_2 are strongly consistent and asymptotically unbiased estimators of λ_1 and λ_2. Similarly, λ̂_1^* and λ̂_2^* are also strongly consistent and asymptotically unbiased estimators of λ_1 and λ_2. However, to establish the relative performance of these estimators, we must derive the joint density of (Y_1, Y_2). To do so, we use the following lemma.


Lemma 4.1. Let X = (X_1, ..., X_k) be a continuous random vector with joint density function f(x_1, ..., x_k) and let Y = (Y_1, ..., Y_k) be the vector of order statistics. Then

g_Y(y_1, ..., y_k) = Σ_{π ∈ Π} f(y_{π(1)}, ..., y_{π(k)})   for y_1 ≤ ··· ≤ y_k,

where Π is the set of all permutations of the first k positive integers.

Proof. The proof can be found in most graduate mathematical statistics books, such as Dudewicz and Mishra (1988).

Theorem 4.1. Suppose Y_i is defined by (4.1). Then the joint p.d.f. of (Y_1, Y_2) is

f(y_1, y_2) = ∫_0^∞ θ^2 h(θ(y_1 − λ_1)) h(θ(y_2 − λ_2)) dM(θ),   λ_1 ≤ y_1 ≤ λ_2 ≤ y_2 < ∞,

f(y_1, y_2) = Σ_{(i,j) ∈ {(1,2),(2,1)}} ∫_0^∞ θ^2 h(θ(y_1 − λ_i)) h(θ(y_2 − λ_j)) dM(θ),   λ_1 ≤ λ_2 ≤ y_1 ≤ y_2,

where λ_1 = λ_1(μ_1, μ_2), λ_2 = λ_2(μ_1, μ_2), and h(u) = n g(u) Ḡ^{n−1}(u).

Proof. Assuming Θ = θ is given, the result follows directly from Lemma 4.1.

Carpenter and Hebert (1994) derive a similar result for the gamma-exponential mixture, as given below:

f(y_1, y_2) = n^2 α(α + 1) β^α / [n(y_1 + y_2 − λ_1 − λ_2 + β/n)]^{α+2},   λ_1 ≤ y_1 ≤ λ_2 ≤ y_2,

f(y_1, y_2) = 2 n^2 α(α + 1) β^α / [n(y_1 + y_2 − λ_1 − λ_2 + β/n)]^{α+2},   λ_1 ≤ λ_2 ≤ y_1 ≤ y_2.

From Theorem 4.1 we have that the marginal density of Y_1 is given by

f(y_1) = ∫_0^∞ θ h(θ(y_1 − λ_1)) dM(θ),   λ_1 ≤ y_1 ≤ λ_2,

f(y_1) = Σ_{(i,j) ∈ {(1,2),(2,1)}} ∫_0^∞ θ h(θ(y_1 − λ_i)) Ḡ^{n}(θ(y_1 − λ_j)) dM(θ),   λ_2 ≤ y_1.

Similarly, the marginal density of Y_2 is given by

f(y_2) = Σ_{(i,j) ∈ {(1,2),(2,1)}} ∫_0^∞ θ h(θ(y_2 − λ_i)) [1 − Ḡ^{n}(θ(y_2 − λ_j))] dM(θ),   λ_2 ≤ y_2.


Using the above marginals and the fact that E(Y_1) + E(Y_2) = E(X_{1(1)}) + E(X_{2(1)}), we have

E(Y_i | θ) = λ_i + γ(ω) + (−1)^i δ(λ_1, λ_2 | θ),   (4.3)

where δ(λ_1, λ_2 | θ) is given by

δ(λ_1, λ_2 | θ) = −∫_{λ_2}^∞ z θ h(θ(z − λ_2)) Ḡ^{n}(θ(z − λ_1)) dz + ∫_{λ_2}^∞ z θ h(θ(z − λ_1)) [1 − Ḡ^{n}(θ(z − λ_2))] dz.

From (4.3), we have

MSE(λ̂_i^* | θ) = MSE(λ̂_i | θ) − γ^2(ω) + 2(−1)^{i−1} γ(ω) δ(λ_1, λ_2 | θ).   (4.4)

Theorem 4.2. Let λ̂ = (λ̂_1, λ̂_2)′, λ̂^* = (λ̂_1^*, λ̂_2^*)′, and λ = (λ_1, λ_2)′, where λ̂_i and λ̂_i^* are given in (4.1) and (4.2). Then, if we define the generalized MSE for estimation of the vector λ as GMSE(λ̂^*) = E(λ̂^* − λ)′(λ̂^* − λ), we have

GMSE(λ̂^*) ≤ GMSE(λ̂)   for all λ_1, λ_2.

Proof. For a given Θ = θ, from (4.4) we have

GMSE(λ̂^* | θ) = MSE(λ̂_1 | θ) + MSE(λ̂_2 | θ) − 2γ^2(ω) = GMSE(λ̂ | θ) − 2γ^2(ω).

The result follows directly.

Thus we see that the vector estimator derived from (4.2) has smaller GMSE than the vector of estimators derived from (4.1). Unfortunately, establishing the marginal dominance in terms of MSE is not as straightforward. From (4.4) we see that MSE(λ̂_i^*) ≤ MSE(λ̂_i) iff γ^2(ω) + 2(−1)^i γ(ω) δ(λ_1, λ_2 | θ) ≥ 0. This implies that for MSE(λ̂_i^*) ≤ MSE(λ̂_i) to hold for i = 1, 2 we must have −γ(ω)/2 ≤ δ(λ_1, λ_2 | θ) ≤ γ(ω)/2. However, establishing such a relationship is impossible without making some assumptions about g(·) given in (2.1). For example, Carpenter and Hebert (1997) establish such a relationship when g(z) = e^{−z}, i.e., the exponential case. A similar statement can be made about comparing these estimators with respect to Pitman's measure of closeness. For example, Quan et al. (1996) show that in the gamma-exponential mixture case neither λ̂_1^* nor λ̂_1 is uniformly Pitman closer to λ_1 than the other, but λ̂_2^* is Pitman closer to λ_2 than λ̂_2.

5. The bootstrap bias corrected estimator

Given the possible complexity of the conditional distribution, along with the further complexity of the mixture model, it is unlikely that all of the parameters in ω will be known. Hence, for practical reasons, an alternative to the bias corrected estimator given in (3.4) must be developed. A reasonable approach is to use the bootstrap estimate of


bias(X_{i(1)}) as a substitute for γ(ω) given in (3.3). Denoting the bootstrap estimate of the bias by γ̂(ω), the bootstrap bias corrected estimator that we consider here is defined to be

λ̂_i^{boot} = Y_i − γ̂(ω).   (5.1)

Algorithm for Bootstrap Bias Corrected Estimator

Step 1. Collect a random sample of size n, X_{i1}, ..., X_{in}, from (2.2), and find the minimum order statistic X_{i(1)}, i = 1, 2. Then calculate Y_1 = λ_1(X_{1(1)}, X_{2(1)}) and Y_2 = λ_2(X_{1(1)}, X_{2(1)}).
Step 2. By sampling with replacement from X_{11}, ..., X_{1n} and X_{21}, ..., X_{2n}, collect bootstrap samples X*_{11}, ..., X*_{1n} and X*_{21}, ..., X*_{2n} and calculate Y*_1 and Y*_2 as in Step 1.
Step 3. Repeat Step 2 independently a total of B times.
Step 4. Using the B bootstrap samples, calculate the bootstrap estimate of γ(ω) using a bootstrap bias estimation technique from Efron and Tibshirani (1993).
Step 5. Calculate the bootstrap bias corrected estimator of λ_i given in (5.1).
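A minimal implementation of the above algorithm might look as follows (an illustrative sketch, not code from the paper). It uses the standard Efron–Tibshirani bias estimate, the mean of the bootstrap replicates minus the original statistic; since Step 4 calls for a single estimate of γ(ω), the two component-wise bias estimates are averaged here, which is one reasonable reading of that step rather than a detail spelled out in the paper.

import numpy as np

def bootstrap_bias_corrected(sample1, sample2, B=1500, seed=None):
    """Bootstrap bias corrected estimators of the location extremes (Steps 1-5)."""
    rng = np.random.default_rng(seed)
    x1 = np.asarray(sample1, dtype=float)
    x2 = np.asarray(sample2, dtype=float)

    # Step 1: sample minima and the ordered extremes Y1 <= Y2.
    y1, y2 = sorted((x1.min(), x2.min()))

    # Steps 2-3: B bootstrap replicates of (Y1, Y2).
    y1_star = np.empty(B)
    y2_star = np.empty(B)
    for b in range(B):
        m1 = rng.choice(x1, size=x1.size, replace=True).min()
        m2 = rng.choice(x2, size=x2.size, replace=True).min()
        y1_star[b], y2_star[b] = sorted((m1, m2))

    # Step 4: single bootstrap bias estimate (assumption: average over components).
    gamma_hat = 0.5 * ((y1_star.mean() - y1) + (y2_star.mean() - y2))

    # Step 5: bias corrected estimators of lambda_1 and lambda_2 as in (5.1).
    return y1 - gamma_hat, y2 - gamma_hat, gamma_hat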

Example. The following data set is from Lawless (1982, p. 504). The observations represent the survival times of two groups of young mice irradiated at 300 R. The survival times are recorded upon death caused by reticulum cell sarcoma. The first group, the control, consists of 39 mice that lived in a conventional laboratory environment, and the second group, germ-free, consists of 38 mice that lived in a germ-free laboratory environment.

Control group: 40, 42, 51, 62, 163, 179, 206, 222, 228, 252, 259, 282, 324, 333, 341, 366, 385, 407, 420, 431, 441, 461, 462, 482, 517, 517, 524, 564, 567, 586, 619, 620, 621, 622, 647, 651, 686, 761, 763.

Germ-free group: 136, 246, 255, 376, 421, 565, 616, 617, 652, 655, 658, 660, 662, 675, 681, 734, 736, 737, 757, 769, 777, 800, 806, 825, 855, 857, 864, 868, 870, 870, 873, 882, 895, 910, 934, 942, 1015, 1019.

From the above data, we have λ̂_1 = Y_1 = 40 and λ̂_2 = Y_2 = 136. Using B = 1500 bootstrap samples, we get the bootstrap estimate of bias γ̂(ω) = 26.1023. Using this estimate of bias, we get the bootstrap bias corrected estimators λ̂_1^{boot} = 13.8977 and λ̂_2^{boot} = 109.898.
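Using the sketch given after the algorithm above (again illustrative, with a hypothetical function name), this calculation can be reproduced approximately; the bias estimate depends on the bootstrap resampling, so with B = 1500 it will be near, but generally not equal to, the value 26.1023 reported above.

control = [40, 42, 51, 62, 163, 179, 206, 222, 228, 252, 259, 282, 324, 333,
           341, 366, 385, 407, 420, 431, 441, 461, 462, 482, 517, 517, 524,
           564, 567, 586, 619, 620, 621, 622, 647, 651, 686, 761, 763]
germ_free = [136, 246, 255, 376, 421, 565, 616, 617, 652, 655, 658, 660, 662,
             675, 681, 734, 736, 737, 757, 769, 777, 800, 806, 825, 855, 857,
             864, 868, 870, 870, 873, 882, 895, 910, 934, 942, 1015, 1019]

lam1_boot, lam2_boot, gamma_hat = bootstrap_bias_corrected(control, germ_free, B=1500, seed=2002)
print(gamma_hat, lam1_boot, lam2_boot)  # Y1 = 40 and Y2 = 136 before correction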

The above example only serves as a demonstration of how to calculate the bootstrap bias corrected estimators using a given data set. In previous sections, arguments were made as to the advantages of the estimators λ̂_1^* and λ̂_2^* over the uncorrected estimators λ̂_1 and λ̂_2 when the bias is known. The following simulation study provides evidence that the bootstrap bias corrected estimators, given in (5.1), also offer advantages in terms of MSE, absolute bias, and Pitman's closeness over λ̂_1 and λ̂_2 when the bias is unknown.


Table 1
Empirical comparisons of the uncorrected estimator λ̂_i and the bootstrap bias corrected estimator λ̂_i^{boot}

(a) Estimator comparisons for λ_1

Conditional       Mixing        |bias(λ̂_1)|  |bias(λ̂_1^boot)|  %imp  MSE(λ̂_1)  MSE(λ̂_1^boot)  %imp  P(λ̂_1^boot, λ̂_1)
Weibull, b = 0.8  G(3, 1)       0.00417      0.00089           78.7  0.00008    0.00006         30.7  0.643
Weibull, b = 0.8  IG(1, 1)      0.01683      0.00357           78.8  0.00136    0.00091         32.8  0.647
Weibull, b = 0.8  D(0.8, 1, 2)  0.00775      0.00187           75.9  0.00017    0.00011         33.0  0.651
Weibull, b = 1    G(3, 1)       0.00996      0.00412           58.6  0.00038    0.00024         36.0  0.761
Weibull, b = 1    IG(1, 1)      0.04008      0.01676           58.2  0.00545    0.00332         39.0  0.773
Weibull, b = 1    D(0.8, 1, 2)  0.01826      0.00775           57.6  0.00070    0.00044         36.9  0.760
Weibull, b = 2    G(3, 1)       0.06192      0.04610           25.6  0.00954    0.00659         30.9  0.964
Weibull, b = 2    IG(1, 1)      0.25240      0.18691           26.0  0.13475    0.09069         32.7  0.963
Weibull, b = 2    D(0.8, 1, 2)  0.11291      0.08402           25.6  0.01704    0.01168         31.5  0.968

(b) Estimator comparisons for λ_2

Conditional       Mixing        |bias(λ̂_2)|  |bias(λ̂_2^boot)|  %imp  MSE(λ̂_2)  MSE(λ̂_2^boot)  %imp  P(λ̂_2^boot, λ̂_2)
Weibull, b = 0.8  G(3, 1)       0.00424      0.00095           77.5  0.00008    0.00006         33.6  0.642
Weibull, b = 0.8  IG(1, 1)      0.01690      0.00364           78.5  0.00124    0.00080         35.5  0.653
Weibull, b = 0.8  D(0.8, 1, 2)  0.00770      0.00183           76.3  0.00016    0.00011         33.3  0.647
Weibull, b = 1    G(3, 1)       0.01023      0.00439           57.1  0.00040    0.00026         35.6  0.761
Weibull, b = 1    IG(1, 1)      0.04011      0.01679           58.1  0.00570    0.00360         36.8  0.759
Weibull, b = 1    D(0.8, 1, 2)  0.01822      0.00770           57.7  0.00069    0.00044         37.1  0.771
Weibull, b = 2    G(3, 1)       0.06279      0.04696           25.2  0.00982    0.00676         31.1  0.965
Weibull, b = 2    IG(1, 1)      0.25510      0.18961           25.7  0.14386    0.09823         31.7  0.966
Weibull, b = 2    D(0.8, 1, 2)  0.11320      0.08431           25.5  0.01719    0.01172         31.5  0.965

Simulation Study. In this simulation study, 10,000 samples of size 50 were generated. These samples were used to get the empirical bias, MSE, and Pitman's measure of closeness. In order to simulate the mixture, at each of the 10,000 iterations a unique value for the scale parameter is generated from the respective mixing distribution.

Various cases of the three-parameter Weibull distribution,

f(x | θ) = b θ^b (x − μ)^{b−1} exp{−[θ(x − μ)]^b},

are used as the conditional distributions (with λ_1 = 0 and λ_2 = 1). The results for λ_1 and λ_2 are given in Table 1(a) and (b), respectively, using the discrete, gamma, and inverse-Gaussian mixing distributions described below:

• Discrete(p): dM(θ) = p if θ = 1 and dM(θ) = 1 − p if θ = 2, for 0 ≤ p ≤ 1.
• Gamma(α, β): dM(θ) = Γ(α)^{−1} β^α θ^{α−1} exp{−βθ} dθ, α, β > 0.
• Inverse-Gaussian(α, φ): dM(θ) = √(α/(2πθ^3)) exp{−α(θ − φ)^2/(2φ^2 θ)} dθ, θ, α, φ > 0.

Following Whitmore and Lee (1991), the inverse-Gaussian pseudorandom variates are generated using an algorithm given by Michael et al. (1976).
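A condensed version of this study for λ_1 (an illustrative sketch with hypothetical helper names, not the authors' code) is outlined below for the Weibull conditional with Gamma(3, 1) mixing; each iteration draws a common θ, generates the two samples of size 50, and accumulates the empirical bias, MSE, and Pitman comparison of λ̂_1 and λ̂_1^boot.

import numpy as np

def simulate_lambda1(b=1.0, mu=(0.0, 1.0), n=50, iters=10_000, B=200, seed=3):
    """Empirical |bias|, MSE, and Pitman comparison of the uncorrected and
    bootstrap bias corrected estimators of lambda_1 = min(mu_1, mu_2) under a
    three-parameter Weibull(b) conditional and Gamma(3, 1) mixing."""
    rng = np.random.default_rng(seed)
    lam1 = min(mu)
    err_plain, err_boot = np.empty(iters), np.empty(iters)
    for it in range(iters):
        theta = rng.gamma(3.0, 1.0)                    # common random scale
        x1 = mu[0] + rng.weibull(b, n) / theta         # conditional Weibull samples
        x2 = mu[1] + rng.weibull(b, n) / theta
        y1 = min(x1.min(), x2.min())
        # bootstrap bias estimate for Y1 (B kept small here to limit run time)
        y1_star = np.array([min(rng.choice(x1, n).min(), rng.choice(x2, n).min())
                            for _ in range(B)])
        y1_boot = y1 - (y1_star.mean() - y1)
        err_plain[it], err_boot[it] = y1 - lam1, y1_boot - lam1
    return {"abs_bias": (abs(err_plain.mean()), abs(err_boot.mean())),
            "mse": ((err_plain ** 2).mean(), (err_boot ** 2).mean()),
            "pitman": np.mean(np.abs(err_boot) < np.abs(err_plain))}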


From Table 1(a) and (b), we see that the bootstrap bias corrected estimator λ̂_i^{boot} gives significant improvement over the uncorrected estimator λ̂_i in terms of absolute bias, MSE, and Pitman's closeness. The MSE improvement ranges from 30% to 39% for all of the assumed conditional and mixing distributions. The amount of bias improvement, which ranges between 25% and 78%, depends heavily on the shape of the conditional distribution; i.e., the bias improvement decreases as b goes from 0.8 to 2. Finally, with respect to Pitman's measure of closeness, the bootstrap bias corrected estimator is the closer of the two between 64% and 96% of the time.

6. Uncited References

Bhattacharya, 1966; Carpenter et al., 1992.

Acknowledgements

The author wishes to thank the three referees for their helpful comments and suggestions and Professors Satya Mishra and Dulal Bauhmik for the interesting discussions.

References

Bhattacharya, S.K., 1966. A modified Bessel function model in life testing. Metrika 11, 133.
Bhattacharya, S.K., Kumar, S., 1986. E–IG model in life testing. Calcutta Statist. Assoc. Bull. 35, 85–90.
Carpenter, M., Hebert, J.L., 1994. Estimation of the minimum and maximum location parameters for two gamma-exponential mixtures. Comm. Statist. Theory Methods 23 (8), 2367–2377.
Carpenter, M., Hebert, J.L., 1997. Estimating guaranteed lifetimes of systems in a random environment. Comm. Statist. Theory Methods 26 (2), 309–316.
Carpenter, M., Pal, N., Kushary, D., 1992. Estimation of the smaller and larger of two exponential location parameters. Comm. Statist. Theory Methods 21 (4), 881–895.
Chen, H.J., Dudewicz, E.J., 1973. Estimation of ordered parameters and subset selection. Technical Report, Division of Statistics, Ohio State University.
Dudewicz, E.J., Mishra, S.N., 1988. Modern Mathematical Statistics. Wiley, New York.
Efron, B., Tibshirani, R.J., 1993. An Introduction to the Bootstrap. Chapman & Hall, London.
Harris, C.M., Singpurwalla, N.N., 1968. Life distributions derived from stochastic hazard rates. IEEE Trans. Reliability R-17, 70–79.
Keating, J.P., Mason, R.L., Sen, P.K., 1993. Pitman's Measure of Closeness: A Comparison of Statistical Estimators. SIAM, Philadelphia, PA.
Lawless, J.F., 1982. Statistical Models and Methods for Lifetime Data. Wiley, New York.
Lee, M.T., Gross, A.J., 1991. Lifetime distributions under unknown environment. J. Statist. Plann. Inference 29, 137–143.
McNolty, F., 1964. Reliability density functions when the failure rate is randomly distributed. Sankhyā A 26, 287–292.
McNolty, F., Doyle, J., Hansen, E., 1980. Properties of the mixed exponential failure process. Technometrics 22, 555–565.
Michael, J.R., Schucany, W.R., Haas, R.W., 1976. Generating random variates using transformations with multiple roots. Amer. Statist. 30, 88–90.
Nayak, T.K., 1987. Multivariate Lomax distribution: properties and usefulness in reliability theory. J. Appl. Probab. 24, 170–177.


Quan, R., Hebert, J.L., Carpenter, M., 1996. Estimating the minimum and maximum location parameters for two gamma-exponential scale mixtures in Pitman measure. Proceedings of the Joint Statistical Meetings, Chicago.
Royden, H.L., 1988. Real Analysis. Macmillan Publishing Company, New York.
Sen, P.K., Singer, J.M., 1993. Large Sample Methods in Statistics. Chapman & Hall, New York.
Whitmore, G.A., Lee, M.L.T., 1991. A multivariate survival distribution generated by an inverse Gaussian mixture of exponentials. Technometrics 33 (1), 39–50.