Reliability estimation in Lindley distribution with progressively type II right censored sample

Available online at www.sciencedirect.com

Mathematics and Computers in Simulation 82 (2011) 281–294

Original Articles

Reliability estimation in Lindley distribution with progressively type II right censored sample

Hare Krishna*, Kapil Kumar
Department of Statistics, Chaudhary Charan Singh University, Meerut, India

Received 21 November 2010; received in revised form 23 June 2011; accepted 14 July 2011; available online 23 July 2011

Abstract

In this paper we discuss the one-parameter Lindley distribution and suggest that it may serve as a useful reliability model. The model properties and reliability measures are derived and studied in detail. For the estimation of the parameter and other reliability characteristics, maximum likelihood and Bayes approaches are used. Interval estimation and the coverage probability for the parameter are obtained based on maximum likelihood estimation. A Monte Carlo simulation study is conducted to compare the performance of the various estimates developed. In view of cost and time constraints, progressively type II censored sample data are used in estimation. A real data example is given for illustration.

© 2011 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Lindley distribution; Reliability and failure rate functions; Progressive type II censoring; Maximum likelihood estimation; Bayes estimates

1. Introduction

In lifetime theory, we study the lifetime of a system or an item. The word system (or item or component) is defined as an arbitrary device performing its intended task. A system can be an electrical, electronic, mechanical, or chemical device. Apart from man-made systems, there are many natural systems, such as human beings, plants and animals, that are included in lifetime analysis.

Applications of lifetime distribution methodology range from investigations into the endurance of manufactured items to research involving human diseases. Some methods of dealing with lifetime data are quite old, but many important developments are relatively recent and have to be searched out not only in statistics journals, but also in the literature of various other disciplines. Lifetime distribution methodology is most frequently used in the fields of engineering and biomedical sciences.

Numerous parametric models have been suggested and used in the analysis of lifetime data. Most of them are positively skewed statistical distributions with range restricted to the positive part of the real line. Among others, the exponential, gamma, Weibull and lognormal are the most frequently used lifetime models. The Pareto, Burr and half-normal distributions have also been used to model lifetime data. For more detail on these distributions, see Lawless [10], Mann et al. [13], and Sinha [17].

* Corresponding author. Tel.: +91 9897142947. E-mail addresses: [email protected], [email protected] (H. Krishna).

0378-4754/$36.00 © 2011 IMACS. Published by Elsevier B.V. All rights reserved. doi:10.1016/j.matcom.2011.07.005



A lifetime distribution represents an attempt to describe, mathematically, the length of the life of a system or device. With the advancement in technology, new and complex types of consumable items are being produced every day. To model the failure data of such types of new devices we need more plausible statistical distributions to be considered as lifetime models.

The Lindley distribution was first proposed by Lindley [11] in connection with fiducial distributions and Bayes' theorem. This distribution was also compounded with the Poisson distribution by Lindley ([12], p. 74) and later by Sankaran [15]. In a recent study, Ghitany et al. [8] have discussed the Lindley distribution in detail, with its applications and a real data example.

In reliability studies, a sample of n items is put on test and the experiment is terminated when all of them fail. This procedure may take a long time when the lifetime distribution of the items has a thick tail. Moreover, if the items are expensive, such as medical equipment, it is costly to gather the whole sample information. There are many situations where experimental units are lost or removed from the test before complete failure. For example, individuals in a clinical trial may drop out of the study, or the study may have to be terminated early for lack of funds. The test units may accidentally break. In other scenarios, the experiment may have to be terminated in order to free up testing facilities for other purposes.

In view of the above, censoring is used in life testing to save the time and cost of testing units. The removal of units in a test may be unintentional or pre-planned. Data obtained from such experiments are called censored samples. Life data are collected for estimating the parameters and reliability functions. When complete sample information is not available, we use the information that "the censored experimental units did not fail up to a specific time" in the estimation procedure.

There are many types of censoring schemes used in lifetime analysis, conventional type I and type II censoring being the most popular. However, these schemes do not allow removal of units before the termination of the experiment. Therefore, we consider in this paper a more general kind of censoring scheme called the progressive type II censoring scheme. Progressive type II censoring was introduced by Cohen [7]. Hofmann et al. [9] showed that in many situations progressive type II censoring schemes significantly improve upon conventional type II censoring. For a comprehensive review of progressive censoring, see Balakrishnan and Aggarwala [2]. Some recent studies can be found in Balakrishnan and Sandhu [4], Balakrishnan and Hossain [3], Pradhan and Kundu [14], and Basak et al. [6].

In this paper, we study the Lindley distribution as a lifetime model. In Section 2 we discuss the model, some of its distributional properties and its reliability characteristics. Data collection of a progressively type II right censored sample is considered in Section 3. Section 4 deals with maximum likelihood estimation (MLE) of the parameter and the reliability characteristics, the observed Fisher's information, interval estimation of the parameter based on a normal approximation of the distribution of the MLE, and the coverage probability. In Section 5 we give Bayes estimation of the parameter and the reliability characteristics. Section 6 deals with a simulation study to compare the various estimates, and Section 7 summarizes its conclusions. Finally, a real data example is discussed in Section 8.

2. The model

The probability density function (pdf) of Lindley distribution LD(θ) is given by

f(x) = \frac{\theta^2}{1+\theta}(1+x)\,e^{-\theta x}; \quad x > 0, \ \theta > 0 \qquad (2.1)

The corresponding cumulative distribution function (cdf) is given by

F(x) = 1 - \frac{1 + \theta + \theta x}{1+\theta}\,e^{-\theta x}; \quad x > 0, \ \theta > 0 \qquad (2.2)

Note that the Lindley distribution is a mixture of the exponential(θ) and gamma(2, θ) distributions, with pdfs θe^{−θx} and θ²xe^{−θx}, respectively; the mixing proportions are θ/(1 + θ) and 1/(1 + θ), respectively.
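Since all computations in this paper are carried out in R (Section 6), a minimal R sketch of the density (2.1), the cdf (2.2) and the mixture representation may be helpful; the function names dlindley and plindley below are illustrative only and do not come from any package.

```r
## Minimal sketch of the LD(theta) pdf (2.1) and cdf (2.2).
dlindley <- function(x, theta) theta^2 / (1 + theta) * (1 + x) * exp(-theta * x)
plindley <- function(x, theta) 1 - (1 + theta + theta * x) / (1 + theta) * exp(-theta * x)

## Mixture representation: theta/(1+theta) * Exp(theta) + 1/(1+theta) * Gamma(2, theta)
dlindley_mix <- function(x, theta)
  theta / (1 + theta) * dexp(x, rate = theta) +
  1 / (1 + theta) * dgamma(x, shape = 2, rate = theta)

## Quick check that the two forms agree, e.g. at theta = 1:
all.equal(dlindley(0:5, 1), dlindley_mix(0:5, 1))  # TRUE
```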

2.1. Some distributional properties

Here, we discuss some important distributional properties of the Lindley distribution.


Table 1
Mean, median, mode and variance for different values of θ.

θ         0.1      0.2     0.5    0.75   1      1.75   2      5      10     20
Mean      19.091   9.167   3.333  2.095  1.5    0.779  0.667  0.233  0.109  0.052
Median    15.858   7.532   2.654  1.631  1.146  0.574  0.487  0.164  0.076  0.036
Mode      9        4       1      0.333  0      0      0      0      0      0
Variance  199.174  49.306  7.556  3.229  1.75   0.521  0.389  0.052  0.012  0.003

(i) The rth raw moment is

\mu_r' = E(X^r) = \frac{r!\,(\theta + r + 1)}{\theta^r(\theta + 1)}; \quad r = 1, 2, \ldots

so that

\mu_1' = \text{mean} = \frac{\theta + 2}{\theta(\theta + 1)} \quad \text{and} \quad \text{variance} = \frac{\theta^2 + 4\theta + 2}{\theta^2(\theta + 1)^2}

(ii) The median (Md) of the Lindley distribution is the solution of the non-linear equation F(Md) = 0.5, i.e.

0.5 - \frac{(1 + \theta + \theta M_d)\,e^{-\theta M_d}}{1 + \theta} = 0

(iii) The mode is the value of x at which f(x) is maximum. By differentiation, we obtain the mode (Mo) as

M_o = \begin{cases} \dfrac{1 - \theta}{\theta}, & \text{if } 0 < \theta < 1; \\ 0, & \text{otherwise.} \end{cases}

According to Theorem 1 of Ghitany et al. [8] and from Table 1, for the Lindley distribution, mode < median < mean. The numerical values of the mean, median, mode and variance are given in Table 1. Some other distributional properties of the Lindley distribution, such as central moments, cumulants, mean deviation, coefficients of skewness and kurtosis, mean residual lifetime, etc., can be seen in Ghitany et al. [8].
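The entries of Table 1 can be reproduced with a short R sketch along the following lines; the median is obtained numerically from F(Md) = 0.5 with uniroot, and the helper name lindley_summary is ours, for illustration only.

```r
## Sketch: mean, median, mode and variance of LD(theta), as tabulated in Table 1.
lindley_summary <- function(theta) {
  m    <- (theta + 2) / (theta * (theta + 1))
  v    <- (theta^2 + 4 * theta + 2) / (theta^2 * (theta + 1)^2)
  mode <- if (theta < 1) (1 - theta) / theta else 0
  ## Median: root of F(Md) - 0.5 = 0, found numerically.
  med  <- uniroot(function(x) 1 - (1 + theta + theta * x) / (1 + theta) * exp(-theta * x) - 0.5,
                  interval = c(1e-8, 100 / theta))$root
  c(mean = m, median = med, mode = mode, variance = v)
}

round(lindley_summary(0.5), 3)  # mean 3.333, median 2.654, mode 1, variance 7.556
```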

2.2. Reliability characteristics

In this section, we study some reliability characteristics of Lindley distribution.

(i) The reliability function of LD(θ) is given by,

R(t) = P[X > t] = \frac{1 + \theta + \theta t}{1 + \theta}\,e^{-\theta t}; \quad t > 0, \ \theta > 0 \qquad (2.3)

Fig. 1 shows the reliability function of LD(θ) for different values of θ, namely 0.5, 1.0, 1.5, 2.0, and 3.0.

(ii) The mean time to system failure (MTSF) is

\mathrm{MTSF} = E[X] = \frac{\theta + 2}{\theta(\theta + 1)}; \quad \theta > 0 \qquad (2.4)


Fig. 1. The reliability function of LD(θ).

(iii) The failure rate function of the LD(θ) is given by

h(t) = \frac{f(t)}{R(t)} = \frac{\theta^2(1 + t)}{1 + \theta + \theta t}; \quad t > 0, \ \theta > 0 \qquad (2.5)

Fig. 2 shows the failure rate function of LD(θ) for different values of θ, namely 0.5, 1.0, 1.5, 2.0, and 3.0.

Theorem 1. h(t) is increasing failure rate (IFR).

Proof. By Lemma 5.9 of Barlow and Proschan ([5], p. 77), it is sufficient to show that the log of the pdf in (2.1) is concave. For LD(θ), log(f(x)) is defined and twice differentiable in (0, ∞). The second derivative of log(f(x)) w.r.t. x is −1/(1 + x)², which is negative for all x; hence log(f(x)) is concave in x, and the proof follows.

Fig. 2. The failure rate function of LD(θ).



Since the Lindley distribution belongs to the IFR class, the following chain of implications holds for this distribution (see Barlow and Proschan [5], p. 159):

IFR ⇒ IFRA ⇒ NBU ⇒ NBUE

where IFR, IFRA, NBU and NBUE denote increasing failure rate, increasing failure rate average, new better than used, and new better than used in expectation, respectively.
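As an informal numerical check of Theorem 1 (not a substitute for the proof), one may verify in R that h(t) in (2.5) is nondecreasing on a fine grid for several values of θ:

```r
## Failure rate (2.5) of LD(theta); check monotonicity on a grid of t values.
h_lindley <- function(t, theta) theta^2 * (1 + t) / (1 + theta + theta * t)

t <- seq(0, 20, by = 0.01)
sapply(c(0.5, 1, 2, 5), function(theta) all(diff(h_lindley(t, theta)) >= 0))  # all TRUE
```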

3. Data collection with progressive type II right censored sample

Suppose n identical items are put on a life testing experiment and the progressive censoring scheme R̃ = (R1, R2, . . ., Rm) is pre-fixed, such that after the first failure R1 surviving items are removed from the remaining live (n − 1) items, after the second failure R2 surviving items are removed from the remaining live (n − R1 − 2) items, and so on. This procedure is continued until the Rm remaining items are removed after the mth failure. It is clear that \sum_{i=1}^{m} R_i + m = n. Note that if R1 = R2 = · · · = Rm = 0 then the progressive censoring scheme reduces to the complete sampling scheme, and if R1 = R2 = · · · = Rm−1 = 0 and Rm = n − m then this scheme reduces to the conventional type II censoring scheme.

Let x1, x2, . . ., xm be a progressively type II right censored sample from LD(θ) with progressive censoring scheme R̃. On the basis of the progressively type II right censored sample, the likelihood function is given by

L(\tilde{x}, \theta) = A \prod_{i=1}^{m} f(x_i)\{1 - F(x_i)\}^{R_i} = A \prod_{i=1}^{m} \left[ \frac{\theta^2}{1+\theta}(1 + x_i)\,e^{-\theta x_i} \left\{ \frac{(1 + \theta + \theta x_i)\,e^{-\theta x_i}}{1 + \theta} \right\}^{R_i} \right] \qquad (3.1)

where A = n(n − R1 − 1)(n − R1 − R2 − 2) · · · (n − R1 − R2 − · · · − Rm−1 − m + 1).

Remark 1. For simulation studies, to generate a progressively type II right censored sample from the Lindley distribution we make use of the algorithm given in Balakrishnan and Sandhu [4], which involves the following steps:

1. Generate m independent and identically distributed (iid) random numbers (u1, u2, . . ., um) from the uniform distribution U(0, 1).

2. Set zi = −log(1 − ui), so that the zi's are iid standard exponential variates.

3. Given the censoring scheme R̃ = (R1, R2, . . ., Rm), set y1 = z1/n and, for i = 2, . . ., m,

y_i = y_{i-1} + \frac{z_i}{n - \sum_{j=1}^{i-1} R_j - i + 1}

Now (y1, y2, . . ., ym) is a progressively type II censored sample from the standard exponential distribution with censoring scheme R̃ = (R1, R2, . . ., Rm).

4. Set wi = 1 − exp(−yi), so that the wi's form a progressively type II right censored sample from U(0, 1).

5. Set xi = F^{-1}(wi), where F(·) is the cdf of LD(θ). The xi can be obtained by solving the non-linear equation 1 − wi − ((1 + θ + θxi)/(1 + θ))e^{−θxi} = 0. Now (x1, x2, . . ., xm) is a progressively type II censored sample from LD(θ) with censoring scheme R̃ = (R1, R2, . . ., Rm). A sketch of this algorithm in R is given below.


4. Maximum likelihood estimation

4.1. Estimation procedure

In this section we obtain the maximum likelihood estimates (MLEs) of the parameter and reliability characteristics of the Lindley distribution using a progressively type II censored sample.

The likelihood function (3.1) can be written as

L(\tilde{x}, \theta) = A\,\frac{\theta^{2m}}{(1+\theta)^m} \left\{ \prod_{i=1}^{m} (1 + x_i) \right\} e^{-\theta \sum_{i=1}^{m}(1+R_i)x_i} \prod_{i=1}^{m} \left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right)^{R_i} \qquad (4.1)

The log likelihood function becomes

\log L(\tilde{x}, \theta) = \log A + 2m\log\theta - m\log(1+\theta) + \sum_{i=1}^{m} \log(1 + x_i) - \theta \sum_{i=1}^{m} (1 + R_i)x_i + \sum_{i=1}^{m} R_i \log\left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right) \qquad (4.2)

The first derivative of Eq. (4.2) w.r.t. θ is given by

\frac{\partial \log L(\tilde{x}, \theta)}{\partial\theta} = \frac{2m}{\theta} - \frac{m}{1+\theta} - \sum_{i=1}^{m}(1+R_i)x_i + \sum_{i=1}^{m} \frac{R_i x_i}{(1+\theta)(1+\theta+\theta x_i)} \qquad (4.3)

The second derivative of the log likelihood function (4.2) w.r.t. θ is given by

\frac{\partial^2 \log L(\tilde{x}, \theta)}{\partial\theta^2} = -\frac{2m}{\theta^2} + \frac{m}{(1+\theta)^2} - \sum_{i=1}^{m} \frac{R_i x_i (2 + 2\theta + 2\theta x_i + x_i)}{(1+\theta)^2(1+\theta+\theta x_i)^2} \qquad (4.4)

4.2. Maximum likelihood estimation of θ and reliability characteristics

The MLE of θ is the solution of the log likelihood equation

\frac{\partial \log L(\tilde{x}, \theta)}{\partial\theta} = 0 \;\Rightarrow\; \frac{2m}{\theta} - \frac{m}{1+\theta} - \sum_{i=1}^{m}(1+R_i)x_i + \sum_{i=1}^{m} \frac{R_i x_i}{(1+\theta)(1+\theta+\theta x_i)} = 0 \qquad (4.5)

From Eq. (4.5), we see that the MLE of θ cannot be obtained directly, except for R1 = R2 = · · · = Rm = 0, i.e. the complete sample case, when the MLE of θ becomes

\hat{\theta} = \frac{-(\bar{x} - 1) + \sqrt{(\bar{x} - 1)^2 + 8\bar{x}}}{2\bar{x}}

which is the same as the moment estimator. Thus, for all other censoring schemes, the MLE of θ can be obtained by using some suitable numerical iterative procedure, such as the Newton–Raphson method, for solving Eq. (4.5) with the given values of (n, m, R̃, x̃). Once the MLE of θ is obtained, the MLEs of MTSF, R(t) and h(t) can be derived using the invariance property of MLEs as

\widehat{\mathrm{MTSF}} = \frac{\hat{\theta} + 2}{\hat{\theta}(\hat{\theta} + 1)}

\hat{R}(t) = \frac{1 + \hat{\theta} + \hat{\theta} t}{1 + \hat{\theta}}\,e^{-\hat{\theta} t}; \quad t > 0

\hat{h}(t) = \frac{\hat{\theta}^2 (1 + t)}{1 + \hat{\theta} + \hat{\theta} t}; \quad t > 0



4.3. Observed Fisher's information, confidence interval and coverage probability

The observed Fisher’s information for LD(θ) is given by

I_o(\hat{\theta}) = -\left.\frac{\partial^2 \log L(\tilde{x}, \theta)}{\partial\theta^2}\right|_{\theta = \hat{\theta}}

Also, the asymptotic variance of \hat{\theta} is given by

\mathrm{Var}_o(\hat{\theta}) = \frac{1}{I_o(\hat{\theta})}

Since LD(θ) belongs to the one parameter exponential family of distributions, the sampling distribution of (\hat{\theta} - \theta)/\sqrt{\mathrm{Var}_o(\hat{\theta})} can be approximated by a standard normal distribution. The large-sample 100(1 − α)% confidence interval for θ is given by

[\hat{\theta}_L, \hat{\theta}_U] = \hat{\theta} \pm z_{\alpha/2}\sqrt{\mathrm{Var}_o(\hat{\theta})}

Using simulation we can estimate the coverage probability

P\left[\left|\frac{\theta - \hat{\theta}}{\sqrt{\mathrm{Var}_o(\hat{\theta})}}\right| \le z_{\alpha/2}\right]

Here, z_p is such that p = \int_{z_p}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}\,dz.
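A compact R sketch of these quantities, assuming the MLE theta_hat has already been computed as in Section 4.2, might look as follows (lindley_ci is an illustrative name):

```r
## Sketch: observed Fisher information (negative of (4.4) at theta_hat),
## asymptotic variance, and the normal-approximation confidence interval.
lindley_ci <- function(x, R, theta_hat, level = 0.95) {
  m   <- length(x)
  I_o <- 2 * m / theta_hat^2 - m / (1 + theta_hat)^2 +
    sum(R * x * (2 + 2 * theta_hat + 2 * theta_hat * x + x) /
          ((1 + theta_hat)^2 * (1 + theta_hat + theta_hat * x)^2))
  se <- sqrt(1 / I_o)
  z  <- qnorm(1 - (1 - level) / 2)
  c(lower = theta_hat - z * se, upper = theta_hat + z * se)
}

## The coverage probability is then estimated by the proportion of simulated
## samples whose interval contains the true theta.
```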

5. Bayes estimation

In this section we discuss the Bayes estimation of the parameter and reliability characteristics of LD(θ) using a progressively censored sample. Suppose that the unknown parameter θ is the realization of a random variable Θ which has a gamma prior with density of the form

g(\theta) = \frac{\alpha^\beta}{\Gamma(\beta)}\,\theta^{\beta - 1} e^{-\alpha\theta}; \quad \theta > 0, \ \alpha > 0, \ \beta > 0

Observing the progressively type II right censored sample data and using the likelihood function given in Eq. (4.1), the posterior distribution of θ is given by

\pi(\theta|\tilde{x}) = \frac{L(\tilde{x}, \theta)\,g(\theta)}{\int_\theta L(\tilde{x}, \theta)\,g(\theta)\,d\theta} = K_1^{-1}\,\frac{\theta^{m+\beta-1}}{(1 + 1/\theta)^m} \left[ e^{-\theta\left\{\alpha + \sum_{i=1}^{m}(1+R_i)x_i\right\}} \prod_{i=1}^{m} \left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right)^{R_i} \right]

where

K_1 = \int_0^\infty \frac{\theta^{m+\beta-1}}{(1 + 1/\theta)^m} \left[ e^{-\theta\left\{\alpha + \sum_{i=1}^{m}(1+R_i)x_i\right\}} \prod_{i=1}^{m} \left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right)^{R_i} \right] d\theta

Here, we see that K1 is not available in closed form, but its value can be evaluated numerically for the given values of (α, β, n, m, R̃, x̃). The squared error loss function is appropriate when decisions become gradually more damaging for large errors. Under the squared error loss function L(θ*, θ) = (θ* − θ)², where θ* is an estimate of θ, the Bayes estimate of θ is the mean of the posterior distribution and is given by

\theta^* = E(\theta|\tilde{x}) = \int_\theta \theta\,\pi(\theta|\tilde{x})\,d\theta = K_1^{-1} \int_0^\infty \frac{\theta^{m+\beta}}{(1 + 1/\theta)^m} \left[ e^{-\theta\left\{\alpha + \sum_{i=1}^{m}(1+R_i)x_i\right\}} \prod_{i=1}^{m} \left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right)^{R_i} \right] d\theta


Table 2
MLEs of θ, MTSF, R(t) and h(t); N = 10,000, θ = 1.0, t = 0.80, MTSF = 1.50, R(t) = 0.6291, h(t) = 0.6429.

n m CS EV(θ) MSE(θ) CP-95 EV(MTSF) MSE(MTSF) EV(R(t)) MSE(R(t)) EV(h(t)) MSE(h(t))
20 8 (12, 0*7) 1.0760 0.1094 0.954 1.5086 0.2031 0.6073 0.0134 0.7159 0.0897
20 8 (4*3, 0*5) 1.0980 0.1065 0.967 1.4679 0.1935 0.5981 0.0135 0.7350 0.0869
20 8 (0*5, 4*3) 1.0768 0.1022 0.953 1.4981 0.1883 0.6064 0.0126 0.7161 0.0838
20 8 (0*7, 12) 1.0893 0.1194 0.956 1.4895 0.2002 0.6026 0.0139 0.7277 0.0988
20 10 (10, 0*9) 1.0573 0.0770 0.951 1.5134 0.1755 0.6122 0.0106 0.6980 0.0617
20 10 (5*2, 0*8) 1.0705 0.0819 0.961 1.4922 0.1689 0.6072 0.0111 0.7098 0.0661
20 10 (0*8, 5*2) 1.0637 0.0691 0.955 1.4922 0.1603 0.6088 0.0098 0.7032 0.0552
20 10 (0*9, 10) 1.0639 0.0805 0.945 1.5026 0.1711 0.6098 0.0108 0.0649 0.0649
20 16 (4, 0*15) 1.0546 0.0474 0.954 1.4775 0.1100 0.6107 0.0070 0.6938 0.0376
20 16 (2*2, 0*14) 1.0411 0.0477 0.952 1.5023 0.1130 0.6163 0.0070 0.6821 0.0378
20 16 (0*14, 2*2) 1.0412 0.0429 0.954 1.4955 0.1038 0.6158 0.0064 0.6819 0.0339
20 16 (0*15, 4) 1.0513 0.0436 0.964 1.4774 0.0984 0.6117 0.6907 0.6907 0.0346
20 20 (0*20) 1.0301 0.0344 0.952 1.5044 0.0905 0.6196 0.0053 0.6717 0.0269
30 12 (18, 0*11) 1.0469 0.0638 0.941 1.5179 0.1610 0.6153 0.0092 0.6882 0.0506
30 12 (6*3, 0*9) 1.0507 0.0585 0.967 1.5001 0.1337 0.6133 0.0085 0.6912 0.0465
30 12 (0*9, 6*3) 1.0489 0.0534 0.948 1.4963 0.1276 0.6135 0.0076 0.6893 0.0425
30 12 (0*11, 18) 1.0604 0.0616 0.952 1.4833 0.1282 0.6096 0.0086 0.6997 0.0494
30 15 (15, 0*14) 1.0436 0.0466 0.954 1.4973 0.1130 0.6152 0.0069 0.6842 0.0368
30 15 (5*3, 0*12) 1.0465 0.0488 0.952 1.4935 0.1123 0.6142 0.0071 0.6869 0.0387
30 15 (0*12, 5*3) 1.0423 0.0472 0.959 1.4964 0.1047 0.6157 0.0067 0.6830 0.0376
30 15 (0*14, 15) 1.0291 0.0372 0.962 1.5098 0.0957 0.6203 0.0057 0.6710 0.0292
30 24 (6, 0*23) 1.0222 0.0279 0.952 1.5083 0.0751 0.6223 0.0044 0.6643 0.0217
30 24 (2*3, 0*21) 1.0217 0.0258 0.948 1.5065 0.0721 0.6223 0.0041 0.6637 0.0200
30 24 (0*21, 2*3) 1.0268 0.0253 0.963 1.4953 0.0654 0.6202 0.0040 0.6681 0.0197
30 24 (0*23, 6) 1.0262 0.0281 0.943 1.5005 0.0716 0.6207 0.0044 0.6678 0.0220
30 30 (0*30) 1.0183 0.0214 0.951 1.5055 0.0611 0.6233 0.0034 0.6605 0.0165
50 20 (30, 0*19) 1.0301 0.0318 0.954 1.5001 0.0832 0.6194 0.0049 0.6715 0.0249
50 20 (5*6, 0*14) 1.0289 0.0341 0.953 1.5052 0.0874 0.6201 0.0052 0.6706 0.0267
50 20 (0*14, 5*6) 1.0248 0.0289 0.954 1.5044 0.0749 0.6213 0.0045 0.6666 0.0226
50 20 (0*19, 30) 1.0316 0.0272 0.957 1.4888 0.0679 0.6184 0.0042 0.6725 0.0213
50 25 (25, 0*24) 1.0238 0.0240 0.960 1.4993 0.0652 0.6213 0.0038 0.6655 0.0186
50 25 (5*5, 0*20) 1.0339 0.0290 0.947 1.4878 0.0730 0.6176 0.0044 0.6746 0.0227
50 25 (0*20, 5*5) 1.0240 0.0242 0.950 1.4987 0.0637 0.6212 0.0038 0.6656 0.0188
50 25 (0*24, 25) 1.0247 0.0210 0.969 1.4924 0.0558 0.6207 0.0033 0.6660 0.0163
50 40 (10, 0*39) 1.0165 0.0161 0.948 1.4992 0.0446 0.6236 0.0026 0.6585 0.0124
50 40 (5*2, 0*38) 1.0157 0.0156 0.949 1.5000 0.0440 0.6239 0.0025 0.6577 0.0120
50 40 (0*38, 5*2) 1.0179 0.0161 0.948 1.4966 0.0441 0.6231 0.0026 0.6597 0.0124
50 40 (0*39, 10) 1.0109 0.0154 0.944 1.5079 0.0433 0.6259 0.0025 0.6536 0.0119
50 50 (0*50) 1.0062 0.0120 0.949 1.5110 0.0359 0.6275 0.0020 0.6492 0.0092

Here, N = number of simulations, CS = censoring scheme, EV = expected value, MSE = mean square error, CP = coverage probability.

Similarly the Bayes estimates of the MTSF, R(t), and h(t), respectively are given by,

\mathrm{MTSF}^* = E(\mathrm{MTSF}|\tilde{x}) = \int_\theta \frac{\theta + 2}{\theta(\theta + 1)}\,\pi(\theta|\tilde{x})\,d\theta = K_1^{-1} \int_0^\infty \frac{(\theta + 2)\,\theta^{m+\beta-3}}{(1 + 1/\theta)^{m+1}} \left[ e^{-\theta\left\{\alpha + \sum_{i=1}^{m}(1+R_i)x_i\right\}} \prod_{i=1}^{m} \left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right)^{R_i} \right] d\theta


Table 3
Bayes estimates θ*, MTSF*, R*(t) and h*(t); N = 10,000, θ = 1.0, α = 2, β = 2, t = 0.80, MTSF = 1.50, R(t) = 0.6291, h(t) = 0.6429.

n m CS EV(θ*) ER(θ*) EV(MTSF*) ER(MTSF*) EV(R*(t)) ER(R*(t)) EV(h*(t)) ER(h*(t))
20 8 (12, 0*7) 1.0574 0.0705 1.6192 0.1798 0.6177 0.0088 0.6388 0.0026
20 8 (4*3, 0*5) 1.0768 0.0708 1.5816 0.1667 0.6100 0.0089 0.6432 0.0025
20 8 (0*5, 4*3) 1.0583 0.0682 1.6034 0.1668 0.6167 0.0085 0.6399 0.0025
20 8 (0*7, 12) 1.0677 0.0769 1.5949 0.1755 0.6137 0.0094 0.6413 0.0027
20 10 (10, 0*9) 1.0459 0.0572 1.6062 0.1591 0.6201 0.0077 0.6388 0.0023
20 10 (5*2, 0*8) 1.0570 0.0606 1.5880 0.1550 0.6160 0.0080 0.6410 0.0023
20 10 (0*8, 5*2) 1.0524 0.0531 1.5816 0.1449 0.6167 0.0073 0.6412 0.0022
20 10 (0*9, 10) 1.0513 0.0605 1.5912 0.1565 0.6178 0.0080 0.6403 0.0024
20 16 (4, 0*15) 1.0498 0.0406 1.5438 0.1104 0.6153 0.0058 0.6440 0.0017
20 16 (2*2, 0*14) 1.0365 0.0413 1.5705 0.1203 0.6208 0.0060 0.6407 0.0018
20 16 (0*14, 2*2) 1.0376 0.0375 1.5600 0.1104 0.6198 0.0055 0.6417 0.0017
20 16 (0*15, 4) 1.0467 0.0381 1.5441 0.1054 0.6162 0.0055 0.6437 0.0016
20 20 (0*20) 1.0292 0.0313 1.5584 0.1022 0.6221 0.0048 0.6411 0.0015
30 12 (18, 0*11) 1.0386 0.0512 1.5987 0.1528 0.6217 0.0072 0.6387 0.0022
30 12 (6*3, 0*9) 1.0422 0.0474 1.5845 0.1320 0.6199 0.0067 0.6401 0.0020
30 12 (0*9, 6*3) 1.0413 0.0437 1.5738 0.1256 0.6195 0.0061 0.6409 0.0019
30 12 (0*11, 18) 1.0511 0.0501 1.5634 0.1272 0.6162 0.0068 0.6424 0.0020
30 15 (15, 0*14) 1.0383 0.0401 1.5692 0.1190 0.6201 0.0058 0.6410 0.0018
30 15 (5*3, 0*12) 1.0411 0.0417 1.5648 0.1175 0.6191 0.0060 0.6416 0.0018
30 15 (0*12, 5*3) 1.0378 0.0403 1.5624 0.1109 0.6200 0.0057 0.6415 0.0017
30 15 (0*14, 15) 1.0265 0.0327 1.5746 0.1044 0.6239 0.0049 0.6396 0.0015
30 24 (6, 0*23) 1.0244 0.0255 1.5489 0.0863 0.6232 0.0039 0.6415 0.0013
30 24 (2*3, 0*21) 1.0252 0.0235 1.5428 0.0804 0.6226 0.0036 0.6420 0.0012
30 24 (0*21, 2*3) 1.0305 0.0227 1.5298 0.0720 0.6204 0.0035 0.6435 0.0011
30 24 (0*23, 6) 1.0299 0.0254 1.5350 0.0796 0.6208 0.0039 0.6430 0.0012
30 30 (0*30) 1.0245 0.0189 1.5276 0.0658 0.6221 0.0030 0.6431 0.0010
50 20 (30, 0*19) 1.0299 0.0288 1.5518 0.0931 0.6216 0.0043 0.6417 0.0014
50 20 (5*6, 0*14) 1.0284 0.0309 1.5572 0.0992 0.6224 0.0047 0.6411 0.0015
50 20 (0*14, 5*6) 1.0263 0.0261 1.5477 0.0844 0.6226 0.0040 0.6417 0.0013
50 20 (0*19, 30) 1.0335 0.0243 1.5293 0.0740 0.6194 0.0036 0.6437 0.0011
50 25 (25, 0*24) 1.0273 0.0217 1.5344 0.0733 0.6215 0.0034 0.6428 0.0011
50 25 (5*5, 0*20) 1.0357 0.0260 1.5245 0.0793 0.6185 0.0039 0.6443 0.0012
50 25 (0*20, 5*5) 1.0277 0.0217 1.5289 0.0707 0.6212 0.0033 0.6433 0.0011
50 25 (0*24, 25) 1.0303 0.0184 1.5171 0.0583 0.6198 0.0028 0.6444 0.0009
50 40 (10, 0*39) 1.0248 0.0135 1.5047 0.0444 0.6210 0.0021 0.6448 0.0007
50 40 (5*2, 0*38) 1.0244 0.0130 1.5043 0.0437 0.6211 0.0021 0.6448 0.0007
50 40 (0*38, 5*2) 1.0263 0.0129 1.5001 0.0405 0.6203 0.0020 0.6453 0.0006
50 40 (0*39, 10) 1.0206 0.0126 1.5093 0.0422 0.6226 0.0020 0.6441 0.0006
50 50 (0*50) 1.0172 0.0087 1.5037 0.0300 0.6235 0.0014 0.6442 0.0004

Abbreviations: CS = censoring scheme; EV = expected value; ER = expected risk.

R^*(t) = E(R(t)|\tilde{x}) = \int_\theta \frac{1 + \theta + \theta t}{1 + \theta}\,e^{-\theta t}\,\pi(\theta|\tilde{x})\,d\theta = K_1^{-1} \int_0^\infty \frac{(1 + \theta + \theta t)\,\theta^{m+\beta-2}}{(1 + 1/\theta)^{m+1}} \left[ e^{-\theta\left\{t + \alpha + \sum_{i=1}^{m}(1+R_i)x_i\right\}} \prod_{i=1}^{m} \left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right)^{R_i} \right] d\theta

h^*(t) = E(h(t)|\tilde{x}) = \int_\theta \frac{\theta^2(1 + t)}{1 + \theta + \theta t}\,\pi(\theta|\tilde{x})\,d\theta = K_1^{-1}(1 + t) \int_0^\infty \frac{\theta^{m+\beta+1}}{(1 + \theta + \theta t)(1 + 1/\theta)^m} \left[ e^{-\theta\left\{\alpha + \sum_{i=1}^{m}(1+R_i)x_i\right\}} \prod_{i=1}^{m} \left( \frac{1 + \theta + \theta x_i}{1 + \theta} \right)^{R_i} \right] d\theta


Table 4
MLEs of θ, MTSF, R(t) and h(t); N = 10,000, θ = 2.0, t = 0.40, MTSF = 0.6667, R(t) = 0.5692, h(t) = 1.4737.

n m CS EV(θ) MSE(θ) CP-95 EV(MTSF) MSE(MTSF) EV(R(t)) MSE(R(t)) EV(h(t)) MSE(h(t))
20 8 (12, 0*7) 2.1728 0.4510 0.956 0.6689 0.0454 0.5463 0.0139 1.6432 0.405
20 8 (4*3, 0*5) 2.1992 0.4988 0.971 0.6596 0.0422 0.5420 0.0143 1.6682 0.4510
20 8 (0*5, 4*3) 2.2098 0.5480 0.962 0.6583 0.0441 0.5408 0.0146 1.6785 0.4981
20 8 (0*7, 12) 2.1730 0.5097 0.950 0.6759 0.0504 0.5478 0.0152 1.6444 0.4589
20 10 (10, 0*9) 2.1749 0.4599 0.944 0.6645 0.0415 0.5458 0.0131 1.6448 0.4158
20 10 (5*2, 0*8) 2.1646 0.3711 0.957 0.6612 0.0379 0.5458 0.0119 1.6343 0.3323
20 10 (0*8, 5*2) 2.1113 0.3191 0.957 0.6764 0.0360 0.5554 0.0106 1.5839 0.2845
20 10 (0*9, 10) 2.1541 0.3600 0.955 0.6639 0.0367 0.5477 0.0117 1.6243 0.3221
20 16 (4, 0*15) 2.0821 0.1873 0.947 0.6694 0.0245 0.5577 0.0070 1.5543 0.1651
20 16 (2*2, 0*14) 2.0899 0.1985 0.955 0.6666 0.0234 0.5564 0.0071 1.5617 0.1758
20 16 (0*14, 2*2) 2.0872 0.1902 0.949 0.6665 0.0229 0.5568 0.0068 1.5590 0.1684
20 16 (0*15, 4) 2.1130 0.2340 0.957 0.6600 0.0234 0.5525 0.0076 1.5835 0.2093
20 20 (0*20) 2.0610 0.1601 0.945 0.6725 0.0206 0.5613 0.0060 1.5341 0.1412
30 12 (18, 0*11) 2.0874 0.2807 0.946 0.6818 0.0347 0.5593 0.0098 1.5610 0.2491
30 12 (6*3, 0*9) 2.1210 0.3109 0.942 0.6695 0.0329 0.5531 0.0101 1.5925 0.2780
30 12 (0*9, 6*3) 2.1361 0.2961 0.961 0.6592 0.0280 0.5495 0.0093 1.6062 0.2657
30 12 (0*11, 18) 2.1428 0.2794 0.963 0.6572 0.0293 0.5478 0.0095 1.6124 0.2489
30 15 (15, 0*14) 2.0945 0.2378 0.949 0.6692 0.0255 0.5566 0.0080 1.5665 0.2120
30 15 (5*3, 0*12) 2.0843 0.198 0.955 0.6694 0.0242 0.5576 0.0072 1.5565 0.1751
30 15 (0*12, 5*3) 2.0872 0.1952 0.956 0.6684 0.0249 0.5569 0.0072 1.5592 0.1723
30 15 (0*14, 15) 2.0908 0.2137 0.956 0.6678 0.0238 0.5567 0.0074 1.5627 0.1899
30 24 (6, 0*23) 2.0477 0.1034 0.971 0.6683 0.0141 0.5624 0.0041 1.5205 0.0906
30 24 (2*3, 0*21) 2.0436 0.1291 0.943 0.6748 0.0179 0.564 0.005 1.5173 0.1132
30 24 (0*21, 2*3) 2.0665 0.1381 0.943 0.666 0.0168 0.5595 0.0052 1.5386 0.1218
30 24 (0*23, 6) 2.0668 0.1220 0.955 0.6636 0.0157 0.5590 0.0047 1.5387 0.1073
30 30 (0*30) 2.0526 0.1052 0.949 0.6660 0.0135 0.5614 0.0041 1.5251 0.0925
50 20 (30, 0*19) 2.0760 0.1599 0.963 0.6657 0.0192 0.5582 0.0058 1.5479 0.1413
50 20 (5*6, 0*14) 2.0751 0.1513 0.955 0.6649 0.0186 0.5581 0.0056 1.5469 0.1336
50 20 (0*14, 5*6) 2.0689 0.1417 0.952 0.6658 0.0175 0.5591 0.0053 1.5410 0.1250
50 20 (0*19, 30) 2.0709 0.1643 0.946 0.6685 0.0195 0.5594 0.006 1.5433 0.1452
50 25 (25, 0*24) 2.0659 0.1251 0.948 0.6648 0.0163 0.5593 0.0048 1.5379 0.1099
50 25 (5*5, 0*20) 2.0481 0.1205 0.957 0.6705 0.0154 0.5628 0.0046 1.5212 0.1060
50 25 (0*20, 5*5) 2.0506 0.1092 0.964 0.6674 0.0137 0.5620 0.0042 1.5233 0.0960
50 25 (0*24, 25) 2.0559 0.1051 0.963 0.6648 0.0134 0.5607 0.0041 1.5281 0.0924
50 40 (10, 0*39) 2.0478 0.0735 0.955 0.6623 0.0096 0.5615 0.0029 1.5199 0.0644
50 40 (5*2, 0*38) 2.0351 0.0677 0.957 0.6663 0.0092 0.5639 0.0027 1.5080 0.0592
50 40 (0*38, 5*2) 2.0332 0.0724 0.949 0.6681 0.0101 0.5645 0.0029 1.5064 0.0633
50 40 (0*39, 10) 2.0317 0.0683 0.946 0.6679 0.0095 0.5647 0.0028 1.5048 0.0597
50 50 (0*50) 2.0222 0.048 0.964 0.6678 0.0069 0.5660 0.0020 1.4955 0.0418

Abbreviations: CS = censoring scheme; EV = expected value; MSE = mean square error; CP = coverage probability.

The above estimates are not available in closed form but can be evaluated numerically for the given values of (α, β, n, m, R̃, x̃).
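Since the posterior is one-dimensional, these Bayes estimates can be approximated with ordinary numerical quadrature. The R sketch below uses integrate() on the unnormalized posterior kernel; the helper name lindley_bayes is ours, and for large m one would work on the log scale to avoid underflow.

```r
## Sketch: Bayes estimates of theta, MTSF, R(t), h(t) under squared error loss,
## for given (alpha, beta) of the gamma prior and the censored sample (x, R).
lindley_bayes <- function(x, R, alpha, beta, t) {
  m <- length(x)
  ## Unnormalized posterior kernel of theta (vectorized in theta).
  kern <- function(theta) sapply(theta, function(th)
    th^(m + beta - 1) / (1 + 1 / th)^m *
      exp(-th * (alpha + sum((1 + R) * x))) *
      prod(((1 + th + th * x) / (1 + th))^R))
  K1 <- integrate(kern, 0, Inf)$value
  post_mean <- function(g)           # posterior expectation of g(theta)
    integrate(function(th) g(th) * kern(th), 0, Inf)$value / K1
  c(theta = post_mean(function(th) th),
    MTSF  = post_mean(function(th) (th + 2) / (th * (th + 1))),
    R_t   = post_mean(function(th) (1 + th + th * t) / (1 + th) * exp(-th * t)),
    h_t   = post_mean(function(th) th^2 * (1 + t) / (1 + th + th * t)))
}
```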

6. Simulation study

In this section, a simulation study is conducted to obtain the estimates of the parameter and reliability characteristics of LD(θ) developed in the previous sections. Maximum likelihood and Bayes estimates are obtained for progressively type II right censored samples generated for two sets of values of θ, α, and β. All calculations are performed in R 2.11.0. We have taken three sample sizes: (i) small sample size n = 20, (ii) moderate sample size n = 30, and (iii) large sample size n = 50. Also, each sample size has 13 censoring schemes. The study includes the following steps:

(i) Choose the values of the hyper-parameters α and β, the mission time t, the sample size n, the number of failures m and the censoring scheme R̃ = (R1, R2, R3, . . ., Rm).


Table 5
Bayes estimates θ*, MTSF*, R*(t) and h*(t); N = 10,000, θ = 2.0, α = 3, β = 6, t = 0.40, MTSF = 0.6667, R(t) = 0.5692, h(t) = 1.4737.

n m CS EV(θ*) ER(θ*) EV(MTSF*) ER(MTSF*) EV(R*(t)) ER(R*(t)) EV(h*(t)) ER(h*(t))
20 8 (12, 0*7) 2.0750 0.1497 0.7147 0.0258 0.5650 0.0053 0.7316 0.5519
20 8 (4*3, 0*5) 2.0898 0.1547 0.7077 0.0236 0.5622 0.0053 0.7331 0.5497
20 8 (0*5, 4*3) 2.0948 0.1640 0.7056 0.0251 0.5612 0.0056 0.7336 0.5490
20 8 (0*7, 12) 2.0702 0.1706 0.7186 0.0297 0.5663 0.0061 0.7308 0.5533
20 10 (10, 0*9) 2.0879 0.1672 0.7043 0.0255 0.5620 0.0058 0.7335 0.5491
20 10 (5*2, 0*8) 2.0872 0.1500 0.7017 0.0232 0.5616 0.0053 0.7339 0.5484
20 10 (0*8, 5*2) 2.0537 0.1377 0.7121 0.0234 0.5678 0.0050 0.7313 0.5524
20 10 (0*9, 10) 2.0813 0.1513 0.7025 0.0231 0.5626 0.0054 0.7336 0.5489
20 16 (4, 0*15) 2.0548 0.1107 0.6957 0.0179 0.5653 0.0042 0.7339 0.5483
20 16 (2*2, 0*14) 2.0603 0.1144 0.6933 0.0170 0.5643 0.0042 0.7344 0.5475
20 16 (0*14, 2*2) 2.0590 0.1110 0.6927 0.0168 0.5643 0.0041 0.7344 0.5474
20 16 (0*15, 4) 2.0768 0.1270 0.6873 0.0168 0.5612 0.0045 0.7358 0.5454
20 20 (0*20) 2.0432 0.1038 0.6938 0.0161 0.5668 0.0039 0.7338 0.5483
30 12 (18, 0*11) 2.0441 0.1363 0.7126 0.0238 0.5692 0.0051 0.7309 0.5529
30 12 (6*3, 0*9) 2.0675 0.1429 0.7027 0.0220 0.5646 0.0051 0.7331 0.5496
30 12 (0*9, 6*3) 2.0817 0.1364 0.6931 0.0187 0.5613 0.0048 0.7351 0.5466
30 12 (0*11, 18) 2.0875 0.1378 0.6915 0.0194 0.5602 0.0049 0.7355 0.5460
30 15 (15, 0*14) 2.0594 0.1268 0.6970 0.0183 0.5650 0.0045 0.7338 0.5485
30 15 (5*3, 0*12) 2.0544 0.1120 0.6970 0.0175 0.5656 0.0042 0.7336 0.5486
30 15 (0*12, 5*3) 2.0574 0.1142 0.6952 0.0182 0.5649 0.0043 0.7340 0.5481
30 15 (0*14, 15) 2.0588 0.1197 0.6946 0.0173 0.5647 0.0044 0.7341 0.5479
30 24 (6, 0*23) 2.0378 0.0739 0.6868 0.0114 0.5665 0.0029 0.7347 0.5467
30 24 (2*3, 0*21) 2.0324 0.0915 0.6926 0.0146 0.5681 0.0036 0.7336 0.5485
30 24 (0*21, 2*3) 2.0518 0.0966 0.6844 0.0135 0.5642 0.0036 0.7355 0.5457
30 24 (0*23, 6) 2.0532 0.0864 0.6822 0.0125 0.5636 0.0033 0.7359 0.5450
30 30 (0*30) 2.0437 0.0791 0.6814 0.0112 0.5649 0.0030 0.7358 0.5452
50 20 (30, 0*19) 2.0558 0.1026 0.6879 0.0147 0.5641 0.0038 0.7351 0.5463
50 20 (5*6, 0*14) 2.0557 0.0983 0.6870 0.0143 0.5640 0.0037 0.7353 0.5461
50 20 (0*14, 5*6) 2.0515 0.0946 0.6868 0.0137 0.5646 0.0036 0.7351 0.5462
50 20 (0*19, 30) 2.0515 0.1082 0.6891 0.0153 0.5650 0.0040 0.7348 0.5469
50 25 (25, 0*24) 2.0524 0.0895 0.6830 0.0131 0.5639 0.0034 0.7358 0.5452
50 25 (5*5, 0*20) 2.0373 0.0856 0.6880 0.0126 0.5668 0.0033 0.7345 0.5471
50 25 (0*20, 5*5) 2.0404 0.0791 0.6845 0.0112 0.5659 0.0030 0.7352 0.5461
50 25 (0*24, 25) 2.0453 0.0766 0.6821 0.0109 0.5648 0.0029 0.7357 0.5452
50 40 (10, 0*39) 2.0428 0.0597 0.6743 0.0083 0.5640 0.0023 0.7369 0.5433
50 40 (5*2, 0*38) 2.0314 0.0552 0.6780 0.008 0.5662 0.0022 0.7359 0.5447
50 40 (0*38, 5*2) 2.0296 0.0594 0.6794 0.0088 0.5666 0.0024 0.7357 0.5452
50 40 (0*39, 10) 2.0283 0.0561 0.6792 0.0083 0.5668 0.0022 0.7356 0.5452
50 50 (0*50) 2.0208 0.041 0.6772 0.0062 0.5676 0.0017 0.7357 0.5449

Abbreviations: CS = censoring scheme; EV = expected value; ER = expected risk.

(ii) Take θ = β/α, the mean of the prior distribution of θ.
(iii) Compute the actual values of MTSF, R(t), and h(t).

(iv) Generate a progressively type II right censored sample of size n with m failures using the algorithm given in Remark 1 in Section 3. For each value of n (= 20, 30, 50), four values of m are considered, so that the percentage of failure information (m/n) × 100 is 40%, 50%, 80% and 100%. The scheme with n = m (Ri = 0, ∀ i = 1, . . ., m) denotes the complete sample. The scheme with (0*(m − 1), n − m), i.e. Ri = 0 for i = 1, . . ., m − 1 and Rm = n − m, denotes the conventional type II censored sample, in which n − m items are removed at the mth failure. The scheme with (n − m, 0*(m − 1)), i.e. R1 = n − m and Ri = 0 for i = 2, . . ., m, denotes the type III censoring scheme, in which (n − m) items are removed from the test at the first failure only. The remaining two censoring schemes for each pair (n, m) are progressive type II censoring schemes.

(v) Compute the maximum likelihood and Bayes estimates of θ, MTSF, R(t) and h(t) according to Sections 4.2 and 5. Also, compute the confidence intervals and corresponding coverage probabilities for θ according to Section 4.3.



(vi) Repeat steps (iv)–(v) N = 10,000 times for the values of (θ, α, β, t) = (1, 2, 2, 0.8) and (2, 3, 6, 0.4), with each of the censoring schemes given in the third column of Tables 2–5. Compute the expected value (EV) and expected risk (ER) of the estimates obtained in step (v) using the following formulae

\mathrm{EV} = \frac{1}{N}\sum \hat{\phi}(\theta) \quad \text{and} \quad \mathrm{ER} = \frac{1}{N}\sum \left(\hat{\phi}(\theta) - \phi(\theta)\right)^2

where φ̂(θ) is an estimate of φ(θ). Note that in the case of the MLEs, the ER is known as the mean square error (MSE). A sketch of one cell of this simulation in R is given below.
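Putting the pieces together, one replication cell of the study (for the ML estimates) might be sketched in R as follows, reusing the illustrative helpers rlindley_progressive and lindley_mle from the earlier sketches; simulate_cell is likewise an invented name.

```r
## Sketch of steps (iv)-(vi) for one (n, R, theta, t0) combination.
simulate_cell <- function(N, n, R, theta, t0) {
  est <- t(replicate(N, {
    x <- rlindley_progressive(n, R, theta)   # step (iv): generate the censored sample
    lindley_mle(x, R, t0)                    # step (v): MLEs of theta, MTSF, R(t), h(t)
  }))
  true <- c(theta = theta,
            MTSF  = (theta + 2) / (theta * (theta + 1)),
            R_t   = (1 + theta + theta * t0) / (1 + theta) * exp(-theta * t0),
            h_t   = theta^2 * (1 + t0) / (1 + theta + theta * t0))
  list(EV = colMeans(est), ER = colMeans(sweep(est, 2, true)^2))   # step (vi)
}

## Example: one cell of Table 2
## simulate_cell(N = 10000, n = 20, R = c(12, rep(0, 7)), theta = 1, t0 = 0.8)
```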

7. Conclusions of simulation study

The results of the Monte Carlo simulation study are presented in Tables 2–5 with the following conclusions:

(i) It is observed that for complete samples the ML and Bayes estimates of θ are very nearly unbiased; thus, for complete samples the ML and Bayes estimates of θ are very good. It is also observed that for complete samples, as the sample size n increases, the average bias, average MSE and average SE decrease in the case of ML estimation. The same phenomenon is seen in the case of Bayes estimation.

(ii) For all the censoring schemes we see that the bias, SE and MSE of the estimates are quite small, so the estimates can be used in all practical situations, though the bias increases as the failure information m decreases. Here one has to make a trade-off between the precision of the estimation and the cost of the experiment.

(iii) The widths of the confidence intervals based on the ML estimates narrow with an increase in the sample size n and the failure information m. The coverage probabilities are very close to the corresponding nominal levels in the case of ML estimation. Thus the asymptotic results can be used for all practical purposes.

(iv) In this case the Bayes estimates are not uniformly better than the MLEs, showing that additional prior information about the parameter θ does not provide an improvement in the estimates.

(v) Thus, we recommend that a moderate sample size be chosen with not more than 50% removals. Also, maximum likelihood estimation is suggested in this case, as the MLEs are easy to compute.

8. Real data example

In this section, to illustrate the use of the Lindley distribution as a reliability model, we use the data set of waiting times (in minutes) before service of 100 bank customers, as discussed by Ghitany et al. [8]. The waiting times (in minutes) are as follows:

0.8, 0.8, 1.3, 1.5, 1.8, 1.9, 1.9, 2.1, 2.6, 2.7, 2.9, 3.1, 3.2, 3.3, 3.5, 3.6, 4.0, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.6, 4.7, 4.7, 4.8, 4.9, 4.9, 5.0, 5.3, 5.5, 5.7, 5.7, 6.1, 6.2, 6.2, 6.2, 6.3, 6.7, 6.9, 7.1, 7.1, 7.1, 7.1, 7.4, 7.6, 7.7, 8.0, 8.2, 8.6, 8.6, 8.6, 8.8, 8.8, 8.9, 8.9, 9.5, 9.6, 9.7, 9.8, 10.7, 10.9, 11.0, 11.0, 11.1, 11.2, 11.2, 11.5, 11.9, 12.4, 12.5, 12.9, 13.0, 13.1, 13.3, 13.6, 13.7, 13.9, 14.1, 15.4, 15.4, 17.3, 17.3, 18.1, 18.2, 18.4, 18.9, 19.0, 19.9, 20.6, 21.3, 21.4, 21.9, 23.0, 27.0, 31.6, 33.1, 38.5.

Here, we consider four reliability models, namely the exponential, Lindley, gamma, and lognormal. Maximum likelihood estimation is used to estimate the parameters of the above four distributions. These estimates, along with the data, are used to calculate the negative log-likelihood for the corresponding distribution. To test the goodness of fit of the above models we consider (i) the negative log-likelihood, (ii) Akaike's information criterion (AIC) proposed by Akaike [1], and (iii) the Bayesian information criterion (BIC) developed by Schwarz [16]. The negative log-likelihood, AIC and BIC values are listed in Table 6.
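The values in Table 6 can be reproduced along the following lines in R, assuming the 100 waiting times are stored in a vector wt; the gamma and lognormal fits use MASS::fitdistr, while the exponential and Lindley MLEs are available in closed form (Section 4.2). This is a sketch under those assumptions, not the authors' own code.

```r
library(MASS)   # for fitdistr()

fit_measures <- function(logLik, k, n)  # k = number of parameters
  c(negLogL = -logLik, AIC = -2 * logLik + 2 * k, BIC = -2 * logLik + k * log(n))

n <- length(wt); xbar <- mean(wt)

## Lindley: closed-form MLE for the complete sample (Section 4.2)
th <- (-(xbar - 1) + sqrt((xbar - 1)^2 + 8 * xbar)) / (2 * xbar)
ll_lindley <- sum(log(th^2 / (1 + th) * (1 + wt) * exp(-th * wt)))

## Exponential: MLE of the rate is 1/xbar
ll_exp <- sum(dexp(wt, rate = 1 / xbar, log = TRUE))

fit_gamma <- fitdistr(wt, "gamma")
fit_lnorm <- fitdistr(wt, "lognormal")

rbind(Exponential = fit_measures(ll_exp, 1, n),
      Lindley     = fit_measures(ll_lindley, 1, n),
      Gamma       = fit_measures(fit_gamma$loglik, 2, n),
      Lognormal   = fit_measures(fit_lnorm$loglik, 2, n))
```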

According to the maximum log-likelihood criterion for goodness of fit and the AIC, the order of best fit for the above four models is:

Best Gamma → Lindley → Lognormal → Exponential Worst

According to BIC, the order of best fit for the above four models is:

Best Lindley → Gamma → Lognormal → Exponential Worst


Table 6
The MLEs of the parameter(s), negative log-likelihood, AIC and BIC values for different reliability models.

Reliability model    MLEs of parameters           −Log L     AIC        BIC
Exponential (θ)      θ = 0.1012                   329.0209   660.0418   662.6469
Lindley (θ)          θ = 0.1866                   319.0374   640.0748   642.6800
Gamma (α, β)         (α, β) = (2.0088, 4.9168)    317.3001   638.6002   643.8106
Lognormal (μ, σ)     (μ, σ) = (2.0211, 0.7801)    319.1941   642.3882   647.5986

Table 7
The MLEs of θ with 95% confidence intervals, and the MLEs of the reliability characteristics, for various censoring schemes.

n m Censoring scheme θ 95% confidence interval MTSF R(t = 9) h(t = 9)
100 40 (1*39, 21) 0.1218 (0.0974, 0.1461) 15.5315 0.6607 0.0669
100 40 (2*15, 0*10, 2*15) 0.1040 (0.0830, 0.1250) 18.3285 0.7248 0.0530
100 40 ((3,0)*20) 0.1054 (0.0840, 0.1267) 18.0765 0.7197 0.0541
100 40 (0*39, 60) 0.1845 (0.1479, 0.2212) 9.9936 0.4563 0.1197
100 50 (1*50) 0.1189 (0.0970, 0.1409) 15.9256 0.6710 0.0646
100 50 ((2, 0)*25) 0.1191 (0.0971, 0.1411) 15.9019 0.6703 0.0647
100 50 ((3, 0, 0)*16, 2, 0) 0.1195 (0.0974, 0.1415) 15.8499 0.6690 0.0650
100 50 (0*49, 50) 0.1834 (0.1502, 0.2167) 10.058 0.4595 0.1187
100 80 (1*20, 0*60) 0.1572 (0.1329, 0.1814) 11.8616 0.5402 0.0960
100 80 ((1, 0, 0, 0)*20) 0.1755 (0.1377, 0.2132) 10.5478 0.4831 0.1118
100 80 ((2, 0, 0, 0, 0, 0, 0, 0)*10) 0.1709 (0.1341, 0.2077) 10.8474 0.4969 0.1078
100 80 (0*79, 20) 0.1901 (0.1614, 0.2189) 9.6794 0.4404 0.1246
100 100 (0*100) 0.1866 (0.1605, 0.2126) 9.8770 0.4505 0.1215

Thus, the Lindley distribution fits the above data quite satisfactorily. The main advantage of using the Lindley distribution over the gamma and lognormal distributions is that it involves only one parameter. Hence, maximum likelihood and other inferential procedures become simple to deal with, especially from a computational point of view.

We use the data set of waiting times (in minutes) before service of 100 bank customers to illustrate the use of the estimation methods discussed in this paper. For the complete sample case, the MLE of θ is 0.1866, and the corresponding MLEs of MTSF, R(t) and h(t) are 9.8770, 0.4505 and 0.1215, respectively. The 95% confidence interval for θ is (0.1605, 0.2126). Some other censoring schemes, as given in the third column of Table 7, are considered for the same data set, and the MLEs of MTSF, R(t) and h(t) are derived along with confidence intervals for θ. The results are summarized in Table 7. From this table, we conclude that the MLE of θ and the standard error of the MLE are the same as obtained by Ghitany et al. [8] in the complete sample case. We shall consider these results as the standard values of θ and the reliability characteristics. The estimates based on the type II censored sample scheme are very close to the complete sample case even when the failure information m is small, say m = 40. For the other censoring schemes the MLE of θ is negatively biased, and the bias increases as the failure information m decreases. This kind of pattern is also observed for the MLEs of MTSF, R(t) and h(t). From the table we can say that up to 20% removals of items using a progressively type II censored sample is safe for estimating the parameter and the reliability characteristics.

Acknowledgements

The authors are thankful to the editor and the learned referee, whose valuable comments and suggestions helped in much improving the paper.

References

[1] H. Akaike, A new look at the statistical model identification, IEEE Trans. Autom. Control AC-19 (1974) 716–723.
[2] N. Balakrishnan, R. Aggarwala, Progressive Censoring: Theory, Methods and Applications, Birkhauser, Boston, 2000.
[3] N. Balakrishnan, A. Hossain, Inference for the type II generalized logistic distribution under progressive type II censoring, J. Stat. Comput. Simul. 77 (12) (2007) 1013–1031.
[4] N. Balakrishnan, R.A. Sandhu, A simple simulation algorithm for generating progressively type-II censored samples, Am. Stat. 49 (2) (1995) 229–230.


[5] R.E. Barlow, F. Proschan, Statistical Theory of Reliability and Life Testing: Probability Models, Holt, Rinehart and Winston, Inc., New York, 1975.
[6] P. Basak, I. Basak, N. Balakrishnan, Estimation for the three-parameter lognormal distribution based on progressively censored data, Comput. Stat. Data Anal. 53 (2009) 3580–3592.
[7] A.C. Cohen, Progressively censored samples in life testing, Technometrics 5 (1963) 327–329.
[8] M.E. Ghitany, B. Atieh, S. Nadarajah, Lindley distribution and its applications, Math. Comput. Simul. 78 (2008) 493–506.
[9] G. Hofmann, E. Cramer, N. Balakrishnan, G. Kunert, An asymptotic approach to progressive censoring, J. Stat. Plann. Inference 130 (2005) 207–227.
[10] J.F. Lawless, Statistical Models and Methods for Lifetime Data, Wiley, New York, 1982.
[11] D.V. Lindley, Fiducial distributions and Bayes' theorem, J. R. Stat. Soc. B 20 (1958) 102–107.
[12] D.V. Lindley, Introduction to Probability and Statistics from a Bayesian Viewpoint, Part II: Inference, Cambridge University Press, New York, 1965.
[13] N.R. Mann, R.E. Schafer, N.D. Singpurwalla, Methods for Statistical Analysis of Reliability and Life Data, John Wiley & Sons, New York, 1974.
[14] B. Pradhan, D. Kundu, On progressively censored generalized exponential distribution, Test 18 (2009) 497–515.
[15] M. Sankaran, The discrete Poisson–Lindley distribution, Biometrics 26 (1970) 145–149.
[16] G. Schwarz, Estimating the dimension of a model, Ann. Stat. 6 (2) (1978) 461–464.
[17] S.K. Sinha, Reliability and Life Testing, Wiley Eastern Limited, New Delhi, 1986.