Jun Shao - Monte Carlo Approximations in Bayesian Decision Theory

Jun Shao, "Monte Carlo Approximations in Bayesian Decision Theory," Journal of the American Statistical Association, Vol. 84, No. 407 (Sep., 1989), pp. 727-732. Published by the American Statistical Association. Stable URL: http://www.jstor.org/stable/2289654. Accessed: 13/06/2013 22:20.


728  Journal of the American Statistical Association, September 1989

of h. Generally speaking, h should mimic the posterior p(θ | x) and satisfy several technical conditions such as B.3, (2.5), and (3.1) in Sections 2 and 3. Extensive discussions on how to choose this function can be found in Berger (1985, sec. 4.9) and Geweke (1988b). In Step 2, we have reduced computation by using the same random sequence {θ_i}_{i=1}^m for computing ρ_m(a, ω) in (1.4) for all a ∈ A. Not only does this method allow us to use a large m, but it also results in certain desirable properties for a_m, which will be discussed in the next section. In Step 3, one may employ any available algorithm for solving a minimization problem.

Throughout the article, we assume that the integrals (1.2)-(1.3) are finite and the likelihood f(x | θ) and the prior π(θ) can be evaluated easily when x and θ are known. No other assumption on f(x | θ) and π(θ) is made. In addition, there is no restriction on the length of the vector x, that is, the total number of observations. If π(θ) is a hierarchical prior of the form

π(θ) = ∫_Λ π_1(θ | λ) π_2(λ) dν,

where ν is a measure on Λ ⊂ R^s, then π(θ) is not easy to evaluate (for a given θ) and the generation of θ_i's from π(θ) may also be difficult. We may, however, draw m iid random (p + s)-vectors ξ_i = (θ_i, λ_i)^T from the distribution with density h(ξ) (with respect to μ × ν on Θ × Λ) and compute

ρ_m(a, ω) = m^{-1} Σ_{i=1}^m L(θ_i, a) f(x | θ_i) π_1(θ_i | λ_i) π_2(λ_i)/h(ξ_i).
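In code, Steps 1 and 2 for a hierarchical prior amount to drawing the ξ_i once and reusing the weights for every action. The sketch below is illustrative only: the toy model (x | θ ~ N(θ, 1), θ | λ ~ N(λ, 1), λ ~ N(0, 1)), the importance density (independent N(0, 2) components), and the helper name rho_m are all assumptions, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def npdf(z, mu=0.0, sd=1.0):
    """Normal density, used here for the likelihood, the prior pieces, and h."""
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def rho_m(actions, draws, loss, lik, pi1, pi2, h):
    """Monte Carlo approximation of rho(a) for every action a, reusing the
    same draws xi_i = (theta_i, lambda_i) from h for all a."""
    theta, lam = draws[:, 0], draws[:, 1]
    w = lik(theta) * pi1(theta, lam) * pi2(lam) / h(theta, lam)
    return np.array([np.mean(loss(theta, a) * w) for a in actions])

# Toy setup: x | theta ~ N(theta, 1), theta | lambda ~ N(lambda, 1),
# lambda ~ N(0, 1); h draws theta and lambda independently from N(0, 2).
x, m = 1.2, 20_000
draws = rng.normal(0.0, 2.0, size=(m, 2))
actions = np.linspace(-1.0, 3.0, 9)
rho = rho_m(actions, draws,
            loss=lambda th, a: (th - a) ** 2,
            lik=lambda th: npdf(x, th),
            pi1=lambda th, lam: npdf(th, lam),
            pi2=lambda lam: npdf(lam),
            h=lambda th, lam: npdf(th, 0.0, 2.0) * npdf(lam, 0.0, 2.0))
a_m = actions[np.argmin(rho)]   # Step 3: minimize over the action set
```

With squared error loss the minimizer of ρ over the grid should sit near the posterior mean of θ (about 0.8 in this toy model).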

The rest of this article is organized as follows. In Section 2, convergence of the Monte Carlo approximations a_m(ω) and r_m(ω) is studied. The rates of convergence are also obtained. In Section 3, two measures that can be used to indicate the accuracy of the Monte Carlo approximations are proposed. Section 4 contains an example of application.

2. LIMITING BEHAVIOR OF THE MONTE CARLO APPROXIMATIONS

2.1 Convergence

Here, an immediate question is whether or not a_m converges to a* as m diverges to infinity. Since a_m = a_m(ω) is a function of the random quantity ω, we say that a_m converges to a* a.s. if, for almost all ω (with respect to P_h), a_m(ω) → a* as m → ∞. To provide an answer to the foregoing question, let us

first consider the simple case of finite action problems, that is, A consists of finitely many actions a^(1), a^(2), ..., a^(k). The most important examples are hypothesis testing and classification problems. See also Example 2 in Section 4. Without loss of generality, assume that a^(1), ..., a^(l) are Bayesian actions, where l is an integer less than or equal to k.

equal ok. Let = 2-1 mini<l?k {p(a(')) - p(a(1))}. Sincek s fixed,t followsromhe trongawof argenumbers

(SLLN) that or lmost llco, heres anintegerm,, uchthatwhenm ' mo,

Pm(a('), o) < p(a(')) + 8 = p(a( )) + 8

< p(a(')) - pmm(a(')l w))

holdsforany ? i and 1> i. This provesthat m(wO)=

a() for i < i, that s, am(w) s a Bayesian ction orm>- m_.From the SLLN, Mm(Wo)- M(x) a.s. Thus bothr(am(wo))nd rm(wo)onverge o r(a(1))a.s. If, n addition,theBayesian action s unique (i - 1), then m(wo) a()
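For a finite action problem this argument corresponds to a very small computation: evaluate ρ_m at each of the k actions with a shared set of draws and take the minimizer. A sketch under an assumed two-action testing setup (the model, prior, and choice of h are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy testing problem: X ~ N(theta, 1), prior theta ~ N(0, 1), observed x.
# Actions: "decide theta <= 0" and "decide theta > 0"; 0-1 loss.
x, m = 0.8, 50_000
theta = rng.normal(0.0, 1.0, size=m)     # h taken equal to the prior, so the
w = np.exp(-0.5 * (x - theta) ** 2)      # weight is the likelihood (up to a constant)

losses = {"decide theta <= 0": (theta > 0).astype(float),
          "decide theta > 0": (theta <= 0).astype(float)}

# The same draws {theta_i} are reused for every action, as in Step 2.
rho = {a: np.mean(L * w) for a, L in losses.items()}
a_m = min(rho, key=rho.get)
```

Here the posterior probability of {θ > 0} exceeds 1/2, so the Bayesian action (and, for large m, a_m) is "decide theta > 0".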

a.s.Theseresultsanbe extendedothe ase n which is

a compact ubset fR . I state he followingesult ndomit tsproofwhich sesanargumentimilarothepre-viousdiscussionnda uniformLLN (e.g.,th.2 ofJenn-rich 969)].

Theorem 1. Suppose that A is a compact subset of R^q. Let

A_B = {a : a is a Bayesian action},   (2.1)

and, for a fixed ω,

A_ω = {a : a is a limit point of {a_m(ω)}_{m=1}^∞}.

If L(θ, a) is continuous in a for each θ ∈ Θ and there is a measurable function M(θ) such that

sup{L(θ, a) : a ∈ A} ≤ M(θ)

and

∫_Θ M(θ) f(x | θ) π(θ) dμ < ∞,

then there exists a Bayesian action and the following results hold:

A_ω ⊂ A_B for almost all ω,   (2.2)

and

r(a_m(ω)) → r(a*) a.s. and r_m(ω) → r(a*) a.s.   (2.3)

If the Bayesian action is unique, then a_m(ω) → a* a.s.

When a* is not unique, a_m(ω) may not converge, since {a_m(ω)}_{m=1}^∞ may have several limit points; however, a_m(ω) can still be used in practice as a good decision, since the posterior expected loss of a_m(ω) converges to the minimum posterior expected loss. In addition, a_m(ω) is close to a Bayesian action in the sense of (2.2). Regardless of the uniqueness of a*, r_m(ω) is close to r(a_m(ω)) [or r(a*)] according to (2.3).

Often A is a convex subset of R^q (e.g., A = Θ for an estimation problem) and A is not compact. In this case, let A' be the closure of A and assume that the loss function L(θ, a) is defined on Θ × A'. Furthermore, let A be an unbounded subset of R^q. (If A is bounded, then A' is compact and Theorem 1 applies.) To establish the convergence of a_m in this case, we need some additional regularity conditions. In the following, B(c) and N(c) denote the sets {a ∈ A' : ||a - a*|| = c} and {a ∈ A' : ||a - a*|| ≤ c} for a positive constant c, where ||·|| is the Euclidean norm on R^q.


Condition A.1. A is convex and L(θ, a) is a convex continuous function of a for each θ ∈ Θ.

Condition A.2. r(a) < ∞ for all a ∈ A and there exists an a* ∈ A such that r(a*) = min{r(a) : a ∈ A}.

Condition A.3. There is a nonempty set B(c) such that inf{r(a) : a ∈ B(c)} > r(a*).

Condition A.4. There is a measurable function M(θ) such that sup{L(θ, a) : a ∈ N(c)} ≤ M(θ) and ∫_Θ M(θ) f(x | θ) π(θ) dμ < ∞.

Examples of convex loss functions that are commonly used in the estimation of a scalar θ are the squared error loss (θ - a)^2 and the absolute error loss |θ - a|. See also Example 2 in Section 4. For vector θ = (θ^(1), ..., θ^(p))^T and a = (a^(1), ..., a^(p))^T, a commonly used loss is the weighted average of the loss functions L_i(θ^(i), a^(i)), which satisfies Condition A.1 if each L_i does.

Under Condition A.1, r(a) is convex. Condition A.2 simply says that a Bayesian action for the problem under consideration exists. Let A_B be defined as in (2.1), and let A'_B = {a ∈ A' : r(a) = r(a*)}. Then A_B ⊂ A'_B, and A_B = A'_B if and only if r(a) > r(a*) for all a in the boundary of A' but not in A. Condition A.3 rules out the possibility that r(a) is too flat in the sense that A'_B is unbounded. Condition A.3 is also necessary for A'_B to be bounded. A sufficient condition for Condition A.3 is that L(θ, a) is strictly convex, which also ensures the uniqueness of the Bayesian action. Condition A.4 is mild, since N(c) is compact and L(θ, a) is continuous in a.

Theorem 2. Under Conditions A.1-A.4, the assertions of Theorem 1 hold (with A_B replaced by A'_B).

The proof of Theorem 2 is given in Shao (1988). The convergence of a_m(ω) in the situation where the loss function is nonconvex is also studied in Shao (1988).

2.2 Convergence Rates and Asymptotic Distributions

The convergence rates and asymptotic distributions of a_m(ω) and r_m(ω) are obtained under some further regularity conditions on the loss function. The variances of the asymptotic distributions can be used as accuracy measures of a_m(ω) and r_m(ω) (see Sec. 3). Let L'(θ, a) and L''(θ, a) be the gradient ∂L(θ, a)/∂a and the Hessian matrix ∂²L(θ, a)/∂a∂a^T, respectively.

Condition B.1. Conditions A.1 and A.2 hold.

Condition B.2. For each θ, L(θ, a) is a twice continuously differentiable function of a in a neighborhood N(c) of a*.

Condition B.3. ∫_Θ ||L'(θ, a*)||² w²(θ) h(θ) dμ < ∞, where w(θ) is defined in (1.5).

Condition B.4. There is a measurable function v(θ) such that, for any (k, l), sup{|L''_{kl}(θ, a)| : a ∈ N(c)} ≤ v(θ) and ∫_Θ v(θ) f(x | θ) π(θ) dμ < ∞, where L''_{kl}(θ, a) is the (k, l)th element of L''(θ, a).

Condition B.5. L''(θ, a*) is positive definite for each θ.

Condition B.5 implies that ∫_Θ L''(θ, a*) f(x | θ) π(θ) dμ is positive definite. Under Conditions B.1-B.5, Conditions A.3 and A.4 are satisfied and a* is unique. Hence a_m(ω) converges to a* a.s.

Theorem 3. Under Conditions B.1-B.5, we have

a_m(ω) - a* = O(m^{-1/2}(log log m)^{1/2}) a.s.,

r(a_m(ω)) - r(a*) = O(m^{-1} log log m) a.s.,

and

m^{1/2}(a_m(ω) - a*) →_d N(0, Σ),

where →_d denotes convergence in distribution and

Σ = Σ_2^{-1} Σ_1 Σ_2^{-1}   (2.4)

with

Σ_1 = ∫_Θ [L'(θ, a*)][L'(θ, a*)]^T w²(θ) h(θ) dμ

and

Σ_2 = ∫_Θ L''(θ, a*) f(x | θ) π(θ) dμ.

If

σ² = ∫_Θ [L(θ, a*) - r(a*)]² p²(θ | x)/h(θ) dμ < ∞,   (2.5)

then

r_m(ω) - r(a*) = O(m^{-1/2}(log log m)^{1/2}) a.s.,

r_m(ω) - r(a_m(ω)) = O(m^{-1/2}(log log m)^{1/2}) a.s.,

m^{1/2}(r_m(ω) - r(a*)) →_d N(0, σ²),

and

m^{1/2}(r_m(ω) - r(a_m(ω))) →_d N(0, σ²).

The proof uses standard techniques in asymptotic analysis (central limit theorem and law of the iterated logarithm) and is omitted.

An important special case is the problem of estimating θ with loss L(θ, a) = ||θ - a||², θ ∈ Θ ⊂ R^p, a ∈ A ⊂ R^p. The Bayesian action is the posterior mean, and

a_m(ω) = [Σ_{i=1}^m θ_i f(x | θ_i) π(θ_i)/h(θ_i)] / [Σ_{i=1}^m f(x | θ_i) π(θ_i)/h(θ_i)].

A straightforward calculation shows that

Σ = ∫_Θ (θ - a*)(θ - a*)^T p²(θ | x)/h(θ) dμ

and

σ² = ∫_Θ [||θ - a*||² - r(a*)]² p²(θ | x)/h(θ) dμ.

When h(θ) is the posterior density p(θ | x), Σ reduces to the posterior covariance matrix. The posterior expected


loss of a_m(ω) in this case is

r(a_m(ω)) = r(a*) + ||a_m(ω) - a*||²
          = r(a*) + O(m^{-1} log log m) a.s.

Conditions B.3 and (2.5) suggest that the importance function h(θ) should be selected so that the tails of h(θ) are heavier than the tails of the posterior p(θ | x). Note that the posterior mean and variance of L(θ, a*) are r(a*) and σ_0² (σ² with h(θ) = p(θ | x)), respectively. From Theorem 3, the standard error due to the Monte Carlo approximation is the fraction (γm)^{-1/2} of σ_0, where γ = σ_0²/σ² is called the relative numerical efficiency (Geweke 1988b). This may lead to a guide to choosing the importance function h(θ) and the Monte Carlo approximation sample size m. See the discussions in Geweke (1988a,b).

3. MEASURES OF ACCURACY

In practice, it is also desirable to indicate the precisions of a_m(ω) and r_m(ω) [as approximations to the Bayesian action and the posterior expected loss of a_m(ω), respectively] by using some measures of accuracy. A nice feature of the Monte Carlo method is that accuracy estimates of a_m(ω) and r_m(ω) can be computed. With a method of estimating the precision of a_m(ω), the Monte Carlo sample size m can be chosen by using an iterative method. That is, starting with an initial m_0, we compute a_{m_j} and p_{m_j} (an estimate of the precision of a_{m_j}) for m_j = m_0 + jt, where t is a step size and j = 0, 1, 2, .... Stop if p_{m_j} is less than a prescribed small number. This method is used in Example 2 of Section 4.
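The iterative rule just described can be sketched as follows. The model (X ~ N(θ, 1) with a flat prior), the importance density h = N(x, 2), and the helper name estimate_with_precision are assumptions for the illustration, with a weighted standard error standing in for the precision estimate p_m.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_with_precision(m, x):
    """Hypothetical helper: a_m and a precision estimate p_m for the
    posterior mean of theta when X ~ N(theta, 1) with a flat prior,
    using h = N(x, 2) as the importance density."""
    theta = rng.normal(x, 2.0, size=m)
    w = np.exp(-0.375 * (theta - x) ** 2)   # f(x|theta)/h(theta) up to a constant
    wn = w / w.sum()
    a_m = np.sum(wn * theta)
    p_m = np.sqrt(np.sum(wn ** 2 * (theta - a_m) ** 2))  # weighted standard error
    return a_m, p_m

x, m0, t, tol = 1.5, 1_000, 1_000, 0.02
m = m0
a_m, p_m = estimate_with_precision(m, x)
while p_m >= tol:       # stop once the precision estimate is small enough;
    m += t              # otherwise enlarge the Monte Carlo sample size
    a_m, p_m = estimate_with_precision(m, x)
```

A real implementation would extend the existing sample rather than redraw from scratch at each step; the stopping structure of the loop is the point here.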

We discuss two accuracy measures of the Monte Carlo approximations and study them in an illustrative example. The variances of the asymptotic distributions of a_m(ω) and r_m(ω) can be used as accuracy measures. Since, in general, Σ in (2.4) and σ² in (2.5) do not have closed forms, they have to be approximated by the Monte Carlo approximations

V_a = V_2^{-1} V_1 V_2^{-1}

and

V_r = m^{-1} Σ_{i=1}^m [L(θ_i, a_m(ω)) - r_m(ω)]² w²(θ_i)/[M_m(ω)]²,

respectively, where

V_1 = m^{-1} Σ_{i=1}^m [L'(θ_i, a_m(ω))][L'(θ_i, a_m(ω))]^T w²(θ_i)

and

V_2 = m^{-1} Σ_{i=1}^m L''(θ_i, a_m(ω)) w(θ_i).

The estimated asymptotic variances V_a/m and V_r/m can then be used as accuracy measures of a_m(ω) and r_m(ω), respectively. The following theorem proves the almost sure convergence of these approximations.

Theorem 4. Assume the conditions in Theorem 3. If, in addition, there is a measurable function u(θ) such that

sup{||L'(θ, a)||² : a ∈ N(c)} ≤ u(θ)

and

∫_Θ u(θ) w²(θ) h(θ) dμ < ∞,   (3.1)

then V_a → Σ a.s. and V_r → σ² a.s.

Proof. The results follow from the convergence of a_m and Theorem 2 of Jennrich (1969).
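As a concrete illustration of V_1, V_2, and V_a, consider squared error loss in one dimension, where L'(θ, a) = 2(a - θ) and L''(θ, a) = 2. The normal model, flat prior, and h = N(x, 2) below are assumptions for the sketch; the weights are normalized by their sample mean, which plays the role of dividing by the marginal estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: X ~ N(theta, 1), flat prior, observed x; h = N(x, 2).
x, m = 0.0, 100_000
theta = rng.normal(x, 2.0, size=m)
w = np.exp(-0.375 * (theta - x) ** 2)   # f(x|theta)pi(theta)/h(theta) up to a constant
w = w / np.mean(w)                      # normalize (absorbs the marginal estimate)

a_m = np.mean(w * theta)                # Monte Carlo posterior mean

# Sandwich estimate for L(theta, a) = (theta - a)^2: L' = 2(a - theta), L'' = 2.
V1 = np.mean((2.0 * (a_m - theta)) ** 2 * w ** 2)
V2 = np.mean(2.0 * w)                   # equals 2 exactly, by the normalization
Va = V1 / V2 ** 2
se_a = np.sqrt(Va / m)                  # accuracy measure for a_m
```

In this setup the posterior is N(0, 1), so a_m should be near 0 with a Monte Carlo standard error of a few thousandths.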

In some situations [e.g., A is a finite set or L(θ, a) is not differentiable], the previous asymptotic variance approach does not apply. Another method for obtaining accuracy measures of a_m(ω) and r_m(ω) is as follows: We repeat the calculation of the Bayesian action (with independent sets of random θ's) and compute the sample variance. Suppose that m = gk and we generate (from h) g independent sets of random variables ω_j = (θ_{1j}, ..., θ_{kj}) (j = 1, ..., g). Calculate a_k(ω_j), ρ_k(ω_j), and r_k(ω_j) according to (1.6) (j = 1, ..., g), and the sample variances

S^a_{gk} = (g - 1)^{-1} Σ_{j=1}^g [a_k(ω_j) - ā_m][a_k(ω_j) - ā_m]^T,  ā_m = g^{-1} Σ_{j=1}^g a_k(ω_j),   (3.2)

and

S^r_{gk} = (g - 1)^{-1} Σ_{j=1}^g [r_k(ω_j) - r̄_m]²,  r̄_m = g^{-1} Σ_{j=1}^g r_k(ω_j).

S^a_{gk}/g and S^r_{gk}/g are then used as accuracy measures of a_m(ω) and r_m(ω), respectively.
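A minimal sketch of this replication scheme follows; the toy model and the helper a_k (a self-normalized importance-sampling posterior mean) are assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def a_k(x, k):
    """One replicate: posterior-mean approximation from k fresh draws.
    Toy setup: X ~ N(theta, 1), flat prior, h = N(x, 2)."""
    theta = rng.normal(x, 2.0, size=k)
    w = np.exp(-0.375 * (theta - x) ** 2)   # likelihood/importance ratio up to a constant
    return np.sum(w * theta) / np.sum(w)

x, g, k = 1.0, 10, 5_000                    # total sample size m = g * k
reps = np.array([a_k(x, k) for _ in range(g)])
a_bar = reps.mean()                         # pooled estimate, as in (3.2)
S_a = reps.var(ddof=1)                      # sample variance over the g replicates
se_a = np.sqrt(S_a / g)                     # accuracy measure S_a/g, on the sd scale
```

Each replicate requires a complete recomputation of the action, which is why the text warns that this method can be expensive when finding a_m involves a difficult minimization.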

The motivation for the use of independent samples for assessing the accuracy of a_m(ω) and r_m(ω) is intuitively obvious. Unlike the asymptotic variance approach, this method does not require Conditions B.1-B.5 and (3.1). But it may require more computation if the computation of a_m(ω) involves a difficult minimization problem, since it needs to solve the same minimization problem g times.

Usually, k should be chosen to be large and g may be chosen to be small or moderate (for computational saving). For example, if m = 10,000, we may choose g = 10 (k = 1,000). Unless both g and k are large, the accuracy estimates obtained by using the asymptotic variance approach and independent samples may be different (see Ex. 1).

When independent samples are used, to reduce the computation, we may use ā_m defined in (3.2) as the Monte Carlo approximation to a*. This is especially preferred if the computation of a_m is expensive. Although a_m and ā_m are generally different, the difference is usually inappreciable for large k.

We study the accuracy estimates V_a and S^a_{gk} in an illustrative example where the exact values of a* and the asymptotic variance Σ can be obtained and, therefore, we can assess the Monte Carlo approximations.


Table 1. Monte Carlo Approximations to μ_x and σ (Example 1)

                 x = 1.5              x = .5               x = -.5
                 μ_x = 1.6388         μ_x = 1.0091         μ_x = .6412
                 σ = .015230          σ = .009418          σ = .006580

   j        a_m       v_m         a_m       v_m         a_m       v_m
   1      1.6540   .015389      1.0063   .009581      .6310   .006559
   2      1.6307   .014763      1.0271   .009275      .6602   .006682
   3      1.6149   .015108      1.0033   .009359      .6409   .006526
   4      1.6339   .015107      1.0118   .009407      .6443   .006576
   5      1.6325   .015632       .9986   .009285      .6421   .006471
   6      1.6573   .015331      1.0114   .009530      .6378   .006605
   7      1.6915   .015439      1.0219   .009605      .6382   .006653
   8      1.6595   .015293      1.0173   .009413      .6483   .006598
   9      1.6186   .015091      1.0070   .009333      .6430   .006584
  10      1.6555   .015415      1.0054   .009512      .6330   .006595

Mean      1.6448   .015257      1.0110   .009430      .6419   .006585

          s_m = .021902         s_m = .008367         s_m = .007819

NOTE: h(θ) = e^{-θ} I_{(0,∞)}(θ) and m = 5,000.

Example 1 (Berger 1985, sec. 4.3). Let X ~ N(θ, 1), where θ is a measure of some positive quantity. In this case, it is difficult to estimate θ by using the classical approach. For example, the maximum likelihood estimate of θ is max(x, 0), which is unsuitable if the observation x < 0. Under the noninformative prior π(θ) = I_{(0,∞)}(θ), the posterior mean of θ is μ_x = x + φ(x)/Φ(x), where Φ(x) is the standard normal distribution function and φ(x) = Φ'(x). In this case, no Monte Carlo approximation for μ_x is needed. To see the accuracy of the Monte Carlo approximations, 10 independent sets (each of size m = 5,000) of random variables are generated from the density h(θ) = e^{-θ} I_{(0,∞)}(θ), and the Monte Carlo approximations a_m and v_m = (V_a/m)^{1/2} are calculated and reported in Table 1 for x = 1.5, .5, and -.5. The exact values of the posterior mean μ_x and the asymptotic standard deviation σ = (Σ/m)^{1/2} are included. Both a_m and v_m are accurate.

For comparison, s_m = (S^a_{gk}/g)^{1/2} (g = 10, k = 5,000) is also included in Table 1. In all three cases, s_m is quite different from v_m (or σ), even though k = m = 5,000. This is not a surprise, since g = 10 is small.
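Example 1 is easy to reproduce. The sketch below follows the stated setup, h(θ) = e^{-θ} on (0, ∞) with m = 5,000, and compares the self-normalized importance-sampling posterior mean with the exact value μ_x = x + φ(x)/Φ(x); the code itself is an illustration, not the author's program.

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(5)

def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

m, results = 5_000, {}
for x in (1.5, 0.5, -0.5):
    theta = rng.exponential(1.0, size=m)          # draws from h(theta) = e^{-theta}, theta > 0
    w = np.exp(-0.5 * (x - theta) ** 2 + theta)   # f(x|theta)pi(theta)/h(theta), up to a constant
    a_m = np.sum(w * theta) / np.sum(w)           # Monte Carlo posterior mean
    mu_x = x + exp(-0.5 * x * x) / (sqrt(2.0 * pi) * Phi(x))  # exact posterior mean
    results[x] = (a_m, mu_x)
```

The exact values agree with the table header (1.6388, 1.0091, .6412), and each a_m should fall within a few hundredths of its μ_x, as in Table 1.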

Remark 1. When the loss function is differentiable, accuracy estimates based on asymptotic variances are recommended. The importance function h(θ) should be chosen so that Conditions (2.5) and (3.1) are satisfied.

Remark 2. When the asymptotic variance approach cannot be applied, one may use independent samples to estimate the accuracy of the Monte Carlo approximations.

Remark 3. In some situations, it may be appropriate to focus on relative accuracy measures such as v_a/|a_m| and s_a/|a_m|, where v_a = (V_a/m)^{1/2} and s_a = (S^a_{gk}/g)^{1/2} are asymptotic standard deviations.

4. NUMERICAL IMPLEMENTATION

In some situations, the minimization problem in Step 3 can be solved analytically, for example, if a* is a posterior moment or quantile. There are situations, however, where a_m(ω) can only be obtained numerically. There is a large body of literature on optimization and nonlinear programming, which provides many efficient algorithms for solving min{ρ_m(a, ω) : a ∈ A}. See Avriel (1976) and Rao (1984), which also provide many other references.

As was discussed in Section 1, the same random {θ_i} should be used for approximating ρ(a). If storage is not a problem, we may reduce computations by storing {w(θ_i)} and computing ρ_m(a, ω) = m^{-1} Σ_{i=1}^m L(θ_i, a) w(θ_i) for all needed a, where w(θ) is defined in (1.5).
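The storage device just described can be sketched as follows: the draws and weights are computed once, and each subsequent evaluation of ρ_m(a, ω) is a single weighted average. The model, the choice of h, and the absolute error loss (whose minimizer approximates the posterior median) are assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# One-time cost: draw {theta_i} from h and store the weights {w(theta_i)}.
x, m = 0.5, 20_000
theta = rng.normal(x, 2.0, size=m)        # h = N(x, 2), an assumed choice
w = np.exp(-0.375 * (theta - x) ** 2)     # f(x|theta)/h(theta) up to a constant
w = w / np.mean(w)                        # normalize once

def rho_m(a):
    """Cheap per-action evaluation reusing the stored draws and weights."""
    return np.mean(np.abs(theta - a) * w)     # absolute error loss

grid = np.linspace(-1.0, 2.0, 61)
a_m = grid[np.argmin([rho_m(a) for a in grid])]   # approximates the posterior median
```

With X ~ N(θ, 1) and a flat prior, the posterior median is x = 0.5, so the grid minimizer should land nearby.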

needed a, wherew(O) is defined n (1.5).Following, n example s used to llustratehenumerical

implementation f theMonte Carlo method n practicalproblems.Otherexamples can be found n Shao (1988).

Example 2. A company has developed a new type of product and must decide the number of the new products (denoted by a) to produce based on a marketing survey. Let θ be the unknown proportion of customers who will favor the new product.

1. Likelihood function. If a sample of n customers is interviewed and x of them indicated that they favor the new product, then the likelihood function is (n choose x) θ^x (1 - θ)^{n-x}. To illustrate with numerical results, let us suppose that n = 36 and x = 11 was observed.

2. Loss function. Using the method introduced in Becker, DeGroot, and Marschak (1964), DeGroot (1970, chap. 7), and Berger (1985, chap. 2), I obtained the company's loss function

L(θ, a) = -2,500,000ca + l,                                      a ≤ cθ/2,
        = 10,562,500θ^{-1}(8a/13 - cθ/2)² - 1,640,625c²θ + l,    cθ/2 ≤ a ≤ cθ,
        = 1,500,000c(a - 2cθ) + l,                               a ≥ cθ,

where l = 1,640,625c². Here, θ ∈ Θ = (0, 1) and a ∈ A = {all of the integers ≤ c}. For illustration, c = 2 × 10^5 is assumed.

3. Prior distribution. It is felt that .2 < θ < .5 is twice as likely as θ ≤ .2 or θ ≥ .5, and there is no other information about θ available. Thus I used the vague prior

π(θ) = (20/9) I_{(.2,.5)}(θ) + (10/21)[1 - I_{(.2,.5)}(θ)].

Neither the posterior expected loss nor its Monte Carlo approximation has a closed form in this problem. Note that A contains finitely many actions. The number of actions is so large, however, that it is too expensive to evaluate ρ_m(a, ω) for all actions. Since L(θ, a) is a convex function of a, I used the golden section method (see Avriel 1976) with 24 evaluations of ρ_m(a, ω) to find the Monte Carlo approximation to the Bayesian action a*.
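A golden section search over an integer action set can be sketched generically as below. Since the loss function above is specific to the example, a stand-in convex function with a minimum placed arbitrarily near 51,000 replaces ρ_m(a, ω); only the search routine, budgeted at 24 function evaluations as in the text, is the point.

```python
import math

def golden_section_min(f, lo, hi, evals=24):
    """Golden-section search for a unimodal f over the integers in [lo, hi],
    using a fixed budget of function evaluations."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = float(lo), float(hi)
    c1, c2 = b - phi * (b - a), a + phi * (b - a)
    f1, f2 = f(round(c1)), f(round(c2))
    for _ in range(evals - 2):
        if f1 <= f2:                      # minimum lies in [a, c2]
            b, c2, f2 = c2, c1, f1
            c1 = b - phi * (b - a)
            f1 = f(round(c1))
        else:                             # minimum lies in [c1, b]
            a, c1, f1 = c1, c2, f2
            c2 = a + phi * (b - a)
            f2 = f(round(c2))
    return round(c1) if f1 <= f2 else round(c2)

# Hypothetical stand-in for rho_m(a, omega): convex in a, minimized near 51,000.
evals_used = {"n": 0}
def rho_stub(a):
    evals_used["n"] += 1
    return (a - 51_000) ** 2

a_star = golden_section_min(rho_stub, 0, 200_000)
```

Each iteration shrinks the bracket by the golden ratio, so 24 evaluations reduce an interval of length 2 × 10^5 to a handful of candidate integers.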

To assess the accuracy of the Monte Carlo approximations, I used independent samples (Sec. 3), since the

Table 2. Results From Four Iterations (Example 2)

  j       m       a_m      s_a     c_a       r_m       s_r     c_r
  0    10,000   51,137   75.15   .0015   460,042   211.39   .0005
  1    20,000   51,195   74.87   .0014   460,068   220.82   .0005
  2    30,000   51,195   60.35   .0012   460,037   169.24   .0004
  3    40,000   51,223   40.42   .0008   459,861   122.52   .0003


asymptotic variance approach is not applicable to this case. The following iterative method was used for selecting the Monte Carlo sample size: a_{m_j} and r_{m_j} were computed for m_j = (k_0 + jt)g (j = 0, 1, 2, ...), with step size t = 1,000, g = 10, and initial k_0 = 1,000. At each iteration, the relative accuracy measures c_a = s_a/|ā_m| and c_r = s_r/r̄_m were computed, where s_a = (S^a_{gk}/g)^{1/2} and s_r = (S^r_{gk}/g)^{1/2}. The iteration was terminated when both c_a and c_r ≤ .001.

The importance function was chosen to be proportional to the likelihood function. The random θ_i's were generated by using the IMSL subroutine GGBTR. The computation was done (after four iterations, with CPU time 1.96 minutes) on a VAX 11/780 (Unix 4.3BSD) at Purdue University.

The selected Monte Carlo sample size is m = 40,000, and the resulting Monte Carlo approximations to a* and r(a*) are 51,223 and 459,861.09, respectively. Table 2 shows the accuracy measures and the results from the four iterations.

[Received April 1988. Revised February 1989.]

REFERENCES

Avriel, M. (1976), Nonlinear Programming: Analysis and Methods, Englewood Cliffs, NJ: Prentice-Hall.

Becker, G. M., DeGroot, M. H., and Marschak, J. (1964), "Measuring Utility by a Single-Response Sequential Method," Behavioral Science, 9, 226-232.

Berger, J. O. (1985), Statistical Decision Theory and Bayesian Analysis (2nd ed.), New York: Springer-Verlag.

DeGroot, M. H. (1970), Optimal Statistical Decisions, New York: McGraw-Hill.

Geweke, J. (1986), "Exact Inference in the Inequality Constrained Normal Linear Regression Model," Journal of Applied Econometrics, 1, 127-142.

Geweke, J. (1988a), "Antithetic Acceleration of Monte Carlo Integration in Bayesian Inference," Journal of Econometrics, 38, 73-90.

Geweke, J. (1988b), "Bayesian Inference in Econometric Models Using Monte Carlo Integration," unpublished manuscript submitted to Econometrica.

Jennrich, R. I. (1969), "Asymptotic Properties of Non-linear Least Squares Estimators," The Annals of Mathematical Statistics, 40, 633-643.

Kloek, T., and Van Dijk, H. K. (1978), "Bayesian Estimates of Equation System Parameters: An Application of Integration by Monte Carlo," Econometrica, 46, 1-19.

Rao, S. S. (1984), Optimization: Theory and Applications (2nd ed.), New York: John Wiley.

Shao, J. (1988), "Monte Carlo Approximations in Bayesian Decision Theory" (I, II, and III), Technical Reports 88-39, 64, and 65, Purdue University, Statistics Dept.

Stewart, L. (1979), "Multiparameter Univariate Bayesian Analysis," Journal of the American Statistical Association, 74, 684-693.

Stewart, L. (1983), "Bayesian Analysis Using Monte Carlo Integration - A Powerful Methodology for Handling Some Difficult Problems," The Statistician, 32, 195-200.
